Following up on my previous post, [TensorFlow Lite] My first try with TensorFlow Lite, I was curious about how TensorFlow Lite performs on different platforms, so I ran a performance experiment.
I used an x86 server (no GPU enabled) and an Nvidia TX2 board (which is ARM based) for the experiment. Both ran TFLite's official example with the Inception_v3 model, and the average inference times are listed in the table below.
Hardware Information:
Nvidia TX2: 4 CPU cores and 1 GPU (the GPU is not used in this test)
x86 server: 24 CPU cores and no GPU enabled
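For reference, here is a minimal sketch of how such a per-inference timing loop could look with the TFLite Python interpreter. The model file name is a placeholder, and the actual numbers in my experiment came from TFLite's official example, so this is only meant to show the idea of the measurement:

```python
import time

import numpy as np
import tensorflow as tf

# Hypothetical model file name; point this at the .tflite file under test.
interpreter = tf.lite.Interpreter(model_path="inception_v3.tflite")
interpreter.allocate_tensors()

input_detail = interpreter.get_input_details()[0]

# Feed random data matching the model's expected input shape and dtype
# (good enough for timing, though not for accuracy).
dummy = np.random.random_sample(input_detail["shape"]).astype(input_detail["dtype"])
interpreter.set_tensor(input_detail["index"], dummy)

# One warm-up run, then average the time over repeated invocations.
interpreter.invoke()
runs = 50
start = time.time()
for _ in range(runs):
    interpreter.invoke()
avg_ms = (time.time() - start) / runs * 1000
print("average inference time: %.1f ms" % avg_ms)
```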
The performance result:
In summary:
Unsurprisingly, the platform with more CPU cores gets better performance.
The Nvidia TX2 runs the example faster with INT8 quantization than without it. Strangely, the x86 server gets worse performance with quantization instead; a plausible reason is that TFLite's quantized kernels were tuned mainly for ARM at the time, while the float path on x86 is already well optimized.
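Since the comparison hinges on the INT8-quantized model, here is a minimal sketch of how a post-training quantized TFLite model can be produced. The SavedModel path is hypothetical, and the exact converter options depend on your TensorFlow version:

```python
import tensorflow as tf

# Hypothetical path; point this at your own Inception_v3 SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("inception_v3_saved_model")

# Request post-training quantization (weights stored as INT8).
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("inception_v3_quant.tflite", "wb") as f:
    f.write(tflite_model)
```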