This is my first try with the label_image example (tensorflow/contrib/lite/examples/label_image) in TensorFlow Lite; below I write down the commands that I used.
There is plenty of information in the official TensorFlow Lite guide:
https://www.tensorflow.org/lite/guide
1. Convert the example model to the TFLite format
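If the frozen Inception V3 graph is not already on disk, it can be downloaded first into the label_image data directory; a sketch, assuming the download URL from the label_image docs is still live:

curl -L "https://storage.googleapis.com/download.tensorflow.org/models/inception_v3_2016_08_28_frozen.pb.tar.gz" \
  | tar -C tensorflow/examples/label_image/data -xz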
tflite_convert --graph_def_file ./inception_v3_2016_08_28_frozen.pb \
--output_file ./inception_v3.tflite \
--input_arrays=input \
--output_arrays=InceptionV3/Predictions/Reshape_1
or, building and running toco directly:
export INPUT_FILE="tensorflow/examples/label_image/data/inception_v3_2016_08_28_frozen.pb"
export OUTPUT_FILE="tensorflow/examples/label_image/data/inception_v3.tflite"
export INPUT_SHAPE="1,299,299,3"
export INPUT_ARRAY=input
export OUTPUT_ARRAY=InceptionV3/Predictions/Reshape_1
#export STD_VALUE=127.5
#export MEAN_VALUE=127.5
bazel build tensorflow/contrib/lite/toco:toco && \
./bazel-bin/tensorflow/contrib/lite/toco/toco \
--input_file=${INPUT_FILE} \
--output_file=${OUTPUT_FILE} \
--input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
--input_shape=${INPUT_SHAPE} \
--input_array=${INPUT_ARRAY} \
--output_array=${OUTPUT_ARRAY} \
--inference_type=FLOAT
#--inference_type=QUANTIZED_UINT8
#--std_value=${STD_VALUE} --mean_value=${MEAN_VALUE}
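Both commands need the graph's input and output array names. For a different model, where these are not known in advance, TensorFlow's summarize_graph tool can print them; a minimal sketch, assuming a configured bazel workspace:

bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
  --in_graph=tensorflow/examples/label_image/data/inception_v3_2016_08_28_frozen.pb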
P.S.: I could not get inference_type=QUANTIZED_UINT8 to work because of two issues with this file: inception_v3_2016_08_28_frozen.pb. See: https://stackoverflow.com/questions/47463204/tensorflow-lite-convert-error-for-the-quantized-graphdef
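If a quantized model is still wanted from this float graph, toco also accepts --default_ranges_min/--default_ranges_max to "dummy-quantize" with one global range; a hedged sketch (the 0..6 range and the 127.5 std/mean values are guesses taken from the commented flags above, and accuracy will suffer because the real per-tensor ranges are unknown):

./bazel-bin/tensorflow/contrib/lite/toco/toco \
  --input_file=${INPUT_FILE} \
  --output_file=./inception_v3_quant.tflite \
  --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
  --input_shape=${INPUT_SHAPE} \
  --input_array=${INPUT_ARRAY} \
  --output_array=${OUTPUT_ARRAY} \
  --inference_type=QUANTIZED_UINT8 \
  --std_value=127.5 --mean_value=127.5 \
  --default_ranges_min=0 --default_ranges_max=6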
2. Build the Lite example from tensorflow/contrib/lite/examples/label_image
bazel build --config=opt //tensorflow/contrib/lite/examples/label_image --cxxopt="-std=c++11" --copt="-O3"
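The command above produces a binary for the host machine. To get an ARM build for an Android phone, bazel can cross-compile, assuming the Android NDK/SDK have been set up in the workspace (e.g. via ./configure); a sketch that may need adjusting per TensorFlow version:

bazel build --config=android_arm --cxxopt="-std=c++11" --copt="-O3" \
  //tensorflow/contrib/lite/examples/label_image:label_image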
3. Run the example (for this step, it should really be run on an ARM-based device such as an Android phone, a Raspberry Pi, or an iOS device)
bazel-bin/tensorflow/contrib/lite/examples/label_image/label_image \
-m tensorflow/examples/label_image/data/inception_v3.tflite \
-l tensorflow/examples/label_image/data/imagenet_slim_labels.txt \
-i tensorflow/contrib/lite/examples/label_image/testdata/grace_hopper.bmp \
-c 100 \
-t 20
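Here -m, -l and -i point at the model, the label file and the input image, -c is the number of times to invoke the interpreter (the reported time is averaged over these runs), and -t is the thread count. To try the same thing on an Android phone, the binary and data can be pushed over adb; a sketch, assuming an ARM build from step 2 and the usual /data/local/tmp scratch directory:

adb push bazel-bin/tensorflow/contrib/lite/examples/label_image/label_image /data/local/tmp
adb push tensorflow/examples/label_image/data/inception_v3.tflite /data/local/tmp
adb push tensorflow/examples/label_image/data/imagenet_slim_labels.txt /data/local/tmp
adb push tensorflow/contrib/lite/examples/label_image/testdata/grace_hopper.bmp /data/local/tmp
adb shell "cd /data/local/tmp && ./label_image -m inception_v3.tflite -l imagenet_slim_labels.txt -i grace_hopper.bmp"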
Result:
resolved reporter
invoked
average time: 72.7091 ms
0.699143: 653 military uniform
0.0383277: 668 mortarboard
0.0223822: 401 academic gown
0.0187004: 835 suit
0.0142861: 458 bow tie
There are some performance test results on ARM-based boards at:
https://www.twblogs.net/a/5b7c96622b71770a43dbb287
Optional: Use the visualization tool, Visualize.
The result is something like the following picture:
bazel build --config opt //tensorflow/contrib/lite/tools:visualize
bazel-bin/tensorflow/contrib/lite/examples/label_image/label_image \
-m ~/git/tensorflow_danny/tensorflow/examples/label_image/data/inception_v3.tflite \
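Once built, the visualize tool takes a .tflite file and an output HTML path, and writes a page describing the model's tensors and operators; a sketch, assuming the two-argument form:

bazel-bin/tensorflow/contrib/lite/tools/visualize \
  tensorflow/examples/label_image/data/inception_v3.tflite \
  inception_v3_visualized.html

Then open inception_v3_visualized.html in a browser.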
More information on building TensorFlow Lite for ARM-based OSes:
http://www.guodongkeji.com/4g-show-25-2774-1.html