Showing posts with label ONNX.

Tuesday, August 21, 2018

[ONNX] Train in Tensorflow and export to ONNX (Part II)

If you have read the previous post (linked below), you may ask a question: if the input TensorFlow graph for freezing is not in binary format, what do we do?
http://danny270degree.blogspot.com/2018/08/onnx-train-in-tensorflow-and-export-to.html

Let us recall the previous example below. The file "graph.proto" is the binary protobuf serialization of the TensorFlow graph, generated by the following code:
  with open("graph.proto", "wb") as file:
    # Serialize the default graph (with shape info) in binary protobuf format
    graph = tf.get_default_graph().as_graph_def(add_shapes=True)
    file.write(graph.SerializeToString())
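If the saved graph turns out to be in protobuf text format instead of binary, one answer is to parse it with protobuf's text_format module and re-serialize it. The sketch below is only an illustration under that assumption: the file names and the helper pbtxt_to_binary are my own, and it uses tf.compat.v1.GraphDef (just tf.GraphDef in TF 1.x) so it runs on newer TensorFlow as well.

```python
import tensorflow as tf
from google.protobuf import text_format

def pbtxt_to_binary(text_path, binary_path):
    # tf.compat.v1.GraphDef is tf.GraphDef in TF 1.x
    graph_def = tf.compat.v1.GraphDef()
    with open(text_path, "r") as f:
        # Parse the human-readable text-format GraphDef into a protobuf message
        text_format.Merge(f.read(), graph_def)
    with open(binary_path, "wb") as f:
        # Re-serialize it as the binary wire format the freezing step expects
        f.write(graph_def.SerializeToString())
    return graph_def
```

After this conversion, the resulting binary file can be fed to the freezing flow exactly as in the previous post.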

Wednesday, August 8, 2018

[ONNX] Use ONNX_TF and nGraph_ONNX to do inference/prediction with ONNX model


Here I use a pre-trained model from the ONNX model zoo, where the models have already been converted from various deep learning frameworks. So I download the ResNet50 model from the following URL and untar it:

wget https://s3.amazonaws.com/download.onnx/models/opset_8/resnet50.tar.gz
tar -xzvf resnet50.tar.gz 
P.S.: more pre-trained ONNX models: https://github.com/onnx/models

Then, I can do the inference/prediction using this ONNX model in two ways:

[ONNX] Train in Tensorflow and export to ONNX (Part I)

From my point of view, ONNX is a model description spec, and an ONNX model needs a deep learning framework or a backend tool/compiler that supports it in order to run.
The advantage of ONNX, as far as I know, is portability: models can be exchanged between DL frameworks.
Here I will follow this tutorial to convert a TensorFlow model to an ONNX model by myself:

https://github.com/onnx/tutorials/blob/master/tutorials/OnnxTensorflowExport.ipynb