When I first tried to deploy a TVM module using the C++ API, I found the official page Deploy TVM Module using C++ API, which gives a fine example of the deployment itself but doesn't explain how to generate the related files and modules.
So, after several rounds of trial and error, I figured out how to generate the required files for deployment.
Basically, you can refer to the TVM tutorials on compiling models:
https://docs.tvm.ai/tutorials/index.html
No matter which model you compile and which frontend parser you use (NNVM or Relay), find the place where the model is compiled and note the outputs, as follows:
from tvm import relay

# target x86 CPU
target = "llvm"
# func and params come from the frontend parser (e.g. relay.frontend.from_mxnet)
with relay.build_module.build_config(opt_level=3):
    graph, lib, params = relay.build(func, target, params=params)
With these output variables (graph, lib, params) we can then generate the shared library, the graph JSON file, and the weight data file:

lib.export_library("./deploy_lib.so")
print('lib exported successfully')
with open("./deploy_graph.json", "w") as fo:
    fo.write(graph)
with open("./deploy_param.params", "wb") as fo:
    fo.write(relay.save_param_dict(params))
Once you have the following files, you are truly able to deploy them to your C++ code:
├── deploy_graph.json
├── deploy_lib.so
└── deploy_param.params
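
With the three files in place, the C++ side boils down to loading the shared library, the graph JSON, and the params blob, and handing them to the graph runtime. Below is a minimal sketch following the pattern used in the insightface tutorial and the forum thread linked in the references; the input name "data", the 1x3x224x224 input shape, and the 1x1000 output shape are assumptions for a typical image classifier, so substitute your own model's values.

#include <dlpack/dlpack.h>
#include <tvm/runtime/c_runtime_api.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/packed_func.h>
#include <tvm/runtime/registry.h>

#include <fstream>
#include <iterator>
#include <string>

int main() {
  // 1. Load the compiled operator library.
  tvm::runtime::Module mod_dylib =
      tvm::runtime::Module::LoadFromFile("deploy_lib.so");

  // 2. Read the graph JSON into a string.
  std::ifstream json_in("deploy_graph.json");
  std::string json_data((std::istreambuf_iterator<char>(json_in)),
                        std::istreambuf_iterator<char>());

  // 3. Read the serialized weights into a byte array.
  std::ifstream params_in("deploy_param.params", std::ios::binary);
  std::string params_data((std::istreambuf_iterator<char>(params_in)),
                          std::istreambuf_iterator<char>());
  TVMByteArray params_arr;
  params_arr.data = params_data.c_str();
  params_arr.size = params_data.length();

  // 4. Create a graph runtime on CPU (matching target = "llvm" above).
  int device_type = kDLCPU;
  int device_id = 0;
  tvm::runtime::Module mod =
      (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(
          json_data, mod_dylib, device_type, device_id);

  // 5. Upload the weights.
  mod.GetFunction("load_params")(params_arr);

  // 6. Allocate input/output tensors; these shapes are assumptions.
  DLTensor* x;
  DLTensor* y;
  int64_t in_shape[4] = {1, 3, 224, 224};
  int64_t out_shape[2] = {1, 1000};
  TVMArrayAlloc(in_shape, 4, kDLFloat, 32, 1, device_type, device_id, &x);
  TVMArrayAlloc(out_shape, 2, kDLFloat, 32, 1, device_type, device_id, &y);
  // ... fill static_cast<float*>(x->data) with your input here ...

  // 7. Run inference and copy out the result.
  mod.GetFunction("set_input")("data", x);
  mod.GetFunction("run")();
  mod.GetFunction("get_output")(0, y);

  TVMArrayFree(x);
  TVMArrayFree(y);
  return 0;
}

Compile this against the TVM runtime headers and link it with libtvm_runtime.so, as in the official deployment example.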
References:
Deploy NNVM Modules
https://docs.tvm.ai/deploy/nnvm.html
Advanced example:
Tutorial: Deploy Face Recognition Model via TVM
https://github.com/deepinsight/insightface/wiki/Tutorial:-Deploy-Face-Recognition-Model-via-TVM
TVM inference C++ issue
https://discuss.tvm.ai/t/solved-c-inference-test-app-doesnt-work-correctly/984/8