Wednesday, June 19, 2019

[DDS] Install OpenSplice Community Version

What is DDS? The Data Distribution Service (DDS™) is a middleware protocol and API standard for data-centric connectivity from the Object Management Group® (OMG®). It integrates the components of a system together, providing the low-latency data connectivity, extreme reliability, and scalable architecture that business and mission-critical Internet of Things (IoT) applications need. For more information, see: https://www.dds-foundation.org/what-is-dds-3/
https://zhuanlan.zhihu.com/p/32278571

In this post, I install OpenSplice as my DDS runtime environment and library. You can download it here: https://www.adlinktech.com/en/dds-community-software-evaluation.aspx
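After downloading, the install is basically "extract and source the environment script". A minimal sketch, assuming the HDE (Host Development Environment) package for 64-bit Linux; the exact file and directory names vary by release:

#extract the package (file name is an assumption; use the one you downloaded)
$ tar xzf VortexOpenSplice-<version>-HDE-x86_64.linux.tar.gz -C $HOME
$ cd $HOME/HDE/x86_64.linux
#source release.com in every shell that builds or runs DDS applications;
#it sets OSPL_HOME, OSPL_URI, PATH and LD_LIBRARY_PATH
$ source release.com
$ echo $OSPL_HOME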

Wednesday, June 12, 2019

[TensorFlow] Build TensorFlow from source with Intel MKL enabled

Based on the document "Intel® Math Kernel Library for Deep Learning Networks: Part 1 – Overview and Installation", here is a quick summary of the steps for building TensorFlow from source with Intel MKL enabled.

For MKL DNN:
# Install Intel MKL & MKL-DNN
$ git clone https://github.com/01org/mkl-dnn.git
$ cd mkl-dnn
# prepare_mkl.sh downloads the prebuilt Intel MKL small libraries
$ cd scripts && ./prepare_mkl.sh && cd ..
$ mkdir -p build && cd build && cmake .. && make -j$(nproc)
# run the unit tests, then install the headers and libraries
$ make test
$ sudo make install
For building TensorFlow:
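This part follows TensorFlow's documented --config=mkl build. A sketch, assuming Bazel is already installed (the generated wheel's file name varies by version):

# Build TensorFlow with Intel MKL support
$ git clone https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ ./configure
$ bazel build --config=mkl -c opt //tensorflow/tools/pip_package:build_pip_package
# Package and install the wheel
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ pip install /tmp/tensorflow_pkg/tensorflow-*.whl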

Monday, June 10, 2019

[Squid] Setup Linux Proxy Server and Client

The following steps install and set up a Linux proxy server (Squid); a client-side sketch follows at the end.

#install squid proxy
$ sudo apt-get install squid

#modify squid.conf
$ sudo vi /etc/squid/squid.conf
  ==> #add your local network segment
  acl mynetwork src 192.168.0.0/24
  ==> #allow "mynetwork" to access the proxy via http
  http_access allow mynetwork

#restart squid...OK
$ sudo service squid restart
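
On the client side, pointing the shell's proxy environment variables at the Squid host is usually enough. A minimal sketch, assuming the server is 192.168.0.1 and Squid's default port 3128:

#point HTTP(S) traffic at the proxy (the server IP is an assumption)
$ export http_proxy="http://192.168.0.1:3128"
$ export https_proxy="http://192.168.0.1:3128"

#quick check that requests go through the proxy
$ curl -I http://www.google.com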

Tuesday, May 28, 2019

[Docker] Using GUI with Docker

Recently I needed to run a GUI application with Docker and have it show up either directly on my desktop operating system or in my SSH terminal client via X11.

Several people have already provided solutions for these cases, so I just list the references and quickly give my examples.
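For a quick taste of the desktop case, the common trick is to share the host's X11 socket and DISPLAY with the container. A minimal sketch (the image and application names are placeholders):

#let local containers talk to the X server
$ xhost +local:docker

#run a GUI app from a container by mounting the X11 socket
$ docker run -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    some-gui-image xclock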

Monday, April 22, 2019

[TVM] Deploy TVM Module using C++ API

When I first dealt with deploying a TVM module using the C++ API, I found the official page Deploy TVM Module using C++ API, which gives a fine example of deployment but doesn't explain how to generate the related files and modules.

So, after several rounds of trial and error, I figured out how to generate the required files for deployment.
Basically, you can refer to the TVM tutorials about compiling models (a sketch of the generation step is below):
https://docs.tvm.ai/tutorials/index.html
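
For reference, here is a minimal sketch of the generation step, assuming a 2019-era TVM Python API (relay.build's return values changed across versions) and the MobileNet workload bundled with TVM's testing utilities. It writes the three files the C++ deploy example expects:

$ python3 - <<'EOF'
from tvm import relay
from tvm.relay import testing

# build the sample MobileNet workload that ships with TVM
mod, params = testing.mobilenet.get_workload(batch_size=1)
graph, lib, params = relay.build(mod, target="llvm", params=params)

lib.export_library("deploy_lib.so")            # compiled operators
with open("deploy_graph.json", "w") as f:      # graph structure
    f.write(graph)
with open("deploy_param.params", "wb") as f:   # weights
    f.write(relay.save_param_dict(params))
EOF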

Monday, April 15, 2019

[Experiment] Compare the inference performance of TensorFlow Lite and TVM

I compare the inference performance of TensorFlow Lite and TVM on my laptop, using the same MobileNet model and the same input size of 224*224.
Both assign two threads to the inference task, and I look at the average inference time each spends.
(P.S.: a setting of 10 threads was given in these 2 cases as well.)
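
For the record, here is how the thread counts can be pinned in the two runtimes. The TVM script name is a placeholder; TVM_NUM_THREADS controls TVM's runtime thread pool, and TFLite's benchmark tool takes --num_threads:

#TVM side (run_tvm_mobilenet.py is a placeholder for your own script)
$ TVM_NUM_THREADS=2 python3 run_tvm_mobilenet.py

#TFLite side, with the matching thread count
$ bazel run //tensorflow/lite/tools/benchmark:benchmark_model -- \
    --graph=mobilenet_v1_1.0_224.tflite --num_threads=2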

Wednesday, April 10, 2019

[TensorFlow Lite] The performance experiment of TensorFlow Lite

Based on my previous post [TensorFlow Lite] My first try with TensorFlow Lite,
I was curious about the performance of TensorFlow Lite on different platforms, so I ran a performance experiment.
An x86 server (no GPU enabled) and an NVIDIA TX2 board (which is ARM based) were used for the experiment. Both run TFLite's official example with the Inception_v3 model, and the average inference times are listed in the following table (a sketch of the benchmark command follows the hardware info).

Hardware information:
NVIDIA TX2: 4 CPU cores and 1 GPU (not used)
x86 server: 24 CPU cores and no GPU enabled
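
For anyone repeating the measurement, a sketch using TFLite's label_image example (the file names are assumptions; -c repeats the inference so an average can be taken):

#build and run TFLite's official label_image example
$ bazel build -c opt //tensorflow/lite/examples/label_image:label_image
$ ./bazel-bin/tensorflow/lite/examples/label_image/label_image \
    -m inception_v3.tflite -i grace_hopper.bmp -l labels.txt -c 50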