Thursday, June 7, 2018

[TX2 研究] My first try on Jetson TX2

I got a Jetson TX2 from a friend several days ago, and it looks like the pictures below. I set it up with Nvidia's installer, JetPack-L4T-3.2 (JetPack-L4T-3.2-linux-x64_b196.run). During the installation I did run into an issue where the IP address could not be set on the TX2, but I resolved it. If anyone else hits this problem, let me know and I will post another article explaining the fix. 

Basically, there is no "nvidia-smi"-style command-line tool on the TX2 for checking the GPU's status; use the following instead:

1. Use deviceQuery to get hardware information
nvidia@tegra-ubuntu:~$ /usr/local/cuda-9.0/bin/cuda-install-samples-9.0.sh .
nvidia@tegra-ubuntu:~$ cd NVIDIA_CUDA-9.0_Samples/1_Utilities/deviceQuery
nvidia@tegra-ubuntu:~/NVIDIA_CUDA-9.0_Samples/1_Utilities/deviceQuery$ make
nvidia@tegra-ubuntu:~/NVIDIA_CUDA-9.0_Samples/1_Utilities/deviceQuery$ ./deviceQuery

2. Use tegrastats (found in user nvidia's home directory) to get hardware status.
nvidia@tegra-ubuntu:~$ ./tegrastats
RAM 5974/7854MB (lfb 31x4MB) CPU [0%@1113,0%@345,0%@345,0%@1113,0%@1113,0%@1113] BCPU@49.5C MCPU@49.5C GPU@51.5C PLL@49.5C Tboard@44C Tdiode@52C PMIC@100C thermal@50.5C VDD_IN 11324/11324 VDD_CPU 304/304 VDD_GPU 5642/5642 VDD_SOC 1295/1295 VDD_WIFI 0/0 VDD_DDR 2654/2654
RAM 5974/7854MB (lfb 31x4MB) CPU [14%@345,0%@345,0%@345,11%@345,10%@345,11%@345] BCPU@49.5C MCPU@49.5C GPU@51.5C PLL@49.5C Tboard@44C Tdiode@51.75C PMIC@100C thermal@50.3C VDD_IN 11214/11269 VDD_CPU 305/304 VDD_GPU 5566/5604 VDD_SOC 1295/1295 VDD_WIFI 0/0 VDD_DDR 2615/2634
RAM 5974/7854MB (lfb 31x4MB) CPU [16%@345,0%@345,0%@345,5%@345,7%@345,4%@345] BCPU@49.5C MCPU@49.5C GPU@52C PLL@49.5C Tboard@44C Tdiode@52.25C PMIC@100C thermal@51.3C VDD_IN 12239/11592 VDD_CPU 304/304 VDD_GPU 6250/5819 VDD_SOC 1371/1320 VDD_WIFI 0/0 VDD_DDR 2807/2692
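tegrastats prints one dense line per interval, so for logging or plotting it helps to parse the fields out. A minimal sketch in Python (the `parse_tegrastats` helper is my own, not part of the Jetson tools, and it only extracts a couple of the fields):

```python
import re

def parse_tegrastats(line):
    """Extract RAM usage (MB) and GPU temperature (C) from one tegrastats line."""
    ram = re.search(r"RAM (\d+)/(\d+)MB", line)
    gpu = re.search(r"GPU@([\d.]+)C", line)
    return {
        "ram_used_mb": int(ram.group(1)),
        "ram_total_mb": int(ram.group(2)),
        "gpu_temp_c": float(gpu.group(1)),
    }

# One of the sample lines from above:
sample = ("RAM 5974/7854MB (lfb 31x4MB) CPU [0%@1113,0%@345,0%@345,0%@1113,"
          "0%@1113,0%@1113] BCPU@49.5C MCPU@49.5C GPU@51.5C PLL@49.5C "
          "Tboard@44C Tdiode@52C PMIC@100C thermal@50.5C VDD_IN 11324/11324")
print(parse_tegrastats(sample))
```

In practice you would pipe tegrastats into a script like this and parse each line as it arrives.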

Installing TensorFlow on the TX2 is now very easy, because Nvidia has provided an automated build script here:
nvidia@tegra-ubuntu:~$ git clone https://github.com/JasonAtNvidia/JetsonTFBuild
nvidia@tegra-ubuntu:~$ cd JetsonTFBuild && sudo ./BuildTensorflow.sh
Update: you can also install TensorFlow directly from pre-built Python wheels: https://devtalk.nvidia.com/default/topic/1031300/jetson-tx2/tensorflow-1-8-wheel-with-jetpack-3-2-/

TF-1.11.0rc1 for JetPack 3.3 is now available!
Python 2.7:
https://nvidia.box.com/v/JP33-TF1-11-0-py27-wTRT

Python 3.5:
https://nvidia.box.com/v/JP33-TF1-11-0-py35-wTRT

-----------------------------------------------------
Our previous archive for JetPack 3.2:
Python 2.7:
r1.10.1: https://nvidia.box.com/v/TF1101-Py27-wTRT
r1.10: https://nvidia.app.box.com/v/TF1100-Py27-wTRT
r1.9: https://nvidia.box.com/v/TF190rc0-py27-wTRT
r1.8: https://nvidia.box.com/v/TF180-Py27-wTRT
r1.7: https://nvidia.box.com/v/TF170-py27-wTRT

Python 3.5:
r1.10.1: https://nvidia.box.com/v/TF1101-Py35-wTRT
r1.10: https://nvidia.app.box.com/v/TF1100-Py35-wTRT
r1.9: https://nvidia.box.com/v/TF190rc0-py35-wTRT
r1.8: https://nvidia.box.com/v/TF180-Py35-wTRT
r1.7: https://nvidia.box.com/v/TF170-py35-wTRT


I also ran an experiment comparing training speed against a regular server with a single Nvidia GTX 1080 Ti card. The training script is on my GitHub:
https://github.com/teyenliu/pyutillib/blob/master/mnist_gpu_tx2.py

In my experiment, with the same simple CNN model and the same batch size, the GTX 1080 Ti turned out to be about 11 times faster than the TX2. (This makes sense, since the TX2 is designed for inference jobs at the edge.)
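The comparison boils down to timing a fixed number of training steps on each machine and dividing the resulting throughputs. A rough sketch of that measurement, where the dummy step function below is just a placeholder for the real CNN train step in the script above:

```python
import time

def images_per_sec(step_fn, batch_size, num_steps=100):
    """Time num_steps calls of a training-step function and return throughput."""
    start = time.perf_counter()
    for _ in range(num_steps):
        step_fn()
    elapsed = time.perf_counter() - start
    return num_steps * batch_size / elapsed

def speedup(fast_throughput, slow_throughput):
    """How many times faster the first machine is than the second."""
    return fast_throughput / slow_throughput

# Dummy workload standing in for one session.run() of the CNN train op:
dummy_step = lambda: sum(i * i for i in range(1000))
print(images_per_sec(dummy_step, batch_size=64, num_steps=10))
```

Running the same harness on both machines and calling `speedup()` on the two throughputs gives the ratio reported above.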

The following picture shows the TX2 running the training job. The fan only turns on when the temperature rises above roughly 50°C. 
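You can watch this threshold yourself through the standard Linux thermal sysfs interface, which the TX2's L4T kernel exposes; the values are reported in millidegrees Celsius. A small sketch (zone names vary by board, and the 50°C threshold here is just the approximate value I observed, not a documented constant):

```python
from pathlib import Path

FAN_ON_THRESHOLD_C = 50.0  # approximate temperature at which the TX2 fan spins up

def millideg_to_c(raw):
    """sysfs reports temperatures in millidegrees Celsius."""
    return int(raw) / 1000.0

def read_temps(base="/sys/class/thermal"):
    """Read every available thermal zone as {zone_type: degrees_C}."""
    temps = {}
    for zone in Path(base).glob("thermal_zone*"):
        try:
            raw = (zone / "temp").read_text().strip()
            name = (zone / "type").read_text().strip()
        except OSError:
            continue  # some zones may be unreadable
        temps[name] = millideg_to_c(raw)
    return temps

# e.g. a zone reporting "50500" means 50.5 C, just above the fan threshold:
print(millideg_to_c("50500") > FAN_ON_THRESHOLD_C)  # → True
```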


P.S.:
How to set up WiFi on the TX2
To fix a fresh install of the new JetPack that fails with errors:
https://devtalk.nvidia.com/default/topic/1030999/jetson-tx2/fresh-install-of-the-new-jetpack-with-errors/
  1. sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F60F4B3D7FA2AF80
