This post records how to install a Spark environment on top of the Hadoop setup from the previous post. To run Spark on an Ubuntu machine, Java must be installed first, which is easy with the following commands:
$ sudo apt-get install openjdk-7-jre openjdk-7-jdk
$ dpkg -L openjdk-7-jdk | grep '/bin/javac'
/usr/lib/jvm/java-7-openjdk-amd64/bin/javac
The output shows the JDK path, so we can set the JAVA_HOME environment variable as follows:
$ sudo vim /etc/profile
append this ==> export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
Next, unpack the downloaded Spark release into /usr/local and hand ownership to the hadoop user:
$ sudo tar -zxf ~/Downloads/spark-1.6.0-bin-without-hadoop.tgz -C /usr/local/
$ cd /usr/local
$ sudo mv ./spark-1.6.0-bin-without-hadoop/ ./spark
$ sudo chown -R hadoop:hadoop ./spark
Alternatively, you can install Scala and download a prebuilt Spark package to try the Spark shell directly:
$ sudo apt-get update
$ sudo apt-get install scala
$ wget http://apache.stu.edu.tw/spark/spark-1.6.0/spark-1.6.0-bin-hadoop2.6.tgz
$ tar xvf spark-1.6.0-bin-hadoop2.6.tgz
$ cd spark-1.6.0-bin-hadoop2.6/bin
$ ./spark-shell
For the without-hadoop build installed under /usr/local/spark, point Spark at the Hadoop classpath in spark-env.sh:
$ cd /usr/local/spark
$ cp ./conf/spark-env.sh.template ./conf/spark-env.sh
$ vim ./conf/spark-env.sh
append this ==> export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
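To verify the installation end to end, a minimal PySpark job can be run through spark-submit. This is just a sketch, assuming Spark lives in /usr/local/spark; the file name check_spark.py is illustrative.
# check_spark.py -- minimal sanity check (illustrative name), run with:
#   /usr/local/spark/bin/spark-submit check_spark.py
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("InstallCheck").setMaster("local[2]")
sc = SparkContext(conf=conf)

# Sum 1..100 in parallel; prints 5050 if Spark is working.
total = sc.parallelize(range(1, 101)).reduce(lambda a, b: a + b)
print("sum(1..100) = %d" % total)
sc.stop()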
Monday, May 15, 2017
[picamera] Solving the problem of video display using Raspberry Pi Camera
When I tried to use the Raspberry Pi Camera to display video or still images, I ran into a problem: no image frames appeared and the GUI showed only a black frame on the screen. It took me a while to figure out this issue.
After searching for similar errors on the Internet, I found the problem is related to using picamera library v1.11 with Python 2.7. Downgrading to picamera v1.10 resolved the blank/black frame issue.
The Linux commands are as follows:
$ sudo pip uninstall picamera
$ sudo pip install 'picamera[array]'==1.10
So it seems the most recent version of picamera has some issues that cause problems for both Python 2.7 and Python 3 users.
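As a quick sanity check after the downgrade, a short script like the sketch below can confirm that captured frames are no longer black (the file name and thresholds are just illustrative):
# check_camera.py -- illustrative sanity check; a frozen/black camera
# produces frames with a mean intensity near 0.
import time
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    time.sleep(2)  # give the sensor time to warm up and auto-expose
    with picamera.array.PiRGBArray(camera) as output:
        camera.capture(output, format='bgr')
        frame = output.array
        print("frame shape: %s, mean intensity: %.1f" % (frame.shape, frame.mean()))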
[Kafka] Install and setup Kafka
Kafka is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.
Install and setup Kafka
First, create a dedicated kafka user and switch to it:
$ sudo useradd kafka -m
$ sudo passwd kafka
$ sudo adduser kafka sudo
$ su - kafka
Kafka depends on ZooKeeper, so install the zookeeperd package:
$ sudo apt-get install zookeeperd
To make sure that it is working, connect to it via Telnet:
$ telnet localhost 2181
Type ruok at the prompt; a healthy ZooKeeper answers imok.
$ mkdir -p ~/Downloads
$ wget "http://mirror.cc.columbia.edu/pub/software/apache/kafka/0.8.2.1/kafka_2.11-0.8.2.1.tgz" -O ~/Downloads/kafka.tgz
$ mkdir -p ~/kafka && cd ~/kafka
$ tar -xvzf ~/Downloads/kafka.tgz --strip 1
$ vi ~/kafka/config/server.properties
By default, Kafka doesn't allow you to delete topics. To be able to delete topics, add the following line at the end of the file:
⇒ delete.topic.enable = true
Start Kafka
$ nohup ~/kafka/bin/kafka-server-start.sh ~/kafka/config/server.properties > ~/kafka/kafka.log 2>&1 &
Publish the string "Hello, World" to a topic called TutorialTopic by typing in the following:
$ echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic
$ ~/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic TutorialTopic --from-beginning
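The same round trip can be driven from Python. This is a sketch assuming the third-party kafka-python package (pip install kafka-python), which is not part of the Kafka distribution itself:
# Produce and consume one message on TutorialTopic; assumes a broker
# listening on localhost:9092.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('TutorialTopic', b'Hello, World')
producer.flush()  # block until the message is actually delivered

consumer = KafkaConsumer('TutorialTopic',
                         bootstrap_servers='localhost:9092',
                         auto_offset_reset='earliest',
                         consumer_timeout_ms=5000)  # give up after 5s of silence
for message in consumer:
    print(message.value)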
[InfluxDB] Install and setup InfluxDB
Download the source and install
$ wget https://s3.amazonaws.com/influxdb/influxdb_0.12.1-1_amd64.deb
$ sudo dpkg -i influxdb_0.12.1-1_amd64.deb
Edit influxdb.conf file
$ vim /etc/influxdb/influxdb.conf
Restart InfluxDB
$ sudo service influxdb restart
influxdb process was stopped [ OK ]
Starting the process influxdb [ OK ]
influxdb process was started [ OK ]
$ sudo netstat -naptu | grep LISTEN | grep influxd
tcp6 0 0 :::8083 :::* LISTEN 3558/influxd
tcp6 0 0 :::8086 :::* LISTEN 3558/influxd
tcp6 0 0 :::8088 :::* LISTEN 3558/influxd
Client command tool
$ influx
> show databases
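The HTTP API can also be exercised from Python. The sketch below assumes the third-party influxdb package (pip install influxdb) and the default port 8086; the database name testdb is illustrative:
# Create a database, write one point, and read it back.
from influxdb import InfluxDBClient

client = InfluxDBClient(host='localhost', port=8086)
client.create_database('testdb')
print(client.get_list_database())  # should now include 'testdb'

point = [{"measurement": "cpu", "fields": {"load": 0.64}}]
client.write_points(point, database='testdb')
result = client.query('SELECT * FROM cpu', database='testdb')
print(list(result.get_points()))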
Tuesday, May 9, 2017
[OpenGL] Draw 3D and Texture with BMP image using OpenGL Part I
It has been more than half a year since I posted any article on this blog, which makes me a little embarrassed. To break the silence, I will quickly explain a simple concept: the OpenGL coordinate system.
Before taking an adventure into OpenGL, we have to understand its coordinate system first. Please check out the following graph. As we can see, the positive z-axis points toward us, which is quite different from OpenCV.
If we take a closer look, the following OpenGL code can be explained by the picture below:
glBegin(GL_QUADS) # Start Drawing The Cube
# Front Face (note that the texture's corners have to match the quad's corners)
glTexCoord2f(1.0, 0.0); glVertex3f(-1.0, -1.0, 1.0) # Bottom Left Of The Texture and Quad
glTexCoord2f(1.0, 1.0); glVertex3f( 1.0, -1.0, 1.0) # Bottom Right Of The Texture and Quad
glTexCoord2f(0.0, 1.0); glVertex3f( 1.0, 1.0, 1.0) # Top Right Of The Texture and Quad
glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, 1.0, 1.0) # Top Left Of The Texture and Quad
# Back Face
glTexCoord2f(1.0, 0.0); glVertex3f(-1.0, -1.0, -1.0) # Bottom Right Of The Texture and Quad
glTexCoord2f(1.0, 1.0); glVertex3f(-1.0, 1.0, -1.0) # Top Right Of The Texture and Quad
glTexCoord2f(0.0, 1.0); glVertex3f( 1.0, 1.0, -1.0) # Top Left Of The Texture and Quad
glTexCoord2f(0.0, 0.0); glVertex3f( 1.0, -1.0, -1.0) # Bottom Left Of The Texture and Quad
# Top Face
glTexCoord2f(0.0, 1.0); glVertex3f(-1.0, 1.0, -1.0) # Top Left Of The Texture and Quad
glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, 1.0, 1.0) # Bottom Left Of The Texture and Quad
glTexCoord2f(1.0, 0.0); glVertex3f( 1.0, 1.0, 1.0) # Bottom Right Of The Texture and Quad
glTexCoord2f(1.0, 1.0); glVertex3f( 1.0, 1.0, -1.0) # Top Right Of The Texture and Quad
# Bottom Face
glTexCoord2f(1.0, 1.0); glVertex3f(-1.0, -1.0, -1.0) # Top Right Of The Texture and Quad
glTexCoord2f(0.0, 1.0); glVertex3f( 1.0, -1.0, -1.0) # Top Left Of The Texture and Quad
glTexCoord2f(0.0, 0.0); glVertex3f( 1.0, -1.0, 1.0) # Bottom Left Of The Texture and Quad
glTexCoord2f(1.0, 0.0); glVertex3f(-1.0, -1.0, 1.0) # Bottom Right Of The Texture and Quad
# Right face
glTexCoord2f(1.0, 0.0); glVertex3f( 1.0, -1.0, -1.0) # Bottom Right Of The Texture and Quad
glTexCoord2f(1.0, 1.0); glVertex3f( 1.0, 1.0, -1.0) # Top Right Of The Texture and Quad
glTexCoord2f(0.0, 1.0); glVertex3f( 1.0, 1.0, 1.0) # Top Left Of The Texture and Quad
glTexCoord2f(0.0, 0.0); glVertex3f( 1.0, -1.0, 1.0) # Bottom Left Of The Texture and Quad
# Left Face
glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, -1.0, -1.0) # Bottom Left Of The Texture and Quad
glTexCoord2f(1.0, 0.0); glVertex3f(-1.0, -1.0, 1.0) # Bottom Right Of The Texture and Quad
glTexCoord2f(1.0, 1.0); glVertex3f(-1.0, 1.0, 1.0) # Top Right Of The Texture and Quad
glTexCoord2f(0.0, 1.0); glVertex3f(-1.0, 1.0, -1.0) # Top Left Of The Texture and Quad
glEnd(); # Done Drawing The Cube
So now, if we look at just the first section of the code, it represents the blue quadrilateral of the front face.
And the texture coordinates determine the orientation of the image on each face.
In sum, we can see the result just like this:
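For reference, the texture itself has to be loaded before the quads are drawn. The following is only a sketch of a BMP loader using PyOpenGL and Pillow; the helper name load_bmp_texture and the file name crate.bmp are illustrative, not the exact code from this post:
from OpenGL.GL import *
from PIL import Image

def load_bmp_texture(path):
    img = Image.open(path).convert('RGB')
    img = img.transpose(Image.FLIP_TOP_BOTTOM)  # OpenGL's texture origin is bottom-left
    data = img.tobytes()

    tex_id = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex_id)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img.width, img.height,
                 0, GL_RGB, GL_UNSIGNED_BYTE, data)
    return tex_id

# Usage, inside an active OpenGL context and before glBegin(GL_QUADS):
# glEnable(GL_TEXTURE_2D)
# glBindTexture(GL_TEXTURE_2D, load_bmp_texture('crate.bmp'))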
Tuesday, September 13, 2016
[Haar Classifier] Train your own OpenCV Haar classifier
I'm just keeping a record for myself, because there are a lot of documents teaching how to train your own Haar classifier and almost all of them don't seem to work well. The following two resources are clear and easy to understand.
The image data source (cars) I use:
http://cogcomp.cs.illinois.edu/Data/Car/
1. Train your own OpenCV Haar classifier
https://github.com/mrnugget/opencv-haar-classifier-training
find ./positive_images -iname "*.pgm" > positives.txt
find ./negative_images -iname "*.pgm" > negatives.txt
perl bin/createsamples.pl positives.txt negatives.txt samples 550 \
  "opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 \
  -maxyangle 1.1 -maxzangle 0.5 -maxidev 40 -w 48 -h 24"
python ./tools/mergevec.py -v samples/ -o samples.vec
opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt \
  -numStages 10 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 1000 \
  -numNeg 600 -w 48 -h 24 -mode ALL -precalcValBufSize 1024 \
  -precalcIdxBufSize 1024
2. OpenCV Tutorial: Training your own detector | packtpub.com
https://www.youtube.com/watch?v=WEzm7L5zoZE
find pos/ -name '*.pgm' -exec echo \{\} 1 0 0 100 40 \; > cars.info
find neg/ -name '*.pgm' > bg.txt
opencv_createsamples -info cars.info -num 550 -w 48 -h 24 -vec cars.vec
To preview the samples stored in the vec file:
opencv_createsamples -w 48 -h 24 -vec cars.vec
opencv_traincascade -data data -vec cars.vec -bg bg.txt \
-numPos 500 -numNeg 500 -numStages 10 -w 48 -h 24 -featureType LBP
P.S.: Which one is best? I don't know...
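Either way, once training finishes, the resulting cascade.xml (written to the directory given by -data) can be tried out from Python. A sketch, with cars_test.png as an illustrative test image:
import cv2

# opencv_traincascade writes its result to <data-dir>/cascade.xml
cascade = cv2.CascadeClassifier('classifier/cascade.xml')
img = cv2.imread('cars_test.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Tune scaleFactor / minNeighbors to trade off recall vs. false alarms.
cars = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in cars:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite('cars_detected.png', img)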
[Image] How to resize, convert & modify images from the Linux
Installation
$ sudo apt-get install imagemagick
Converting Between Formats
$ convert howtogeek.png howtogeek.jpg
You can also specify a compression level for JPEG images:
$ convert howtogeek.png -quality 95 howtogeek.jpg
Resizing Images
$ convert example.png -resize 200x100 example.png
- To force the image to a specific size, even if it distorts the aspect ratio:
$ convert example.png -resize 200x100! example.png
- To resize to a width of 200 pixels, keeping the aspect ratio:
$ convert example.png -resize 200 example.png
- To resize to a height of 100 pixels, keeping the aspect ratio:
$ convert example.png -resize x100 example.png
Rotating an Image
$ convert howtogeek.jpg -rotate 90 howtogeek-rotated.jpg
Applying Effects
ImageMagick can apply a variety of effects to an image.
- For example, the following command applies the “charcoal” effect to an image:
$ convert howtogeek.jpg -charcoal 2 howtogeek-charcoal.jpg
- The “implode” effect with a strength of 1:
$ convert howtogeek.jpg -implode 1 howtogeek-imploded.jpg
Batch Processing
$ for file in *.png; do convert "$file" -rotate 90 "rotated-$file"; done
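If you prefer Python for batch jobs, the same rotation can be done with Pillow (pip install Pillow); note that Pillow rotates counter-clockwise, so -90 matches ImageMagick's -rotate 90:
import glob
from PIL import Image

for path in glob.glob('*.png'):
    img = Image.open(path)
    # expand=True enlarges the canvas so the rotated image is not cropped
    img.rotate(-90, expand=True).save('rotated-' + path)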
Reference:
http://www.howtogeek.com/109369/how-to-quickly-resize-convert-modify-images-from-the-linux-terminal/
Thursday, September 8, 2016
[TensorFlow] My case to install TensorFlow with GPU enabled
My operating system is Ubuntu 14.04.5 LTS and my GPU card is a GeForce GTX 750 Ti.
1. Go to nvidia.com and download the driver (NVIDIA-Linux-x86_64-367.44.sh)
2. Install build tools and kernel headers so the NVIDIA installer can find the Linux header files:
$ sudo apt-get install build-essential linux-headers-$(uname -r)
3. To enable full screen text mode (nomodeset):
$ sudo gedit /etc/default/grub
>> Edit GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
Save it, then update GRUB and reboot:
$ sudo update-grub
$ sudo reboot
4. Log in to a text console with Ctrl + Alt + F1
5. Stop the X Server service
$ sudo service lightdm stop
6. Install the NVIDIA driver
$ sudo ./NVIDIA-Linux-x86_64-367.44.sh
7. Install CUDA (GPUs on Linux)
Download and install the CUDA Toolkit:
$ sudo dpkg -i cuda-repo-ubuntu1404-8-0-local_8.0.44-1_amd64.deb
$ sudo apt-get update
$ sudo apt-get install cuda
8. Download and install cuDNN
$ tar xvzf cudnn-8.0-linux-x64-v5.1.tgz
$ cd cuda
$ sudo cp include/cudnn.h /usr/local/cuda-8.0/include
$ sudo cp lib64/* /usr/local/cuda-8.0/lib64
$ sudo chmod a+r /usr/local/cuda-8.0/lib64/libcudnn*
9. You also need to set the LD_LIBRARY_PATH and CUDA_HOME environment variables. Consider adding the lines below to your ~/.bashrc. These assume your CUDA installation is in /usr/local/cuda-8.0:
$ vim ~/.bashrc
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64"
export CUDA_HOME=/usr/local/cuda-8.0
export PATH="$CUDA_HOME/bin:$PATH"
export PATH="$PATH:$HOME/bin"
10. To install TensorFlow for Ubuntu/Linux 64-bit, GPU enabled:
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.10.1-cp27-none-linux_x86_64.whl
To find out which device is used, you can enable log device placement like this:
$ python
>>> import tensorflow as tf
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
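A slightly fuller check, sketched with this TensorFlow 0.10-era API, is to run a small op and watch the placement log for gpu:0 entries:
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
b = tf.constant([[1.0, 0.0], [0.0, 1.0]], name='b')
c = tf.matmul(a, b)

# log_device_placement prints, per op, whether it ran on cpu:0 or gpu:0
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))  # expect [[1. 2.] [3. 4.]]
sess.close()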