Wednesday, February 27, 2013

[sFlow] Use sflowtool to parse sFlow datagram

To test and understand sFlow in more detail, I prepared the environment shown below. Switch 1 and Switch 2 are emulated with Open vSwitch. The previous sFlow article shows the sFlow configuration on Open vSwitch, so please check it out first.

            192.168.12.201           10.3.207.244           192.168.12.202
            +------------+        +-----------------+       +------------+
            |  PC1       |        | sFlow Collector |       |  PC2       |
            |            |        |                 |       |            |
            |            |        |                 |       |            |
            +-----+------+        +---^--------^----+       +-----+------+
                  |                   |        |                  |
                  |                   |        |                  |
                  |                   ^ sFlow  ^                  |
                  +---------+         | Data   |       +----------+
                            |         ^        ^       |
                            |         |        |       |
                            |         ^        ^       |
                            |         |        |       |
                   +--------+---------++      ++-------++---------+
                   |    Switch 1       |      |   Switch 2        |
                   | 10.3.207.142      +------+ 10.3.207.143      |
                   +-------------------+      +-------------------+

sflowtool is an open source tool for parsing sFlow datagrams. In my environment, when I ping PC2 from PC1, Switch 1 and Switch 2 send sFlow Counter and Flow Sample datagrams to the sFlow collector. Now I will use sflowtool to parse the information the collector receives.

> ./sflowtool
Counter Sample:
startDatagram =================================
datagramSourceIP 0.0.0.0
datagramSize 144
unixSecondsUTC 1359534209
datagramVersion 5
agentSubId 0
agent 10.3.207.142
packetSequenceNo 403
sysUpTime 947000
samplesInPacket 1
startSample ----------------------
sampleType_tag 0:2
sampleType COUNTERSSAMPLE
sampleSequenceNo 95
sourceId 0:4
counterBlock_tag 0:1
ifIndex 4
networkType 6
ifSpeed 1000000000
ifDirection 1
ifStatus 3
ifInOctets 126361
ifInUcastPkts 1072
ifInMulticastPkts 0
ifInBroadcastPkts 4294967295
ifInDiscards 0
ifInErrors 0
ifInUnknownProtos 4294967295
ifOutOctets 137350
ifOutUcastPkts 1135
ifOutMulticastPkts 4294967295
ifOutBroadcastPkts 4294967295
ifOutDiscards 0
ifOutErrors 0
ifPromiscuousMode 0
endSample   ----------------------
endDatagram   =================================
startDatagram =================================
datagramSourceIP 0.0.0.0
datagramSize 144
unixSecondsUTC 1359534210
datagramVersion 5
agentSubId 0
agent 10.3.207.244
packetSequenceNo 404
sysUpTime 948000
samplesInPacket 1
startSample ----------------------
sampleType_tag 0:2
sampleType COUNTERSSAMPLE
sampleSequenceNo 95
sourceId 0:5
counterBlock_tag 0:1
ifIndex 5
networkType 6
ifSpeed 100000000
ifDirection 2
ifStatus 1
ifInOctets 0
ifInUcastPkts 0
ifInMulticastPkts 0
ifInBroadcastPkts 4294967295
ifInDiscards 0
ifInErrors 0
ifInUnknownProtos 4294967295
ifOutOctets 0
ifOutUcastPkts 0
ifOutMulticastPkts 4294967295
ifOutBroadcastPkts 4294967295
ifOutDiscards 0
ifOutErrors 0
ifPromiscuousMode 0
endSample   ----------------------
endDatagram   =================================

Flow Sample:
startDatagram =================================
datagramSourceIP 10.3.207.244
datagramSize 216
unixSecondsUTC 1359597631
datagramVersion 5
agentSubId 0
agent 10.3.207.142
packetSequenceNo 941
sysUpTime 2174000
samplesInPacket 1
startSample ----------------------
sampleType_tag 0:1
sampleType FLOWSAMPLE
sampleSequenceNo 72
sourceId 0:4
meanSkipCount 64
samplePool 4010
dropEvents 0
inputPort 4
outputPort 3
flowBlock_tag 0:1001
extendedType SWITCH
in_vlan 0
in_priority 0
out_vlan 0
out_priority 0
flowBlock_tag 0:1
flowSampleType HEADER
headerProtocol 1
sampledPacketSize 102
strippedBytes 4
headerLen 98
headerBytes 00-AB-77-E3-4B-00-00-AB-71-13-2D-00-08-00-45-00-00-54-00-00-40-00-40-01-9F-C5-C0-A8-0C-CA-C0-A8-0C-C9-08-00-C4-0E-55-06-08-57-28-96-AD-FD-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
dstMAC 00ab77e34b00
srcMAC 00ab71132d00
IPSize 84
ip.tot_len 84
srcIP 192.168.12.202
dstIP 192.168.12.201
IPProtocol 1
IPTOS 0
IPTTL 64
ICMPType 8
ICMPCode 0
endSample   ----------------------
endDatagram   =================================

If you run it with the -l option, you will get the same data in a one-line (CSV-like) format:
> ./sflowtool -l
Counter Sample:
CNTR,10.3.207.142,3,6,1000000000,1,3,112616,955,1,4294967295,0,0,4294967295,123043,1012,4294967295,4294967295,0,0,0

Flow Sample:
FLOW,10.3.207.142,4,3,00ab71132d00,00ab77e34b00,0x0800,0,0,192.168.12.202,192.168.12.201,1,0x00,64,0,0,0x00,102,84,64
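
This one-line-per-sample output is convenient for quick scripting. Below is a minimal C sketch of my own (not part of sflowtool) that reads the -l output from stdin and prints the agent plus source/destination IP of each FLOW record. The field positions are taken from the sample line above, so verify them against your sflowtool version before relying on them.

/* flowprint.c - print agent, srcIP and dstIP from "sflowtool -l" FLOW lines */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[1024];

    while (fgets(line, sizeof(line), stdin) != NULL) {
        if (strncmp(line, "FLOW,", 5) != 0)
            continue;                       /* skip CNTR and other record types */

        char *fields[32];
        int n = 0;
        char *tok = strtok(line, ",\n");
        while (tok != NULL && n < 32) {
            fields[n++] = tok;
            tok = strtok(NULL, ",\n");
        }

        /* field 1 = agent, field 9 = srcIP, field 10 = dstIP (0-based, per the sample above) */
        if (n > 10)
            printf("agent=%s src=%s dst=%s\n", fields[1], fields[9], fields[10]);
    }
    return 0;
}

Compile it with "gcc -o flowprint flowprint.c" and run "./sflowtool -l | ./flowprint".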


Monday, February 25, 2013

[C++ STL] Containers

We all know that using data structures well is very important in programming, because it affects performance, data accuracy, and maintainability no matter what kind of program you write. In C we have to implement our data structures ourselves, or search for a related library and pull it in. But if your program can be built with g++ in your environment, consider simply using the STL containers. They are useful and powerful. For more information, here is the official reference site: http://www.cplusplus.com/reference/stl/

Container class templates

Sequence containers:
  • vector, deque, list

Container adaptors:
  • stack, queue, priority_queue

Associative containers:
  • set, multiset, map, multimap


Friday, January 25, 2013

[Library] The useful libraries for C

Needless to say, C is powerful. But if you have used Java or Python, you will recognize that C lacks the rich set of libraries (APIs) and frameworks that let a programmer get the job done quickly. Sometimes you have to look for a C library that meets your requirement so that you can avoid reinventing the wheel again and again. This document records useful libraries for C, and I will keep adding new ones to it. If you are a great C programmer and know a good C library, please let me know. Thanks in advance.

OGDF - Open Graph Drawing Framework
http://www.ogdf.net/ogdf.php

Curl Lib
the multiprotocol file transfer library
http://curl.haxx.se/libcurl/

mongoose
The lightweight web server in C
http://code.google.com/p/mongoose/

SimCList – A C library for Lists
http://mij.oltrelinux.com/devel/simclist/

JSON Library
http://www.digip.org/jansson/

libevent – The libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also supports callbacks triggered by signals or regular timeouts. (A minimal timer example follows below.)
http://libevent.org/
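
To give a feel for the API, here is a minimal sketch of my own (using the libevent 2.x API) that fires a one-shot timer callback after two seconds and then leaves the event loop; compile it with -levent.

#include <stdio.h>
#include <sys/time.h>
#include <event2/event.h>

static void on_timeout(evutil_socket_t fd, short what, void *arg)
{
    printf("timeout fired\n");
    event_base_loopbreak((struct event_base *) arg);   /* stop the event loop */
}

int main(void)
{
    struct event_base *base = event_base_new();
    struct timeval two_sec = { 2, 0 };

    /* One-shot timer; the base is passed as the callback argument
       so the callback can break out of the loop. */
    struct event *timer = evtimer_new(base, on_timeout, base);
    evtimer_add(timer, &two_sec);

    event_base_dispatch(base);   /* run the loop until loopbreak */

    event_free(timer);
    event_base_free(base);
    return 0;
}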

libev - a high performance full-featured event loop written in C
It is similar to libevent, but claims to be more efficient.
http://doc.dvgu.ru/devel/ev.html

The Better String Library
http://bstring.sourceforge.net/

Unit Test Frameworks
https://github.com/imb/fctx

Exception Handling for C
http://code.google.com/p/exceptions4c/

SSL Library
https://polarssl.org/ssl-library

libssh2 is a client-side C library implementing the SSH2 protocol
http://www.libssh2.org/

MD5
http://256.com/sources/md5/

CIDR Library ( Need to verify )
http://www.over-yonder.net/~fullermd/projects/libcidr 

NETCONF library in C
https://code.google.com/p/libnetconf/

SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine.
http://www.sqlite.org/

Monday, January 21, 2013

[SDN] The news about Juniper to build its own software-defined networking stack

Software-Defined Networking has been becoming a hotter and hotter topic since last year. Probably networking guys will be talking about Software-Defined Networking (SDN) everywhere this year. VMware's acquisition of Nicira became a catalyst for the related network vendors to think about the strategy and tactics they should adopt to deal with this wave of SDN. It reminds me of this song (Blondie – The Tide Is High):
  The tide is high but I'm holding on ...
   I'm gonna be your number one ...

   ...

In sum, SDN is important and key to surviving this period in the networking world. Here is an example of Juniper taking action to provide its own SDN approach. I excerpt some paragraphs as follows:


Quoted from the news:
http://www.theregister.co.uk/2013/01/16/juniper_sdn_strategy/

Juniper's approach to SDN, explained Muglia, will break the monolithic software stack inside of switches and routers – for campuses, branches, and service providers and not just for data centers – into four different planes: management, services, control, and forwarding.

This software will run on a virtualization layer called JunosV App Engine – which sounds a lot like a KVM hypervisor container for an x86 server, but Juniper did not say. This virtualization software will ship later in the first quarter of this year.

Central to that SDN stack is Contrail, a startup that was just getting ready to uncloak last month with its SDN wares when Juniper swept in with $176m in cash and snapped it up. Contrail was founded by a team of networking and software platform experts from Google, Juniper, Cisco, and Aruba Networks, and significantly had former Juniper CTO and chief architect Kireeti Kompella as its CTO. Now Kompella is back at Juniper, and is central to its SDN strategy.

Tuesday, January 8, 2013

[Signal] The examples of using alarm() and sigaction()

The following URL provides a good explanation of signals and signal handling.
https://www.sharcnet.ca/help/index.php/Signal_Handling_and_Checkpointing

Handling signals is an important job when we want to write great code.
There are several signal functions that we often use in programming. Below are some examples of how to use the alarm() and sigaction() functions.
P.S.: The URL above each example links to the original source of the code.


The following two examples show how to use alarm() and ualarm().
http://jyhshin.pixnet.net/blog/post/27749178-linux-%E8%A8%88%E6%99%82%E5%99%A8-alarm-signal-%281%29

/* Example of using alarm() */
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <time.h>

int main(int argc, char *argv[])
{
    sigset_t block;

    sigemptyset(&block);
    sigaddset(&block, SIGALRM);
    sigprocmask(SIG_BLOCK, &block, NULL);

    while (1) {
        alarm(2);                           /* deliver SIGALRM after 2 seconds */
        printf("%ld\n", (long) time(NULL));
        sigwaitinfo(&block, NULL);          /* wait for the blocked SIGALRM */
    }
    return 0;
}


/* Example of using ualarm() */
#define _XOPEN_SOURCE 500
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <time.h>

int main(int argc, char *argv[])
{
    sigset_t block;

    sigemptyset(&block);
    sigaddset(&block, SIGALRM);
    sigprocmask(SIG_BLOCK, &block, NULL);

    ualarm(500000, 500000);                 /* first SIGALRM after 0.5s, then every 0.5s */
    while (1) {
        printf("%ld\n", (long) time(NULL));
        sigwaitinfo(&block, NULL);
    }
    return 0;
}


Actually, sigaction() is widely used for handling signals such as SIGINT (interrupt), SIGCHLD, and so on. Here are two examples showing the usage of sigaction() with SIGINT.
http://www.linuxprogrammingblog.com/code-examples/sigaction

/* Example of using sigaction() to set up a signal handler with 3 arguments,
 * including siginfo_t. */
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <string.h>

static void hdl(int sig, siginfo_t *siginfo, void *context)
{
    printf("Sending PID: %ld, UID: %ld\n",
           (long) siginfo->si_pid, (long) siginfo->si_uid);
}

int main(int argc, char *argv[])
{
    struct sigaction act;

    memset(&act, '\0', sizeof(act));

    /* Use the sa_sigaction field because the handler has two additional parameters */
    act.sa_sigaction = &hdl;

    /* The SA_SIGINFO flag tells sigaction() to use the sa_sigaction field, not sa_handler. */
    act.sa_flags = SA_SIGINFO;

    if (sigaction(SIGINT, &act, NULL) < 0) {
        perror("sigaction error");
        return 1;
    }

    while (1)
        sleep(10);

    return 0;
}

http://fanqiang.chinaunix.net/a4/b2/20010508/113528_b.html

/* Example of using sigaction() to set up a signal handler by setting sa_handler */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <string.h>
#include <errno.h>

#define PROMPT "Do you want to terminate the process?"

char *prompt = PROMPT;

void ctrl_c_oper(int signo)
{
    write(STDERR_FILENO, prompt, strlen(prompt));
}

int main()
{
    struct sigaction act;

    act.sa_handler = ctrl_c_oper;
    sigemptyset(&act.sa_mask);
    act.sa_flags = 0;

    if (sigaction(SIGINT, &act, NULL) < 0) {
        fprintf(stderr, "Install Signal Action Error:%s\n\a", strerror(errno));
        exit(1);
    }

    while (1)
        ;   /* spin and wait for Ctrl-C */
}

Friday, December 28, 2012

[ZeroMQ] The example of ZeroMQ via C#

Recently I found these two articles, which give a very good introduction to ZeroMQ and a useful use case for the synchronized PUB-SUB pattern. Although they use C# instead of C as the programming language, I still think we can learn some important concepts from them. These articles also provide very good images to illustrate the communication patterns.

http://www.codeproject.com/Articles/488207/ZeroMQ-via-Csharp-Introduction
  • There are several communication patterns described

http://www.codeproject.com/Articles/514959/ZeroMQ-via-Csharp-Multi-part-messages-JSON-and-Syn
  • Multi-part messages
  • Synchronized Pub-Sub pattern using PUB-SUB + REQ-REP





Thursday, December 13, 2012

[SWIG] How to add C function in Python by using SWIG

Here is a simple example of using SWIG to automatically wrap C functions, generate the wrapper code, and build a shared library for Python.

/*** File : example.c ***/
#include <time.h>

double My_variable = 3.0;

int fact(int n) {
    if (n <= 1) return 1;
    else return n*fact(n-1);
}

int my_mod(int x, int y) {
    return (x%y);
}

char *get_time()
{
    time_t ltime;
    time(&ltime);
    return ctime(&ltime);
}
/*************************/


/*** example.i ***/
%module example
%{
/* Put header files here or function declarations like below */
extern double My_variable;
extern int fact(int n);
extern int my_mod(int x, int y);
extern char *get_time();
%}
extern double My_variable;
extern int fact(int n);
extern int my_mod(int x, int y);
extern char *get_time();
/******************/
Another example:
/* File : example.c */

double  My_variable  = 3.0;

/* Compute factorial of n */
int fact(int n) {
  if (n <= 1)
    return 1;
  else
    return n*fact(n-1);
}

/* Compute n mod m */
int my_mod(int n, int m) {
  return(n % m);
}

/* File : example.i */
%module example
%{
/* Put headers and other declarations here */
extern double My_variable;
extern int    fact(int);
extern int    my_mod(int n, int m);
%}

extern double My_variable;
extern int    fact(int);
extern int    my_mod(int n, int m);
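
To build the Python module from these two files, the typical steps look like the following (the Python include path is just an example from my machine, so adjust it to your installation):

> swig -python example.i
> gcc -c -fPIC example.c example_wrap.c -I/usr/include/python2.7
> gcc -shared example.o example_wrap.o -o _example.so

Then you can start python, "import example", and call fact() and my_mod() or read My_variable directly.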

What if you have pointer arguments in C? How do you deal with them in Python?

For example:

void add(double a, double b, double *result) {
    *result = a + b;
}
Please refer to this document; it explains how SWIG handles pointer arguments with typemaps:
http://www.swig.org/Doc1.3/Arguments.html





Wednesday, December 12, 2012

[Trema] The L2 isolation mechanism in sliceable switch

Anyone who has read the sliceable switch documents below has probably gotten a headache from the sheer amount of content and description.
https://github.com/trema/apps/wiki/sliceable_switch_tutorial
https://github.com/trema/apps/wiki/sliceable_switch_features

Now I will give a flow-control chart of the slice function, summarized from the source code (slice.c). It should give you a clear picture of the L2 isolation mechanism in sliceable switch, especially the slice function. Check out the following chart:

So, broadly speaking, the slice function checks the MAC binding first, then the port-MAC binding, and finally the port binding. Some configuration options affect the result; for instance, enabling "restrict hosts on port" forces the slice function to check the port-MAC binding, otherwise it skips that check.
Based on this flow chart, you can compare against the test cases in https://github.com/trema/apps/wiki/sliceable_switch_features, and the sketch below summarizes the check order in code.
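
Here is a rough sketch in C of that check order as I read it from the chart. The function and lookup names are hypothetical stand-ins, not the actual slice.c API, and the exact fall-through behaviour may differ, so treat it only as a reading aid.

/* Hypothetical sketch of the L2 isolation check order described above.
 * The lookup helpers are placeholders, not the real sliceable_switch code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder lookups -- the real slice.c queries its binding database. */
static bool mac_binding_exists(uint16_t slice, uint64_t mac) { (void)slice; (void)mac; return false; }
static bool port_mac_binding_exists(uint16_t slice, uint32_t port, uint64_t mac) { (void)slice; (void)port; (void)mac; return false; }
static bool port_binding_exists(uint16_t slice, uint32_t port) { (void)slice; (void)port; return true; }

/* Returns true when the host (mac seen on port) is allowed in the slice. */
static bool host_in_slice(uint16_t slice, uint32_t port, uint64_t mac,
                          bool restrict_hosts_on_port)
{
    if (mac_binding_exists(slice, mac))                   /* 1. MAC binding */
        return true;
    if (restrict_hosts_on_port)                           /* 2. port-MAC binding, only when  */
        return port_mac_binding_exists(slice, port, mac); /*    "restrict hosts on port" is on */
    return port_binding_exists(slice, port);              /* 3. port binding */
}

int main(void)
{
    printf("allowed: %s\n", host_in_slice(1, 3, 0x00ab77e34b00ULL, false) ? "yes" : "no");
    return 0;
}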







Monday, December 10, 2012

[MongoDB] Install MongoDB and try a simple example of mongodb_c_driver

MongoDB Installation
http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian-or-ubuntu-linux/
or
https://www.digitalocean.com/community/articles/how-to-install-mongodb-on-ubuntu-12-04
For instance in my environment:
  > sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
  > echo "deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" | sudo tee -a /etc/apt/sources.list.d/10gen.list
  > sudo apt-get -y update
  > sudo apt-get -y install mongodb-10gen

Try command on MongoDB
>mongo
MongoDB shell version: 2.2.2
connecting to: test

> show dbs
db    (empty)
local    (empty)
test    0.203125GB

 > use test
switched to db test

> show collections
foo
system.indexes

### Insert new data ###
> db.foo.save({a:1})
> doc = {
... "name" : "kristina",
... "contact info" : {
... "twitter" : "@kchodorow",
... "email" : "kristina@10gen.com"
... },
... "friends" : 400232,
... "pic" : BinData(...)
... "member since" : new Date()}
> db.foo.insert(doc)
> db.foo.save({a:2})
> db.foo.save({a:3})

### Find the data ###
> db.foo.find()
{ "_id" : ObjectId("50c592540770027c182d31b9"), "a" : 1 }
{ "_id" : ObjectId("50c59e030770027c182d31ba"), "name" : "kristina", "contact info" : { "twitter" : "@kchodorow", "email" : "kristina@10gen.com" }, "friends" : 400232, "member since" : ISODate("2012-12-10T08:30:50.389Z") }
{ "_id" : ObjectId("50c59e1f0770027c182d31bb"), "a" : 2 }
{ "_id" : ObjectId("50c59e220770027c182d31bc"), "a" : 3 }

> db.foo.findOne()
{ "_id" : ObjectId("50c592540770027c182d31b9"), "a" : 1 }
> db.foo.find({"a":1})
{ "_id" : ObjectId("50c592540770027c182d31b9"), "a" : 1 }
> db.foo.find({"name":"kristina"})
{ "_id" : ObjectId("50c59e030770027c182d31ba"), "name" : "kristina", "contact info" : { "twitter" : "@kchodorow", "email" : "kristina@10gen.com" }, "friends" : 400232, "member since" : ISODate("2012-12-10T08:30:50.389Z") }

Try a simple example of mongodb c driver
For more details on the API, please refer to this: http://api.mongodb.org/c/current/tutorial.htm
 

> gcc --std=c99 -I/usr/local/include -L/usr/local/lib -o mongodb_test mongodb_test.c -lmongoc
> ./mongodb_test
WARNING: mongo_connect() is deprecated, please use mongo_client()
MONGO_OK:connection succeeded
0


mongodb_test.c (source code)

#include <stdio.h>
#include "mongo.h"

int main()
{
    mongo conn[1];
    int status;

    status = mongo_connect( conn, "127.0.0.1", 27017 );

    if( status != MONGO_OK ) {
        switch ( conn->err ) {
            case MONGO_CONN_SUCCESS:    printf( "connection succeeded\n" ); break;
            //case MONGO_CONN_BAD_ARG: printf( "bad arguments\n" ); return 1;
            case MONGO_CONN_NO_SOCKET:  printf( "no socket\n" ); return 1;
            case MONGO_CONN_FAIL:       printf( "connection failed\n" ); return 1;
            case MONGO_CONN_NOT_MASTER: printf( "not master\n" ); return 1;
        }
    } else {
        printf( "MONGO_OK:connection succeeded\n%d\n", status );
    }

    mongo_destroy( conn );
    return 0;
}


[Memcached] Install memcached and try libmemcached C API


Install from package
> sudo apt-get install memcached

or Install from source code
We need to have :
  1. libevent downloaded from : http://libevent.org/ 
    • > ./configure --prefix=/usr
    • > make
    • > sudo make install
  2. memcached downloaded from : http://memcached.org/
    • > ./configure --prefix=/usr/local
    • > make
    • > sudo make install
Check the status of memcached
 > sudo service memcached status


Install libmemcached C API from source code

  1. libmemcached C API downloaded from http://libmemcached.org/libMemcached.html
    • > ./configure --prefix=/usr
    • > make 
    • > sudo make install

Give a simple try for libmemcached C API
> gcc -o mem_test2 mem_test2.c -lmemcached -lpthread
> ./mem_test2
Save key:key1 data:"This is c first value" success.
Fetch key:key1 data:This is c first value
Delete Key key1 success.

mem_test2.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libmemcached/memcached.h>

int main(int argc, char *argv[])
{
    memcached_st *memc;
    memcached_return rc;
    memcached_server_st *servers;
    char value[8191];

    //connect multi server
    memc = memcached_create(NULL);
    servers = memcached_server_list_append(NULL, "localhost", 11211, &rc);
    servers = memcached_server_list_append(servers, "localhost", 11212, &rc);
    rc = memcached_server_push(memc, servers);
    memcached_server_free(servers);

    //Save multi data
    size_t i;
    char *keys[] = {"key1", "key2", "key3"};
    size_t key_length[] = {4, 4, 4};
    char *values[] = {"This is c first value", "This is c second value", "This is c third value"};
    size_t val_length[] = {21, 22, 21};

    for (i = 0; i < 3; i++) {
        rc = memcached_set(memc, keys[i], key_length[i], values[i], val_length[i],
                           (time_t) 180, (uint32_t) 0);
        if (rc == MEMCACHED_SUCCESS) {
            printf("Save key:%s data:\"%s\" success.\n", keys[i], values[i]);
        }
    }

    //Fetch multi data
    char return_key[MEMCACHED_MAX_KEY];
    size_t return_key_length;
    char *return_value;
    size_t return_value_length;
    uint32_t flags;

    rc = memcached_mget(memc, keys, key_length, 3);
    while ((return_value = memcached_fetch(memc, return_key, &return_key_length,
                                           &return_value_length, &flags, &rc))) {
        if (rc == MEMCACHED_SUCCESS) {
            printf("Fetch key:%s data:%s\n", return_key, return_value);
        }
    }

    //Delete multi data
    for (i = 0; i < 3; i++) {
        rc = memcached_set(memc, keys[i], key_length[i], values[i], val_length[i],
                           (time_t) 180, (uint32_t) 0);
        rc = memcached_delete(memc, keys[i], key_length[i], (time_t) 0);
        if (rc == MEMCACHED_SUCCESS) {
            printf("Delete Key %s success.\n", keys[i]);
        }
    }

    //free
    memcached_free(memc);
    return 0;
}



Wednesday, December 5, 2012

[Presentation] OpenStack 2012 fall summit observation - Quantum/SDN

The Taiwan OpenStack User Group (TWOSUG) 3rd Meet Up was held on Dec 5, 2012. I gave a presentation in one of the sessions, "OpenStack 2012 fall summit observation - Quantum/SDN". The talk focuses on Quantum and SDN, and the slides are shared on SlideShare as follows:
http://www.slideshare.net/teyenliu/open-stack-2012-fall-summit-observation-with-quantumsdn-15493510

Friday, November 23, 2012

[iptables] some common examples of iptables rule


  • Read all tables without DNS lookup
    • > iptables -L -n
  • Show the rules with their line numbers:
    •  > iptables -L -nv --line-numbers
  • Read NAT table in list without DNS lookup
    • > iptables -t nat -L -n
  • Do NAT ( SNAT )
    • > echo "1" > /proc/sys/net/ipv4/ip_forward
    • > iptables -t nat -A POSTROUTING -s ${INSIDE_NETWORK}/${INSIDE_NETMASK} -o ${OUTSIDE_DEVICE} -j MASQUERADE
    • or > iptables -t nat -A POSTROUTING -s ${INSIDE_NETWORK}/${INSIDE_NETMASK} -o ${OUTSIDE_DEVICE} -j SNAT --to ${TARGET_IP}
  • Do DNAT 
    • > iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.100.10:80
  • Drop the packet which is from 192.168.2.20 to 192.168.1.100 with TCP port 80
    • > iptables -A POSTROUTING -t nat -s 192.168.2.20 -d 192.168.1.100 -p TCP --dport 80 -j DROP
  • Accept the packet which is from 192.168.100.0/24 and interface eth1
    • > iptables -A INPUT -i eth1 -s 192.168.100.0/24 -j ACCEPT
  • Insert a logging rule just before the final rule that drops packets; something like this will do the trick:
    • > iptables -I INPUT (next-to-the-last rule number) -j LOG --log-prefix "blocked packets : "


iptables [-AI chain] [-io interface] [-p protocol] \
> [-s source IP/network] [-d destination IP/network] -j [ACCEPT|DROP|REJECT|LOG]
Options and parameters:

-S : list the rules
-t : specify the table (nat / filter); if -t is not given, the default is filter
-AI chain : "insert" or "append" a rule into the given chain
    -A : append a new rule after the existing ones. For example, if there are already
         four rules, -A adds this one as the fifth rule.
    -I : insert a rule. If no position is specified, it is inserted as the first rule.
         For example, if there are four rules, -I makes this rule number one and the
         original four become rules 2 to 5.
    chain : INPUT, OUTPUT, FORWARD, etc. The chain name is also related to -i/-o; see below.

-io interface : the network interface the packet enters or leaves through
    -i : the interface the packet arrives on, e.g. eth0 or lo; used with the INPUT chain
    -o : the interface the packet goes out of; used with the OUTPUT chain

-p protocol : the packet type this rule applies to
   The main protocols are: tcp, udp, icmp, and all

-s source IP/network : the source of the packets matched by this rule; it can be a single
   IP or a network, for example:
   IP: 192.168.0.100
   Network: 192.168.0.0/24 or 192.168.0.0/255.255.255.0
   To negate the match ("not allowed"), add a ! , for example:
   -s ! 192.168.100.0/24 means packets NOT coming from 192.168.100.0/24

-d destination IP/network : the same as -s, except it refers to the destination IP or network.

-j : followed by the action, mainly ACCEPT, DROP, REJECT, and LOG

# List all rules with packet and byte counters, without DNS lookup
iptables -L -n -v -x

# Create a custom chain for traffic accounting and hook it into the FORWARD
# chain; the -D and -X lines unhook and delete it again
iptables -N TRAFFIC_ACCT
iptables -I FORWARD -j TRAFFIC_ACCT
iptables -D FORWARD -j TRAFFIC_ACCT
iptables -X TRAFFIC_ACCT

# Per-protocol accounting rules inside the custom chain (read the counters back
# with the -L -v -x command above)
iptables -A TRAFFIC_ACCT -p tcp
iptables -A TRAFFIC_ACCT -p udp
iptables -A TRAFFIC_ACCT -p icmp

Tuesday, November 20, 2012

[Mongrel2] How to write a handler for mongrel2 web server

Mongrel2 is an application, language, and network-architecture agnostic web server that focuses on web applications. Its most powerful feature is using handlers to deal with ZeroMQ messages.
There are already some articles about handlers and how to get started, for instance:
http://www.ioncannon.net/programming/1384/example-mongrel2-handler-in-ruby/
http://brubeck.io/demos.html

I spent almost two days understanding how handlers work and writing a simple handler that responds to requests, and finally I got it working. So in this article I will explain the concept of the Mongrel2 web server and give a simple example.


  • From this diagram, you will quickly see how the Mongrel2 web server works with your handler and which ZeroMQ socket types are used for the communication.
  • Second, here is the handler example in detail:
The conf file ==> mongrel2.conf
brubeck_handler = Handler(
    send_spec='tcp://127.0.0.1:9999',
    send_ident='34f9ceee-cd52-4b7f-b197-88bf2f0ec378',
    recv_spec='tcp://127.0.0.1:9998',
    recv_ident='')

media_dir = Dir(
    base='media/',
    index_file='index.html',
    default_ctype='text/plain')

brubeck_host = Host(
    name="localhost",
    routes={
        '/media/': media_dir,
        '/handlers': brubeck_handler})

brubeck_serv = Server(
    uuid="f400bf85-4538-4f7a-8908-67e313d515c2",
    access_log="/log/mongrel2.access.log",
    error_log="/log/mongrel2.error.log",
    chroot="./",
    default_host="localhost",
    name="brubeck test",
    pid_file="/run/mongrel2.pid",
    port=6767,
    hosts = [brubeck_host]
)

settings = { "zeromq.threads": 1 }

servers = [brubeck_serv]


Load the conf file to mongrel2
> m2sh load -config mongrel2.conf -db the.db

Run the mongrel2 web server
> m2sh start -db the.db -host localhost

Run the simple handler source code ( in Python ) ==> simple_handler.py
> python simple_handler.py

#!/usr/bin/env python
import zmq

ctx = zmq.Context()
pull = ctx.socket(zmq.PULL)
pull.connect("tcp://127.0.0.1:9999")
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:9998")

while True:
    msg = pull.recv()
    print msg
    sender_uuid, client_id, request_path, request_message = msg.split(" ", 3)
    ret_content = "Hi...This is Danny..."
    ret_msg = '%s %d:%s, HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s' % \
        (sender_uuid, len(client_id), client_id, len(ret_content), ret_content)
    pub.send(ret_msg, 0)




Use curl to test it, and you will get the following result:
> curl http://localhost:6767/handlers
Hi...This is Danny...



Monday, November 12, 2012

[ZeroMQ] The study note of the ØMQ guide -- Part II

Shared Queue ( DEALER and ROUTER sockets)
The only constraint is that services must be stateless, all state being in the request or in some shared storage such as a database.
It then uses zmq_poll(3) to monitor these two sockets for activity and, when it has some, it shuttles messages between its two sockets. It doesn't actually manage any queues explicitly — ØMQ does that automatically on each socket.

Built-in Proxy Function
Please see the example of msgqueue.c, which replaces rrbroker.c with the built-in proxy function.
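
From memory, the broker boils down to something like the sketch below. The socket types and port numbers follow the guide's request-reply broker, so treat it as an approximation of msgqueue.c rather than the official code.

//  Shared queue broker using the built-in zmq_proxy() (ØMQ 3.2+)
#include <zmq.h>

int main (void)
{
    void *context = zmq_ctx_new ();

    //  Socket facing clients (their REQ sockets connect here)
    void *frontend = zmq_socket (context, ZMQ_ROUTER);
    zmq_bind (frontend, "tcp://*:5559");

    //  Socket facing services (their REP sockets connect here)
    void *backend = zmq_socket (context, ZMQ_DEALER);
    zmq_bind (backend, "tcp://*:5560");

    //  Shuttle messages between the two sockets in both directions
    //  until the context is terminated
    zmq_proxy (frontend, backend, NULL);

    //  We never get here, but clean up anyway
    zmq_close (frontend);
    zmq_close (backend);
    zmq_ctx_destroy (context);
    return 0;
}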

Transport Bridging
[Figure: fig18.png from the ØMQ guide]

Handling Errors
In most of the C examples we've seen so far there's been no error handling. Real code should do error handling on every single ØMQ call.
void *context = zmq_ctx_new ();
assert (context);
void *socket = zmq_socket (context, ZMQ_REP);
assert (socket);
int rc = zmq_bind (socket, "tcp://*:5555");
if (rc != 0) {
    printf ("E: bind failed: %s\n", strerror (errno));
    return -1;
}

We'll use a publish-subscribe model to send kill messages to the workers:
  • The sink creates a PUB socket on a new endpoint.
  • Workers bind their input socket to this endpoint.
  • When the sink detects the end of the batch it sends a kill to its PUB socket.
  • When a worker detects this kill message, it exits.
It doesn't take much new code in the sink:
void *control = zmq_socket (context, ZMQ_PUB);
zmq_bind (control, "tcp://*:5559");

//  Send kill signal to workers
zmq_msg_t message;
zmq_msg_init_data (&message, "KILL", 5, NULL, NULL);
zmq_msg_send (&message, control, 0);
zmq_msg_close (&message);
Handling Interrupt Signals
s_catch_signals ();
client = zmq_socket (...);
while (!s_interrupted) {
    char *message = s_recv (client);
    if (!message)
        break;          //  Ctrl-C used
}
zmq_close (client);

Detecting Memory Leaks
valgrind: sudo apt-get install valgrind

Multithreading with ØMQ
If you've spent years learning tricks to make your MT code work at all, let alone rapidly, with locks and semaphores and critical sections, you will be disgusted when you realize it was all for nothing. If there's one lesson we've learned from 30+ years of concurrent programming it is: just don't share state.
You should follow some rules to write happy multithreaded code with ØMQ:
  • You must not access the same data from multiple threads. Using classic MT techniques like mutexes are an anti-pattern in ØMQ applications. The only exception to this is a ØMQ context object, which is threadsafe.
  • You must create a ØMQ context for your process, and pass that to all threads that you want to connect via inproc sockets.
  • You may treat threads as separate tasks, with their own context, but these threads cannot communicate over inproc. However they will be easier to break into standalone processes afterwards.
  • You must not share ØMQ sockets between threads. ØMQ sockets are not threadsafe. Technically it's possible to do this, but it demands semaphores, locks, or mutexes. This will make your application slow and fragile. The only place where it's remotely sane to share sockets between threads are in language bindings that need to do magic like garbage collection on sockets.

Signaling between Threads (PAIR sockets)
The only mechanism that you should use is ØMQ messages.
This is a classic pattern for multithreading with ØMQ ( The Relay Race ) :
  1. Two threads communicate over inproc, using a shared context.
  2. The parent thread creates one socket, binds it to an inproc:// endpoint, and then starts the child thread, passing the context to it.
  3. The child thread creates the second socket, connects it to that inproc:// endpoint, and then signals to the parent thread that it's ready.
[Figure: fig21.png from the ØMQ guide]
mtrelay.c : multithreaded relay in C
For these reasons, PAIR makes the best choice for coordination between pairs of threads.

Node Coordination
syncpub.c: Synchronized publisher in C
  • The publisher knows in advance how many subscribers it expects. This is just a magic number it gets from somewhere.
  • The publisher starts up and waits for all subscribers to connect. This is the node coordination part. Each subscriber subscribes and then tells the publisher it's ready via another socket.
  • When the publisher has all subscribers connected, it starts to publish data.
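
Written from memory, the publisher side of that coordination looks roughly like the sketch below (SUBSCRIBERS_EXPECTED and the port numbers are only example values):

//  Sketch of a synchronized publisher: wait for all subscribers before publishing
#include <zmq.h>
#include <stdio.h>

#define SUBSCRIBERS_EXPECTED 10

int main (void)
{
    void *context = zmq_ctx_new ();

    //  Socket to publish data on
    void *publisher = zmq_socket (context, ZMQ_PUB);
    zmq_bind (publisher, "tcp://*:5561");

    //  Socket to receive the subscribers' "I am ready" signals
    void *syncservice = zmq_socket (context, ZMQ_REP);
    zmq_bind (syncservice, "tcp://*:5562");

    //  Node coordination: wait until every expected subscriber has checked in
    int subscribers = 0;
    while (subscribers < SUBSCRIBERS_EXPECTED) {
        char buf [1];
        zmq_recv (syncservice, buf, sizeof (buf), 0);   //  wait for a sync request
        zmq_send (syncservice, "", 0, 0);               //  reply so the subscriber continues
        subscribers++;
    }
    printf ("All %d subscribers connected, start publishing\n", subscribers);

    //  ... publish data on the PUB socket here ...

    zmq_close (publisher);
    zmq_close (syncservice);
    zmq_ctx_destroy (context);
    return 0;
}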

Zero Copy
ØMQ's message API lets you send and receive messages directly from and to application buffers without copying data. We call this "zero-copy", and it can improve performance in some applications.
There is no way to do zero-copy on receive: ØMQ delivers you a buffer that you can store as long as you wish but it will not write data directly into application buffers.

Pub-Sub Message Envelopes
Pub-Sub Envelope with Separate Key
[Figure: fig23.png from the ØMQ guide]
psenvpub.c: Pub-Sub envelope publisher in C
s_sendmore (publisher, "B");
s_send (publisher, "We would like to see this");
psenvsub.c: Pub-Sub envelope subscriber in C
//Read envelope with address
char *address = s_recv (subscriber);
//Read message contents
char *contents = s_recv (subscriber);
printf ("[%s] %s\n", address, contents);

High Water Marks
When you can send messages rapidly from process to process, you soon discover that memory is a precious resource, and one that can be trivially filled up.
The answer for messaging is to set limits on the size of buffers, and then when we reach those limits, take some sensible action. In most cases (not for a subway system, though), the answer is to throw away messages. In a few others, it's to wait.
In ØMQ/2.x the HWM was infinite by default. In ØMQ/3.x it's set to 1,000 by default, which is more sensible.
Some sockets (PUB, PUSH) only have transmit buffers. Some (SUB, PULL, REQ, REP) only have receive buffers. Some (DEALER, ROUTER, PAIR) have both transmit and receive buffers.
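
For example, in ØMQ/3.x the limits are set per socket and per direction with zmq_setsockopt() before bind or connect. A minimal sketch of my own (the value 10000 is arbitrary):

#include <zmq.h>

int main (void)
{
    void *context = zmq_ctx_new ();
    void *socket  = zmq_socket (context, ZMQ_DEALER);

    //  DEALER has both buffers, so cap each direction separately
    int hwm = 10000;
    zmq_setsockopt (socket, ZMQ_SNDHWM, &hwm, sizeof (hwm));   //  transmit buffer limit
    zmq_setsockopt (socket, ZMQ_RCVHWM, &hwm, sizeof (hwm));   //  receive buffer limit
    zmq_connect (socket, "tcp://localhost:5555");

    zmq_close (socket);
    zmq_ctx_destroy (context);
    return 0;
}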