Sunday, August 31, 2014

[Lagopus] Install Lagopus software switch on Ubuntu 12.04

I attended the NTT (Ryu/Lagopus) seminar at NCTU, Taiwan, in August and noticed that Lagopus (an SDN/OpenFlow software switch) is amazing.
Its L2 switch performance with 2 x 10GbE (RFC 2889 test) is close to 10 Gbps for most packet sizes, on a test platform of an
Intel Xeon E5-2660 (8 cores, 16 threads) with an Intel X520-DA2 NIC and 64 GB of DDR3-1600 memory.
For more information, please see the pictures I took at the seminar.

These are its main features:
  • Best OpenFlow 1.3 compliant software-based switch
    • Multiple tables and group tables support
    • MPLS, PBB, and QinQ support
  • ONF standard specification support
    • OpenFlow Switch Specification 1.3.3
    • OF-CONFIG 1.1
  • Multiple data-plane configuration
    • High performance software data-plane on Intel x86 bare-metal server
      • Intel DPDK, Raw socket
    • Bare metal switch
  • Various management/configuration interfaces
    • OF-CONFIG, OVSDB, CLI
    • SNMP, Ethernet-OAM functionality

To install the Lagopus switch, you can refer to the following URL, which provides a general installation guide:
https://github.com/lagopus/lagopus/blob/master/QUICKSTART.md


About my Lagopus environment, here is my configuration file, /usr/local/etc/lagopus/lagopus.conf (open it with sudo vi to edit):
 interface {  
   ethernet {  
     eth0;  
     eth1;  
     eth2;  
   }  
 }  
 bridge-domains {  
   br0 {  
     port {  
       eth0;  
       eth1;  
       eth2;  
     }  
     controller {  
       127.0.0.1;  
     }  
   }  
 }  


In addition, I wrote two shell scripts to quickly build and set up DPDK. I use DPDK 1.6.0, so both scripts are based on that version.

compile_dpdk.sh
 #!/bin/sh  
 export RTE_SDK=/home/myname/git/DPDK-1.6.0  
 export RTE_TARGET="x86_64-default-linuxapp-gcc"  
 make config T=${RTE_TARGET}  
 make install T=${RTE_TARGET}  
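
Note that compile_dpdk.sh invokes make in the current directory, so run it from inside the DPDK tree, for example (assuming you saved the script there):

 cd /home/myname/git/DPDK-1.6.0
 sh ./compile_dpdk.sh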

install_dpdk.sh
 #!/bin/sh  
 export RTE_SDK=/home/myname/git/DPDK-1.6.0
 export RTE_TARGET="x86_64-default-linuxapp-gcc"  
 DPDK_NIC_PCIS="0000:00:08.0 0000:00:09.0 0000:00:0a.0"  
 HUGEPAGE_NOPAGES="1024"  
 set_numa_pages()  
 {  
     for d in /sys/devices/system/node/node? ; do  
         sudo sh -c "echo ${HUGEPAGE_NOPAGES} > $d/hugepages/hugepages-2048kB/nr_hugepages"  
     done  
 }  
 set_no_numa_pages()  
 {  
     sudo sh -c "echo ${HUGEPAGE_NOPAGES} > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"  
 }  
 # install module  
 sudo modprobe uio  
 sudo insmod ${RTE_SDK}/${RTE_TARGET}/kmod/igb_uio.ko  
 sudo insmod ${RTE_SDK}/${RTE_TARGET}/kmod/rte_kni.ko  
 # unbind the NICs from their kernel driver and bind igb_uio for DPDK  
 sudo ${RTE_SDK}/tools/pci_unbind.py --bind=igb_uio ${DPDK_NIC_PCIS}  
 sudo ${RTE_SDK}/tools/pci_unbind.py --status  
 # mount hugetlbfs  
 echo "Reserving ${HUGEPAGE_NOPAGES} hugepages of 2MB each"  
 NCPUS=$(find /sys/devices/system/node/node? -maxdepth 0 -type d | wc -l)  
 if [ ${NCPUS} -gt 1 ] ; then  
     set_numa_pages  
 else  
     set_no_numa_pages  
 fi  
 echo "Creating /mnt/huge and mounting as hugetlbfs"  
 sudo mkdir -p /mnt/huge  
 grep -s '/mnt/huge' /proc/mounts > /dev/null  
 if [ $? -ne 0 ] ; then  
     sudo mount -t hugetlbfs nodev /mnt/huge  
 fi  
 unset RTE_SDK  
 unset RTE_TARGET  
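
After running install_dpdk.sh, a few quick checks like these (just a sketch) confirm that the hugepages, kernel modules, and NIC bindings are in place:

 # hugepages reserved?
 grep Huge /proc/meminfo
 # hugetlbfs mounted?
 mount | grep /mnt/huge
 # modules loaded?
 lsmod | grep -E 'uio|igb_uio|rte_kni'
 # NICs bound to igb_uio?
 sudo /home/myname/git/DPDK-1.6.0/tools/pci_unbind.py --status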

Here is one thing that needs to be noticed:
the variable DPDK_NIC_PCIS contains the PCI bus info of my Linux eth1, eth2, and eth3:
DPDK_NIC_PCIS="0000:00:08.0 0000:00:09.0 0000:00:0a.0"
You have to change these values to match your own NICs; run ethtool to see each interface's bus info.

So, we need to use the command "ethtool" to find out a NIC's bus-info, as follows:
# ethtool -i eth4
driver: igb
version: 5.0.5-k
firmware-version: 3.11, 0x8000046e
bus-info: 0000:02:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no
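
If you have several NICs, a small loop like this (a sketch; it assumes interfaces named ethN) prints every interface's bus-info in one shot:

 #!/bin/sh
 # print the PCI bus-info of every ethN interface
 for dev in /sys/class/net/eth*; do
     iface=$(basename "$dev")
     printf '%s: ' "$iface"
     ethtool -i "$iface" | awk '/^bus-info:/ {print $2}'
 done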


After executing the two shell scripts, we can start the Lagopus switch like this:
sudo lagopus -d -- -c3 -n1 -- -p3
Following DPDK conventions, the middle option group goes to the DPDK EAL: -c3 is the coremask (cores 0 and 1) and -n1 the number of memory channels; -p3 in the last group is the data-plane port mask.


Monday, August 4, 2014

[Indigo] The architecture of Indigo 2.0

After checking the source code of Indigo 2.0, OF-DPA (CDP), and IVS on GitHub, I drew a simple architecture diagram to show the idea of the hardware abstraction layer (HAL). I think the most important part is that Big Switch uses this HAL concept to stay hardware-agnostic and adopt the different forwarding-engine and port-management implementations of different hardware and platforms.


[RYU] Try the RYU Web GUI with Mininet

This post shows what the RYU web GUI looks like. My environment consists of two virtual machines running on VirtualBox. I skip the installation guide for RYU and the GUI because there are already documents explaining how to do it. If interested, please check these:
http://blog.linton.tw/posts/2014/02/15/note-install-ryu-36-sdn-framework
http://blog.linton.tw/posts/2014/02/11/note-how-to-set-up-ryu-controller-with-gui-component

P.S.: You may first need to upgrade pip:
pip install --upgrade pip (or pip install -U pip)

First, on my RYU server, I executed this command:

  • > ryu-manager --verbose --observe-links ryu.topology.switches ryu.app.rest_topology ryu.app.ofctl_rest ryu.app.simple_switch

P.S.: Currently the GUI doesn't support OpenFlow 1.3.

Second, open another console and execute this command under your RYU directory. It is a middleware between the web front-end and the controller.

  • > ./ryu/gui/controller.py


For Mininet, I just downloaded the Mininet virtual machine and used it directly. The following command quickly generates a 3-tier tree topology:

>  sudo mn --controller=remote,ip=10.3.207.81 --topo tree,3
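
Once Mininet is connected, you can sanity-check that the switches registered with the controller through the ofctl_rest REST API (a quick sketch; it assumes the default REST port 8080 on the RYU server):

 # list the datapath IDs known to the controller
 curl http://127.0.0.1:8080/stats/switches
 # dump the flow table of one switch, e.g. DPID 1
 curl http://127.0.0.1:8080/stats/flow/1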


Back on the RYU server, open the browser at http://127.0.0.1:8000/ to see the GUI.
Monday, June 23, 2014

[OF-DPA] Glance at the source code of OF-DPA on GitHub

I just took a quick glance at the code of OF-DPA and drew a skeleton of it. Actually, it only covers the integration of Indigo with the OF-DPA API (the blue part) and locks in to the SDK and switch device. There is no OF-DPA/SDK source code (the red part). As the diagram shows, the OEM/ODM Development Package contains the full source code, distributed under a Broadcom SLA.



Monday, May 19, 2014

[Ruby] Cross-Compile Ruby to MIPS platform

I spent several days cross-compiling Ruby for the MIPS platform and encountered some problems that bothered me for a while. Happily, I worked through all of them and got everything done. Awesome!
To help you avoid these kinds of problems, here are the steps and scripts:

  • Cross Compile OpenSSL

>./config --prefix=$PWD/build --cross-compile-prefix=/home/liudanny/git/NL/toolchains_bin/mipscross/linux/bin/mips64-nlm-linux-elf32btsmip- shared no-asm

After configuring, check that the Makefile contains "PLATFORM=mips" and does not contain "-m64".

>make 2>&1 | tee make.out; make install
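
Before moving on, it is worth confirming that the build really targets MIPS; for example (a sketch; the exact library names depend on the OpenSSL version and prefix layout):

 # the output should mention MIPS rather than x86-64
 file build/lib/libcrypto.so* build/lib/libssl.so*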

  • Cross Compile zlib
>CC=your_cross_compile_gcc ./configure --prefix=$PWD/build
>make 2>&1 | tee make.out; make install


  • Cross Compile Berkeley DB

>CC=your_cross_compile_gcc ../dist/configure --prefix=$PWD/build
>make 2>&1 | tee make.out; make install


  • Cross Compile OpenLDAP

>CC=your_cross_compile_gcc \
  LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/liudanny/git/db-6.0.30.NC/build_unix/build/lib:/home/liudanny/git/openssl-1.0.1e/build/ssl/lib \
  LDFLAGS="-L/home/liudanny/git/db-6.0.30.NC/build_unix/build/lib -L/home/liudanny/git/openssl-1.0.1e/build/ssl/lib" \
  CPPFLAGS="-I/usr/local/include -I/home/liudanny/git/db-6.0.30.NC/build_unix/build/include -I/home/liudanny/git/openssl-1.0.1e/build/ssl/include" \
  ./configure --prefix=$PWD/mybuild/ --enable-bdb --enable-crypt \
  --host=mips64-nlm-linux --with-yielding_select=yes --with-tls=openssl

Before calling make, comment out this line in include/portable.h:
// #define NEED_MEMCMP_REPLACEMENT 1

Then modify build/shtool to avoid a stripping error.
Go to line 980 and find:

if [ ".$opt_s" = .yes ]; then
    if [ ".$opt_t" = .yes ]; then
        echo "strip $dsttmp" 1>&2
    fi
    strip $dsttmp || shtool_exit $?
fi

Change it to call the cross toolchain's strip instead of the native one:

if [ ".$opt_s" = .yes ]; then
    if [ ".$opt_t" = .yes ]; then
        echo "mips64-nlm-linux-elf32btsmip-strip $dsttmp" 1>&2
    fi
    mips64-nlm-linux-elf32btsmip-strip $dsttmp || shtool_exit $?
fi

>make depend; make; make install


  • Cross Compile Ruby 1.8.7

Before cross-compiling Ruby, it must first be compiled and built natively on the build server, because cross-compiling uses that native Ruby to generate the Makefile and to run other important steps.
Because Ruby can have many extension libraries, we need to modify Setup.emx to enable the ext libs we need, and also copy the needed ruby-ext-libs into the ext/ directory. As the picture shows, we add and enable the "shadow", "openssl", "socket", "zlib", and "ldap" ext libs as follows:

>export ac_cv_func_getpgrp_void=yes
>export ac_cv_func_setpgrp_void=yes
>export PATH=/home/liudanny/ruby-1.8.7-p352/build/bin:$PATH
>CC=your_cross_compile_gcc ./configure --prefix=$PWD/build/ \
  --host=mips64-nlm-linux \
  --with-openssl-dir=/home/liudanny/git/openssl-1.0.1e/build \
  --with-zlib-dir=/home/liudanny/git/zlib-1.2.5/build \
  --with-ldap-dir=/home/liudanny/git/openldap-2.4.39/mybuild \
  --disable-ipv6 2>&1 | tee config.out
>make 2>&1 | tee make.out; make install
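
For convenience, the toolchain settings shared by all the steps above can live in one file that you source before each build (a sketch; the paths are the ones used in this post):

 #!/bin/sh
 # cross-env.sh - source this before each cross-compile step
 TOOLCHAIN_BIN=/home/liudanny/git/NL/toolchains_bin/mipscross/linux/bin
 CROSS_PREFIX=${TOOLCHAIN_BIN}/mips64-nlm-linux-elf32btsmip-
 export CC=${CROSS_PREFIX}gcc
 export STRIP=${CROSS_PREFIX}strip
 export AR=${CROSS_PREFIX}ar

With this sourced (. ./cross-env.sh), the your_cross_compile_gcc placeholder in the commands above is simply $CC.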

Friday, March 14, 2014

[Cross-Compile] What's the difference of `./configure` option `--build`, `--host` and `--target`?

When using ./configure, especially for cross-compiling, I was confused about the options --build and --host, so the following is what I found while searching:

# Some remarks on specifying --host=<host>, --target=<target> and --build=<build>,
# kindly provided by Keith Marshall:
# 1) build
# this is *always* the platform on which you are running the build
# process; since we are building on Linux, this is unequivocally going to
# specify `linux', with the canonical form being `i686-pc-linux-gnu'.
#
# 2) host
# this is a tricky one: it specifies the platform on which whatever we
# are building is going to be run; for the cross-compiler itself, that's
# also `i686-pc-linux-gnu', but when we get to the stage of building the
# runtime support libraries to go with that cross-compiler, they must
# contain code which will run on the `i686-pc-mingw32' host, so the `host'
# specification should change to this, for the `runtime' and `w32api'
# stages of the build.
#
# 3) target
# this is probably the one which causes the most confusion; it is only
# relevant when building a cross-compiler, and it specifies where the code
# which is built by that cross-compiler itself will ultimately run; it
# should not need to be specified at all, for the `runtime' or `w32api',
# since these are already targetted to `i686-pc-mingw32' by a correct
# `host' specification.

I found an answer after posting this question. Still posting it here in case it helps someone else in the future:
http://jingfenghanmax.blogspot.in/2010/09/configure-with-host-target-and-build.html

As per this blog, in my case:
build will be i686-pc-linux-gnu (my PC),
host will be mipsel-linux (the platform I am going to run my code on),
and target would only be used if I were building a cross-compiling toolchain.
Since I am not building a toolchain, I didn't have to specify target.
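
Putting the three options together for the MIPS case above, a typical invocation looks like this (a sketch; the triplets are examples and assume mipsel-linux-gcc is on your PATH):

 # build = the machine we compile on; host = the machine the binaries run on.
 # --target is omitted because we are not building a compiler toolchain.
 ./configure --build=i686-pc-linux-gnu --host=mipsel-linux --prefix=$PWD/build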

On a related note: you will have to cross-compile libusb and then copy the library and header files to a location where your toolchain can locate them. In the case of CodeSourcery, you can put them in cs_root/arm-none-linux-gnueabi/lib and cs_root/arm-none-linux-gnueabi/include, for example. You will also need the library on the target's root filesystem unless you link it statically; please mind the licensing implications if you do, though.

Wednesday, March 12, 2014

[NETCONF] The summary of NETCONF Content

The following content summarizes the NETCONF documentation site:
http://www.netconfcentral.org/netconf_docs

Session Initiation For Clients

<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
  </capabilities>
</hello>]]>]]>

Protocol Capabilities

<capability>
    urn:ietf:params:netconf:capability:writable-running:1.0
</capability>

 <capability>urn:ietf:params:netconf:base:1.0</capability>

Standard Capabilities

:candidate
:confirmed-commit
:interleave
:notification
:partial-lock
:rollback-on-error
:startup
:url
:validate
:writable-running
:xpath

Configuration Databases

<running/>
<candidate/>
<startup/>

Protocol Operations

Once a NETCONF session is established, the client knows which capabilities the server supports. The client then can send RPC method requests and receive RPC replies from the server. The server's request queue is serialized, so requests will be processed in the order received.

Operation           | Usage                | Description
--------------------|----------------------|------------------------------------------------------------
close-session       | :base                | Terminate this session
commit              | :base AND :candidate | Commit the contents of the <candidate/> configuration database to the <running/> configuration database
copy-config         | :base                | Copy a configuration database
create-subscription | :notification        | Create a NETCONF notification subscription
delete-config       | :base                | Delete a configuration database
discard-changes     | :base AND :candidate | Clear all changes from the <candidate/> configuration database and make it match the <running/> configuration database
edit-config         | :base                | Modify a configuration database
get                 | :base                | Retrieve data from the running configuration database and/or device statistics
get-config          | :base                | Retrieve data from the running configuration database
kill-session        | :base                | Terminate another session
lock                | :base                | Lock a configuration database so only my session can write
unlock              | :base                | Unlock a configuration database so any session can write
validate            | :base AND :validate  | Validate the entire contents of a configuration database
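
As a quick illustration of the operations above, here is one way to issue a get-config by hand over the standard NETCONF SSH subsystem (a sketch; the host, user, and credentials are hypothetical, and 830 is the standard NETCONF port):

# send our <hello> and a <get-config> RPC, framed with ]]>]]> as required
ssh -p 830 admin@192.0.2.1 -s netconf <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
  </capabilities>
</hello>
]]>]]>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source><running/></source>
  </get-config>
</rpc>
]]>]]>
EOF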