Sunday, August 31, 2014

[Lagopus] Install Lagopus software switch on Ubuntu 12.04

 I attended the NTT (Ryu/Lagopus) seminar at NCTU, Taiwan in August and noticed that Lagopus (an SDN/OpenFlow software switch) is amazing.
Its L2 switch performance with 10GbE x 2 (RFC 2889 test) is close to 10 Gbps for most packet sizes, and the test platform is an
Intel Xeon E5-2660 (8 cores, 16 threads) with an Intel X520-DA2 NIC and 64 GB of DDR3-1600 RAM.
For more information, please see the attached pictures that I took at the seminar.

The features are as follows:
  • Best OpenFlow 1.3 compliant software-based switch
    • Multiple flow tables and group tables support
    • MPLS, PBB, and QinQ support
  • ONF standard specification support
    • OpenFlow Switch Specification 1.3.3
    • OF-CONFIG 1.1
  • Multiple data-plane configuration
    • High performance software data-plane on Intel x86 bare-metal server
      • Intel DPDK, Raw socket
    • Bare metal switch
  • Various management/configuration interfaces
    • OF-CONFIG, OVSDB, CLI
    • SNMP, Ethernet-OAM functionality

To install the Lagopus switch, you can refer to the following URL, which provides a general installation guide:
https://github.com/lagopus/lagopus/blob/master/QUICKSTART.md
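
In short, the flow in QUICKSTART.md is: install the build dependencies, build DPDK (my scripts for that are below), then build Lagopus against the DPDK tree. Here is a rough sketch of the Lagopus part, assuming the autotools flow from the quick start guide (the --with-dpdk-dir flag and the paths come from that guide; check it for the exact dependency packages):

 # install the build dependencies first (see QUICKSTART.md for the exact list)
 sudo apt-get install build-essential
 # build Lagopus against an already-built DPDK tree
 cd ~/git/lagopus
 ./configure --with-dpdk-dir=/home/myname/git/DPDK-1.6.0
 make
 sudo make install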


About my Lagopus environment: edit the configuration with sudo vi /usr/local/etc/lagopus/lagopus.conf:
 interface {  
   ethernet {  
     eth0;  
     eth1;  
     eth2;  
   }  
 }  
 bridge-domains {  
   br0 {  
     port {  
       eth0;  
       eth1;  
       eth2;  
     }  
     controller {  
       127.0.0.1;  
     }  
   }  
 }  


Below I put two shell scripts for quickly building and setting up DPDK. I use DPDK-1.6.0, so the scripts are based on that version.

compile_dpdk.sh
 #!/bin/sh
 # Build DPDK 1.6.0 for the x86_64 gcc target.
 export RTE_SDK=/home/myname/git/DPDK-1.6.0
 export RTE_TARGET="x86_64-default-linuxapp-gcc"
 cd ${RTE_SDK}
 make config T=${RTE_TARGET}
 make install T=${RTE_TARGET}

install_dpdk.sh
 #!/bin/sh  
 export RTE_SDK=/home/myname/git/DPDK-1.6.0
 export RTE_TARGET="x86_64-default-linuxapp-gcc"  
 DPDK_NIC_PCIS="0000:00:08.0 0000:00:09.0 0000:00:0a.0"  
 HUGEPAGE_NOPAGES="1024"  
 set_numa_pages()  
 {  
     for d in /sys/devices/system/node/node? ; do  
         sudo sh -c "echo ${HUGEPAGE_NOPAGES} > $d/hugepages/hugepages-2048kB/nr_hugepages"  
     done  
 }  
 set_no_numa_pages()  
 {  
     sudo sh -c "echo ${HUGEPAGE_NOPAGES} > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"  
 }  
 # load the required kernel modules
 sudo modprobe uio  
 sudo insmod ${RTE_SDK}/${RTE_TARGET}/kmod/igb_uio.ko  
 sudo insmod ${RTE_SDK}/${RTE_TARGET}/kmod/rte_kni.ko  
 # unbind the NICs from their kernel driver and bind them to igb_uio for DPDK
 sudo ${RTE_SDK}/tools/pci_unbind.py --bind=igb_uio ${DPDK_NIC_PCIS}  
 sudo ${RTE_SDK}/tools/pci_unbind.py --status  
 # reserve hugepages and mount hugetlbfs
 echo "Setting nr_hugepages=${HUGEPAGE_NOPAGES} (2MB pages)"
 NUMA_NODES=$(find /sys/devices/system/node/node? -maxdepth 0 -type d | wc -l)  # number of NUMA nodes
 if [ ${NUMA_NODES} -gt 1 ] ; then
     set_numa_pages  
 else  
     set_no_numa_pages  
 fi  
 echo "Creating /mnt/huge and mounting as hugetlbfs"  
 sudo mkdir -p /mnt/huge  
 grep -s '/mnt/huge' /proc/mounts > /dev/null  
 if [ $? -ne 0 ] ; then  
     sudo mount -t hugetlbfs nodev /mnt/huge  
 fi  
 unset RTE_SDK  
 unset RTE_TARGET  
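
After running install_dpdk.sh, it is worth confirming that the hugepages were actually reserved and mounted before starting the switch:

 # verify the hugepage reservation and the hugetlbfs mount
 grep Huge /proc/meminfo
 mount | grep huge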

One thing should be noted here:
the variable DPDK_NIC_PCIS holds the PCI bus info of my Linux eth1, eth2, and eth3:
DPDK_NIC_PCIS="0000:00:08.0 0000:00:09.0 0000:00:0a.0"
You have to change these values to match your own NICs; run ethtool to see each interface's bus info.

So, we use the command "ethtool" to find out a NIC's bus-info as follows:
# ethtool -i eth4
driver: igb
version: 5.0.5-k
firmware-version: 3.11, 0x8000046e
bus-info: 0000:02:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no
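
If you have several interfaces, a small loop prints the bus-info for all of them at once (a sketch; it assumes eth-style interface names):

 # print the PCI bus-info of every ethN interface
 for dev in /sys/class/net/eth*; do
     name=$(basename "$dev")
     echo "$name: $(sudo ethtool -i "$name" | awk '/bus-info/ {print $2}')"
 done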


After executing the two shell scripts, we can start the Lagopus switch like this:
sudo lagopus -d -- -c3 -n1 -- -p3

Here -d runs the switch in debug mode, the first group of options after -- are the DPDK EAL options (-c3: core mask 0x3, i.e. two cores; -n1: one memory channel), and -p3 is the port mask that enables the first two DPDK ports.
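
Because lagopus.conf points the controller at 127.0.0.1, an OpenFlow 1.3 controller should be listening locally when the switch comes up. For example, with Ryu installed (my sketch; any OpenFlow 1.3 controller works):

 # in another terminal: a minimal OpenFlow 1.3 learning-switch controller
 ryu-manager --verbose ryu.app.simple_switch_13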


Monday, August 4, 2014

[Indigo] The architecture of Indigo 2.0

After studying the source code of Indigo 2.0, OF-DPA (CDP), and IVS on GitHub, I drew a simple architecture diagram to show the idea of the hardware abstraction layer (HAL). I think the most important part is that Big Switch uses this HAL concept to keep the switch software hardware-agnostic, so it can adopt different forwarding engine / port management implementations on different hardware or platforms.


[RYU] Try the RYU Web GUI with Mininet

This post shows what the RYU Web GUI looks like. My environment consists of two virtual machines running on VirtualBox. I skip the installation steps for RYU and the GUI because there are already documents describing how to do it. If interested, please check these:
http://blog.linton.tw/posts/2014/02/15/note-install-ryu-36-sdn-framework
http://blog.linton.tw/posts/2014/02/11/note-how-to-set-up-ryu-controller-with-gui-component

P.S.: You may need to upgrade pip first:
pip install --upgrade pip (or: pip install -U pip)

First, I started my RYU server and executed this command:

  • > ryu-manager --verbose --observe-links ryu.topology.switches ryu.app.rest_topology ryu.app.ofctl_rest ryu.app.simple_switch

P.S: Currently the GUI doesn't support OF1.3.

Second, open another console and execute this command under your ryu directory. It is a middleware between the web front-end and the controller.

  • > ./ryu/gui/controller.py


For Mininet, I just downloaded the Mininet virtual machine image and used it directly. The following command quickly generates a 3-tier tree network topology.

>  sudo mn --controller=remote,ip=10.3.207.81 --topo tree,3
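
Once the topology is up, generate some traffic from the Mininet CLI so the GUI has something to display:

 mininet> net      # list the nodes and links of the tree topology
 mininet> pingall  # all-pairs ping across the 8 hosts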


Finally, back on the RYU server, open a browser at http://127.0.0.1:8000/ to see the GUI.
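
If the page stays empty, you can first check that the REST applications loaded earlier are answering (rest_topology and ofctl_rest listen on port 8080 by default):

 # datapaths known to ofctl_rest / rest_topology
 curl http://127.0.0.1:8080/stats/switches
 curl http://127.0.0.1:8080/v1.0/topology/switches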