
Thursday, June 2, 2016

[Neutron and SDN] Warm-up for understanding the integration of Neutron and SDN

I recently spent some time studying the integration of Neutron and SDN, and also took a look at how ODL and ONOS integrate with OpenStack Neutron. The following content and pictures (with URL links) are excerpted from a variety of resources on the internet. Some sections include my own comments, marked with P.S. I hope this gives you a clear picture of Neutron and SDN controllers.

Neutron and SDN
P.S: This picture gives an overall architecture about Neutron and SDN controller that are integrated together.


When an OpenStack user performs any networking-related operation (create/update/delete/read on network, subnet and port resources), the typical flow is as follows:
  1. The user operation on the OpenStack dashboard (Horizon) is translated into a corresponding networking API call and sent to the Neutron server.
  2. The Neutron server receives the request and passes it to the configured plugin (assume ML2 is configured with an ODL mechanism driver and a VXLAN type driver).
  3. The Neutron server/plugin makes the appropriate changes to the DB.
  4. The plugin invokes the corresponding REST API on the SDN controller (assume ODL).
  5. ODL, upon receiving the request, may perform the necessary changes to the network elements using any of its southbound plugins/protocols, such as OpenFlow, OVSDB or OF-Config.
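
To make step 4 more concrete, below is a minimal sketch (not the actual networking-odl driver code) of the kind of REST call the plugin issues. The endpoint path, port and credentials are assumptions based on ODL's neutron northbound API, so adjust them for your deployment.

odl_rest_sketch.py (hypothetical)
 # Sketch of step 4: mirroring a newly created Neutron network into ODL.
 # The URL, port and credentials below are assumptions; adjust them.
 import json
 import requests

 ODL_URL = "http://odl-controller:8080/controller/nb/v2/neutron/networks"

 def notify_odl_network_created(network):
     # "network" is the Neutron network dict committed to the DB in step 3.
     payload = {"network": {
         "id": network["id"],
         "name": network["name"],
         "tenant_id": network["tenant_id"],
         "admin_state_up": network["admin_state_up"],
     }}
     resp = requests.post(ODL_URL, data=json.dumps(payload),
                          headers={"Content-Type": "application/json"},
                          auth=("admin", "admin"))
     resp.raise_for_status()

The real driver also handles update/delete and failure recovery, but the shape of the exchange is the same: Neutron commits to its DB first (step 3), then pushes the resource to the controller over REST (step 4).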




We should note that different integration options exist between the SDN controller and OpenStack; for example:
  1. one can completely eliminate RPC communication between the Neutron server and the agents on the compute nodes, with the SDN controller being the sole entity managing the network,
  2. or the SDN controller manages only the physical switches, while the virtual switches are managed directly from the Neutron server.


The two models of Neutron's underlying network are illustrated below.
In the first model a), Neutron itself acts as the SDN controller, and the plugin-to-agent communication mechanism (e.g., RPC) serves as a simple southbound protocol. In the second model, Neutron acts as an SDN application: it passes the service requirements to an SDN controller, which then remotely controls the network devices through a wide variety of southbound protocols.
In the second model b), Neutron can also be viewed as a super controller or a network orchestrator that centrally dispatches network services in OpenStack.

Plugin Agent

P.S: This picture shows the process flow between the agents, the API server and OVS when creating a VM.
http://www.innervoice.in/blogs/wp-content/uploads/2015/03/Plugin-Agents.jpg


About ML2



Neutron plugin architecture


How OpenDaylight integrates with Neutron



How ONOS integrates with Neutron

SONA Architecture



The onos-networking plugin simply forwards REST calls from Nova to ONOS; the OpenstackSwitching app receives the API calls and returns OK. The main functions that implement the virtual networks are handled in the OpenstackSwitching application.

OpenstackSwitching (App on ONOS)

Neutron + SDN Controller (ONOS)  

P.S: ONOS provides its own ONOSMechanismDriver instead of the OpenvswitchMechanismDriver.


Reference:
Here is an article about writing a dummy mechanism driver that records variables and data in logs:
http://blog.csdn.net/yanheven1/article/details/47357537
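
As a warm-up exercise in the spirit of that article, here is a minimal, hypothetical sketch of such a logging-only ML2 mechanism driver. The class name is my own, and it assumes the ml2 driver_api module shipped with Neutron in this timeframe:

logging_mech_driver.py (hypothetical)
 # A do-nothing ML2 mechanism driver that only logs what Neutron passes in.
 from oslo_log import log as logging
 from neutron.plugins.ml2 import driver_api as api

 LOG = logging.getLogger(__name__)

 class LoggingMechanismDriver(api.MechanismDriver):
     def initialize(self):
         LOG.info("LoggingMechanismDriver initialized")

     def create_network_postcommit(self, context):
         # context.current is the resource dict Neutron just committed.
         LOG.info("create_network_postcommit: %s", context.current)

     def create_port_postcommit(self, context):
         LOG.info("create_port_postcommit: %s", context.current)

A real driver such as ONOSMechanismDriver is registered as a neutron.ml2.mechanism_drivers entry point and enabled in ml2_conf.ini; its postcommit hooks call out to the controller instead of just logging.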

Sunday, August 31, 2014

[Lagopus] Install Lagopus software switch on Ubuntu 12.04

I attended the NTT (Ryu/Lagopus) seminar in August at NCTU, Taiwan, and noticed that Lagopus (an SDN/OpenFlow software switch) is amazing.
Its L2 switch performance with 10GbE x 2 (RFC2889 test) is near 10Gbps for most packet sizes; the test platform was an
Intel Xeon E5-2660 (8 cores, 16 threads) with an Intel X520-DA2 NIC and 64GB of DDR3-1600 memory.
For more information, please see the attached pictures that I took at the seminar.

Its features are as follows:
  • Best OpenFlow 1.3 compliant software-based switch
    • Multiple flow tables and group tables support
    • MPLS, PBB and QinQ support
  • ONF standard specification support
    • OpenFlow Switch Specification 1.3.3
    • OF-CONFIG 1.1
  • Multiple data-plane configuration
    • High performance software data-plane on Intel x86 bare-metal server
      • Intel DPDK, Raw socket
    • Bare metal switch
  • Various management/configuration interfaces
    • OF-CONFIG, OVSDB, CLI
    • SNMP, Ethernet-OAM functionality

To install the Lagopus switch, you can refer to the following URL, which provides a general installation guide:
https://github.com/lagopus/lagopus/blob/master/QUICKSTART.md


About my Lagopus environment (sudo vi /usr/local/etc/lagopus/lagopus.conf):
 interface {  
   ethernet {  
     eth0;  
     eth1;  
     eth2;  
   }  
 }  
 bridge-domains {  
   br0 {  
     port {  
       eth0;  
       eth1;  
       eth2;  
     }  
     controller {  
       127.0.0.1;  
     }  
   }  
 }  


In addition, here are two shell scripts for quickly installing and setting up DPDK. I use DPDK-1.6.0, so both scripts are based on that version.

compile_dpdk.sh
 #!/bin/sh  
 export RTE_SDK=/home/myname/git/DPDK-1.6.0  
 export RTE_TARGET="x86_64-default-linuxapp-gcc"  
 make config T=${RTE_TARGET}  
 make install T=${RTE_TARGET}  

install_dpdk.sh
 #!/bin/sh  
 export RTE_SDK=/home/myname/git/DPDK-1.6.0
 export RTE_TARGET="x86_64-default-linuxapp-gcc"  
 DPDK_NIC_PCIS="0000:00:08.0 0000:00:09.0 0000:00:0a.0"  
 HUGEPAGE_NOPAGES="1024"  
 set_numa_pages()  
 {  
     for d in /sys/devices/system/node/node? ; do  
         sudo sh -c "echo ${HUGEPAGE_NOPAGES} > $d/hugepages/hugepages-2048kB/nr_hugepages"  
     done  
 }  
 set_no_numa_pages()  
 {  
     sudo sh -c "echo ${HUGEPAGE_NOPAGES} > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"  
 }  
 # install module  
 sudo modprobe uio  
 sudo insmod ${RTE_SDK}/${RTE_TARGET}/kmod/igb_uio.ko  
 sudo insmod ${RTE_SDK}/${RTE_TARGET}/kmod/rte_kni.ko  
 # unbind e1000 NICs from igb and bind igb_uio for DPDK  
 sudo ${RTE_SDK}/tools/pci_unbind.py --bind=igb_uio ${DPDK_NIC_PCIS}  
 sudo ${RTE_SDK}/tools/pci_unbind.py --status  
 # mount hugetlbfs  
 echo "Set hugepagesize=${HUGEPAGE_NOPAGES} of 2MB page"  
 NCPUS=$(find /sys/devices/system/node/node? -maxdepth 0 -type d | wc -l)  
 if [ ${NCPUS} -gt 1 ] ; then  
     set_numa_pages  
 else  
     set_no_numa_pages  
 fi  
 echo "Creating /mnt/huge and mounting as hugetlbfs"  
 sudo mkdir -p /mnt/huge  
 grep -s '/mnt/huge' /proc/mounts > /dev/null  
 if [ $? -ne 0 ] ; then  
     sudo mount -t hugetlbfs nodev /mnt/huge  
 fi  
 unset RTE_SDK  
 unset RTE_TARGET  

Here is one thing that needs to be noted.
The variable DPDK_NIC_PCIS holds the bus info of my Linux eth1, eth2 and eth3 interfaces:
DPDK_NIC_PCIS="0000:00:08.0 0000:00:09.0 0000:00:0a.0"
You have to change these values to match your own NICs.

To find out a NIC's bus-info, use the command "ethtool" as follows:
# ethtool -i eth4
driver: igb
version: 5.0.5-k
firmware-version: 3.11, 0x8000046e
bus-info: 0000:02:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no
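
Alternatively, the same bus-info can be read from sysfs. Here is a small, hypothetical Python helper (the interface names are just examples):

list_bus_info.py (hypothetical)
 # Print each interface's PCI address by resolving its sysfs device
 # symlink; this matches the bus-info line of "ethtool -i <dev>".
 import os

 for dev in ("eth1", "eth2", "eth3"):  # example interface names
     link = "/sys/class/net/%s/device" % dev
     if os.path.exists(link):
         print("%s: %s" % (dev, os.path.basename(os.path.realpath(link))))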


After executing the two shell scripts, we can start the Lagopus switch like this (the options after the first "--" are DPDK EAL arguments: -c3 is the core mask and -n1 the number of memory channels; -p3 is the port mask; adjust them for your machine):
sudo lagopus -d -- -c3 -n1 -- -p3


Monday, February 24, 2014

[Thoughts] Cumulus Networks and Big Switch Networks

These two companies, Cumulus Networks and Big Switch Networks, are two of my favorite network companies in the world because they have strong technical skills and the creativity to build their networking products. Unfortunately, they walk two different paths: Big Switch Networks is based on OpenFlow, but Cumulus Networks is not. For more details, please see below:

http://www.jedelman.com/1/post/2014/02/big-switch-cumulus-and-openflow.html
http://vimeo.com/87216036

[SDN] SDN Migration Use Cases

This document provides three migration use cases. I think they are very useful for those who work in the networking field and are interested in SDN; they are well worth a look. Here you go:
http://www.businesswire.com/news/home/20140211005653/en/Open-Networking-Foundation-Publishes-Open-SDN-Migration

Friday, April 19, 2013

[SDN] Networking in the cloud: An SDN primer

Ben Cherian, Chief Strategy Officer at Midokura, gave a session talk at the OpenStack Summit 2013 titled "Networking in the cloud: An SDN primer". I didn't attend this summit, but someone has summarized the key points here:
http://blog.scottlowe.org/2013/04/16/openstack-summit-2013-networking-in-the-cloud-an-sdn-primer/

Furthermore, if you want to know more about what MidoNet truly is, you can check out these:
http://bradhedlund.com/2012/10/06/mind-blowing-l2-l4-network-virtualization-by-midokura-midonet/

http://blog.ioshints.info/2012/08/midokuras-midonet-layer-2-4-virtual.html




Monday, January 21, 2013

[SDN] The news about Juniper to build its own software-defined networking stack

Software-Defined Networking (SDN) has been becoming a hotter and hotter topic since last year, and networking people will probably talk about it everywhere this year. VMware's acquisition of Nicira became a catalyst for the related network vendors to think about what strategy and tactics they should adopt to deal with this wave of SDN. It reminds me of this song (Blondie – The Tide Is High):
  The tide is high but I'm holding on ...
   I'm gonna be your number one ...

   ...

In sum, SDN is important, and key to surviving this period in the networking world. Here is an example of Juniper taking action to deliver its own SDN approach. I excerpt some paragraphs as follows:


Quote from the news:
http://www.theregister.co.uk/2013/01/16/juniper_sdn_strategy/

Juniper's approach to SDN, explained Muglia, will break the monolithic software stack inside of switches and routers – for campuses, branches, and service providers and not just for data centers – into four different planes: management, services, control, and forwarding.

This software will run on a virtualization layer called JunosV App Engine – which sounds a lot like a KVM hypervisor container for an x86 server, but Juniper did not say. This virtualization software will ship later in the first quarter of this year.

Central to that SDN stack is Contrail, a startup that was just getting ready to uncloak last month with its SDN wares when Juniper swept in with $176m in cash and snapped it up. Contrail was founded by a team of networking and software platform experts from Google, Juniper, Cisco, and Aruba Networks, and significantly had former Juniper CTO and chief architect Kireeti Kompella as its CTO. Now Kompella is back at Juniper, and is central to its SDN strategy.

Wednesday, December 5, 2012

[Presentation] OpenStack 2012 fall summit observation - Quantum/SDN

The Taiwan OpenStack User Group (TWOSUG) 3rd Meet Up was held on Dec 5, 2012. I gave a presentation in one of the sessions, "OpenStack 2012 fall summit observation - Quantum/SDN". The talk focuses on Quantum and SDN, and the slides are shared on SlideShare as follows:
http://www.slideshare.net/teyenliu/open-stack-2012-fall-summit-observation-with-quantumsdn-15493510

Monday, August 27, 2012

[SDN] HP's SDN solution

http://www.infoworld.com/d/networking/hp-aims-three-part-effort-network-virtualization-200048?source=IFWNLE_nlt_virtualization_2012-08-16

Summary:
HP provides three software tools to deal with network virtualization using SDN:
  • EVI (Ethernet Virtual Interconnect)
    • EVI creates a tunnel through the Layer 3 network by encapsulating the packets traveling between the data centers. Rival Cisco already has software that can do this, called OTV (Overlay Transport Virtualization), but it charges extra for that software.
  • MDC (Multitenant Device Context)
    • It allows for segregating the resources of multiple tenants in a virtualized environment without buying separate switches. This secures the data and applications of one department or cloud service customer from other tenants.
  • VSA
    • StoreVirtual VSA, for virtualizing storage management. The software is based on VSA (virtual storage appliance) technology from LeftHand Networks, which HP acquired in 2008.

Wednesday, June 27, 2012

[OpenFlow] Summary of some current OpenFlow Related Articles


1. http://blog.ioshints.info/2011/11/openflow-enterprise-use-cases.html
This article discusses enterprise use cases for OpenFlow; a minimal controller sketch of the first one appears after the list below.
  • There are four functions you can easily implement with OpenFlow (Tony Bourke wrote about them in more details)
    • packet filters – flow classifier followed by a drop or normal action
    • policy based routing – flow classifier followed by outgoing interface and/or VLAN tag push
    • static routes – flow classifiers using only destination IP prefix
    • NAT – some OpenFlow switches might support source/destination IP address/port rewrites.
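
As promised above, here is a minimal sketch of the first function, a packet filter, written as a Ryu app (Ryu is the NTT controller mentioned elsewhere on this blog); the app name and subnet are my own examples, not code from the article:

packet_filter.py (hypothetical Ryu app)
 # When a switch connects, install an OpenFlow 1.3 flow entry that drops
 # IPv4 traffic sourced from 10.0.0.0/24: a flow classifier followed by
 # a drop (an empty instruction list means matching packets are dropped).
 from ryu.base import app_manager
 from ryu.controller import ofp_event
 from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
 from ryu.ofproto import ofproto_v1_3

 class PacketFilterApp(app_manager.RyuApp):
     OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

     @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
     def switch_features_handler(self, ev):
         dp = ev.msg.datapath
         parser = dp.ofproto_parser
         match = parser.OFPMatch(eth_type=0x0800,
                                 ipv4_src=("10.0.0.0", "255.255.255.0"))
         mod = parser.OFPFlowMod(datapath=dp, priority=100,
                                 match=match, instructions=[])
         dp.send_msg(mod)

Run it with "ryu-manager packet_filter.py". Policy-based routing would look much the same, except the instruction list would write an output port and/or push a VLAN tag instead of being empty.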

2. http://routerjockey.com/2011/11/02/nec-and-programmableflow-switching/
The author of this article gives some information and comments about NEC ProgrammableFlow, as he attended NEC's presentation at Networking Tech Field Day 2.


3. http://blog.ioshints.info/2011/11/openflow-deployment-models.html#of_native
This article describes four different models for OpenFlow deployment that have already emerged:
  • Native OpenFlow
    • The controller performs all control-plane functions, including running control-plane protocols with the outside world. 
    • This model has at least two serious drawbacks even if we ignore the load placed on the controller by periodic control-plane protocols:
      • The switches need IP connectivity to the controller for the OpenFlow control session. 
      • Fast control loops like BFD are hard to implement with a central controller, more so if you want to have very fast response time.
  • Native OpenFlow with extensions
    • A switch controlled entirely by the OpenFlow controller could perform some of the low-level control-plane functions independently.
    • Using OpenFlow extensions or functionality implemented locally on the switch, you destroy the mirage of the “OpenFlow networking nirvana”: smart open-source programmable controllers controlling dumb low-cost switches, busting the “networking = mainframes” model and bringing a Linux-like golden age to every network.
  • Ships in the night
    • Switches have traditional control plane; OpenFlow controller manages only certain ports or VLANs on trunked links. The local control plane (or linecards) can perform the tedious periodic tasks like running LACP, LLDP and BFD, passing only the link status to the OpenFlow controller.
  • Integrated OpenFlow
    •  OpenFlow classifiers and forwarding entries are integrated with the traditional control plane. For example, Juniper’s OpenFlow implementation inserts compatible flow entries (those that contain only destination IP address matching) as ephemeral static routes into RIB (Routing Information Base)
    •  From my perspective, this approach makes most sense: don’t rip-and-replace the existing network with a totally new control plane, but augment the existing well-known mechanisms with functionality that’s currently hard (or impossible) to implement.



Monday, April 2, 2012

Big Switch: the SDN Coffee Talks

Big Switch Networks Launches SDN Coffee Talks to Further Educate the
Broader IT Community on Software-Defined Networking (SDN)

the SDN Coffee Talks
http://www.bigswitch.com/sdn-coffee-talks/