Monday, May 19, 2014

[Ruby] Cross-Compile Ruby to MIPS platform

I have spent several days cross-compiling Ruby for the MIPS platform and ran into several problems that bothered me for a while. Happily, I have worked through all of them and gotten everything done. Awesome!
To help others avoid the same problems, here are the steps and scripts for doing it:

  • Cross Compile OpenSSL

>./config --prefix=$PWD/build --cross-compile-prefix=/home/liudanny/git/NL/toolchains_bin/mipscross/linux/bin/mips64-nlm-linux-elf32btsmip- shared no-asm

After configuring, check that the Makefile contains “PLATFORM=mips” and does not contain the “-m64” flag.
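A quick way to verify this (a sketch using grep):

>grep '^PLATFORM=' Makefile    # expect: PLATFORM=mips
>grep -n -- '-m64' Makefile    # expect: no output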

>make 2>&1 | tee make.out; make install

  • Cross Compile zlib
>CC=your_cross_compile_gcc ./configure --prefix=$PWD/build
>make 2>&1 | tee make.out; make install
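
Here your_cross_compile_gcc stands for the cross toolchain's gcc; with the MIPS toolchain used above it would be, for example:

>CC=/home/liudanny/git/NL/toolchains_bin/mipscross/linux/bin/mips64-nlm-linux-elf32btsmip-gcc ./configure --prefix=$PWD/build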


  • Cross Compile Berkeley DB

>CC=your_cross_compile_gcc ../dist/configure --prefix=$PWD/build
>make 2>&1 | tee make.out; make install


  • Cross Compile OpenLDAP

>CC=your_cross_compile_gcc \
 LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/liudanny/git/db-6.0.30.NC/build_unix/build/lib:/home/liudanny/git/openssl-1.0.1e/build/ssl/lib \
 LDFLAGS="-L/home/liudanny/git/db-6.0.30.NC/build_unix/build/lib -L/home/liudanny/git/openssl-1.0.1e/build/ssl/lib" \
 CPPFLAGS="-I/usr/local/include -I/home/liudanny/git/db-6.0.30.NC/build_unix/build/include -I/home/liudanny/git/openssl-1.0.1e/build/ssl/include" \
 ./configure --prefix=$PWD/mybuild/ --enable-bdb --enable-crypt --host=mips64-nlm-linux --with-yielding_select=yes --with-tls=openssl

Before calling make, comment out the following line in include/portable.h:
// #define NEED_MEMCMP_REPLACEMENT 1

Also modify build/shtool to avoid a stripping error. Go to line 980 and find:

if [ ".$opt_s" = .yes ]; then
    if [ ".$opt_t" = .yes ]; then
        echo "strip $dsttmp" 1>&2
    fi
    strip $dsttmp || shtool_exit $?
fi

Change it to use the cross toolchain's strip (here the MIPS strip from the toolchain above, rather than the host's strip):

if [ ".$opt_s" = .yes ]; then
    if [ ".$opt_t" = .yes ]; then
        echo "mips64-nlm-linux-elf32btsmip-strip $dsttmp" 1>&2
    fi
    mips64-nlm-linux-elf32btsmip-strip $dsttmp || shtool_exit $?
fi

>make depend; make; make install


  • Cross Compile Ruby 1.8.7

Before cross-compiling Ruby, it must first be compiled and built natively on the build server, because the cross-compile uses that native Ruby to generate the Makefile and to run other important steps.
Because Ruby can have many extension libraries, we need to modify Setup.emx to enable the ext libs we need, and also copy the needed ruby-ext-libs into the ext/ directory. Here we add and enable the “shadow”, “openssl”, “socket”, “zlib”, and “ldap” ext libs as follows:

>export ac_cv_func_getpgrp_void=yes
>export ac_cv_func_setpgrp_void=yes
>export PATH=/home/liudanny/ruby-1.8.7-p352/build/bin:$PATH
>CC=your_cross_compile_gcc ./configure --prefix=$PWD/build/ --host=mips64-nlm-linux \
 --with-openssl-dir=/home/liudanny/git/openssl-1.0.1e/build --disable-ipv6 \
 --with-zlib-dir=/home/liudanny/git/zlib-1.2.5/build \
 --with-ldap-dir=/home/liudanny/git/openldap-2.4.39/mybuild 2>&1 | tee config.out
>make 2>&1 | tee make.out; make install
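
As a quick sanity check that the result really targets MIPS (a sketch; the exact output depends on your toolchain):

>file build/bin/ruby
# expect something like: ELF 32-bit MSB executable, MIPS, ...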

Friday, March 14, 2014

[Cross-Compile] What's the difference of `./configure` option `--build`, `--host` and `--target`?

When using ./configure, especially for cross-compiling, I was confused about the options --build and --host, so the following is what I found while searching:

some remarks on specifying --host=<host>, --target=<target> and --build=<build>
# kindly provided by Keith Marshall:
# 1) build
# this is *always* the platform on which you are running the build
# process; since we are building on Linux, this is unequivocally going to
# specify `linux', with the canonical form being `i686-pc-linux-gnu'.
#
# 2) host
# this is a tricky one: it specifies the platform on which whatever we
# are building is going to be run; for the cross-compiler itself, that's
# also `i686-pc-linux-gnu', but when we get to the stage of building the
# runtime support libraries to go with that cross-compiler, they must
# contain code which will run on the `i686-pc-mingw32' host, so the `host'
# specification should change to this, for the `runtime' and `w32api'
# stages of the build.
#
# 3) target
# this is probably the one which causes the most confusion; it is only
# relevant when building a cross-compiler, and it specifies where the code
# which is built by that cross-compiler itself will ultimately run; it
# should not need to be specified at all, for the `runtime' or `w32api',
# since these are already targetted to `i686-pc-mingw32' by a correct
# `host' specification.

And I found an answer after posting this question. Still, I'm posting it here in case it helps someone else in the future.
http://jingfenghanmax.blogspot.in/2010/09/configure-with-host-target-and-build.html

As per that blog, in my case:
build will be i686-pc-linux-gnu (my PC)
host will be mipsel-linux (the platform I am going to run my code on)
target would only be used if I were building a cross-compiling toolchain.
Since I am not building a toolchain, I didn't have to specify target.
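
Putting it together, the configure invocation for this case would look like the following sketch (it assumes a mipsel-linux- prefixed toolchain is on the PATH):

>./configure --build=i686-pc-linux-gnu --host=mipsel-linux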

You will have to cross-compile libusb and then copy the library and header files to a location where your toolchain can locate them. In the case of CodeSourcery, you can put them in cs_root/arm-none-linux-gnueabi/lib and cs_root/arm-none-linux-gnueabi/include, for example. You will also need the library on the target's root filesystem unless you link it statically; please mind the licensing implications if you do, though.
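
A minimal sketch of that flow for an autoconf-based libusb tree (cs_root is the placeholder install root mentioned above):

># cross-compile libusb with the CodeSourcery toolchain
>CC=arm-none-linux-gnueabi-gcc ./configure --host=arm-none-linux-gnueabi --prefix=$PWD/build
>make; make install
># copy headers and libraries to where the toolchain can find them
>cp -a build/include/* cs_root/arm-none-linux-gnueabi/include/
>cp -a build/lib/libusb* cs_root/arm-none-linux-gnueabi/lib/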

Wednesday, March 12, 2014

[NETCONF] The summary of NETCONF Content

The following content summarizes the NETCONF documentation site:
http://www.netconfcentral.org/netconf_docs

Session Initiation For Clients

<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
  </capabilities>
</hello>]]>]]>

Protocol Capabilities

<capability>
    urn:ietf:params:netconf:capability:writable-running:1.0
</capability>

 <capability>urn:ietf:params:netconf:base:1.0</capability>

Standard Capabilities

:candidate
:confirmed-commit
:interleave
:notification
:partial-lock
:rollback-on-error
:startup
:url
:validate
:writable-running
:xpath

Configuration Databases

<running/>
<candidate/>
<startup/>

Protocol Operations

Once a NETCONF session is established, the client knows which capabilities the server supports. The client can then send RPC method requests and receive RPC replies from the server. The server's request queue is serialized, so requests are processed in the order received.

Operation            Usage                 Description
close-session        :base                 Terminate this session
commit               :base AND :candidate  Commit the contents of the <candidate/> configuration database to the <running/> configuration database
copy-config          :base                 Copy a configuration database
create-subscription  :notification         Create a NETCONF notification subscription
delete-config        :base                 Delete a configuration database
discard-changes      :base AND :candidate  Clear all changes from the <candidate/> configuration database and make it match the <running/> configuration database
edit-config          :base                 Modify a configuration database
get                  :base                 Retrieve data from the running configuration database and/or device statistics
get-config           :base                 Retrieve data from the running configuration database
kill-session         :base                 Terminate another session
lock                 :base                 Lock a configuration database so only my session can write
unlock               :base                 Unlock a configuration database so any session can write
validate             :base AND :validate   Validate the entire contents of a configuration database
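
To get a feel for these operations, you can drive a session by hand over SSH (a sketch; it assumes the device exposes the standard netconf SSH subsystem on port 830, and 192.0.2.1 is a placeholder address):

>ssh admin@192.0.2.1 -p 830 -s netconf

Then paste the client hello, followed by an RPC such as get-config, each terminated by the ]]>]]> framing sequence:

<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
  </capabilities>
</hello>]]>]]>
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source><running/></source>
  </get-config>
</rpc>]]>]]>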

Tuesday, February 25, 2014

[OpenStack] The resource list for studying OpenStack Neutron

The following list collects resource documents for those who want to study OpenStack Neutron. It may reduce the time you spend searching. Here you go:

Introduction to OpenStack Quantum
  • To warm up for jumping into the network world of OpenStack
OpenStack Admin Guide Chapter 7. Networking
  • It is very important to take a look at this official document first

  • This contains the Quantum architecture in detail
  • Cisco's plugins are a little complicated because there are several versions that differ in configuration and prerequisites.
Cisco Nexus Plug-in for OpenStack Neutron 
  • Gives a data sheet listing the functionality and features it supports
What's new in Neutron?
  • Gives an overall look at Neutron
  • Another doc: http://www.slideshare.net/kamesh001/whats-new-in-neutron-for-open-stack-havana
  • What components are in Neutron: http://www.slideshare.net/emaganap/open-stack-overview-meetups-oct-2013
  • A very technical explanation of the details of the code and messages
  • We can see how Open vSwitch is implemented in Neutron
  • Introduces the VM booting workflow with Nova and Networking
  • Neutron deployment components
  • Explains why ML2 came about
Modular Layer 2 in OpenStack Neutron
  • New Feature: ToR Switch Control
  • The video: http://www.youtube.com/watch?v=whmcQ-vHams

ML2
  • The Modular Layer 2 (ML2) plugin is a new open-source plugin for Neutron. It is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers. It currently works with the existing Open vSwitch and Linux Bridge L2 agents. The ML2 plugin supports local, flat, VLAN, GRE, and VXLAN network types via type drivers, and different backends via mechanism drivers.
OpenStack-Network-ML2
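
As a rough illustration of that modularity, the type and mechanism drivers are selected in the plugin's ml2_conf.ini (a sketch; the option names are from the Havana-era ML2 plugin, and the values are examples only):

[ml2]
# network types the plugin is able to create
type_drivers = local,flat,vlan,gre,vxlan
# default type for tenant networks
tenant_network_types = vxlan
# backends that actually program the switches
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_vxlan]
vni_ranges = 1001:2000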


OVSDB User Guide
  • OpenDaylight has an OVSDB project that is related to the ML2 plugin
Cisco OpenStack Overview
  • Introduce Cisco Neutron Plugin
  • Cisco Virtual Switch: Nexus 1000v
  • Cisco UCSM

OpenStack RDO Deployment 
  1. http://blog.csdn.net/cloudtech/article/details/19936249
  2. http://blog.csdn.net/cloudtech/article/details/19936425
  3. http://blog.csdn.net/cloudtech/article/details/19936487
  • Arista-related information about its ML2 mechanism driver:
  • https://wiki.openstack.org/wiki/Arista-neutron-ml2-driver
  • https://blueprints.launchpad.net/neutron/+spec/arista-ml2-mechanism-driver
  • http://www.bradreese.com/blog/4-1-2013-2.pdf
  • https://docs.google.com/document/d/1efFprzY69h-vaikRE8hoGQuLzOzVNtyVZZLa2GHbXLI/edit
Developer Guide

How to write a Neutron Plugin

Neutron/LBaaS/PluginDrivers

http://www.slideshare.net/MiguelLavalle/network-virtualization-with-open-stack-quantum


Monday, February 24, 2014

[Thoughts] Cumulus Networks and Big Switch Networks

Cumulus Networks and Big Switch Networks are two of my favorite networking companies in the world, because they have strong technical skills and the creativity to build their networking products. Unfortunately, they have taken two different paths and directions: Big Switch Networks is based on OpenFlow, but Cumulus Networks is not. For more details, please see below:

http://www.jedelman.com/1/post/2014/02/big-switch-cumulus-and-openflow.html
http://vimeo.com/87216036

[SDN] SDN Migration Use Cases

This document provides three migration use cases. I think they are very useful for those who work in the networking field and are interested in SDN; they are worth a look. Here you go:
http://www.businesswire.com/news/home/20140211005653/en/Open-Networking-Foundation-Publishes-Open-SDN-Migration

Tuesday, January 14, 2014

[Thoughts] RESTful control of switches

OpenFlow is already the standard southbound API in the SDN field, but it is only one of many southbound approaches. In an SDN solution, we don't necessarily need to use the OpenFlow protocol to control the data plane. A RESTful API is another way to control or configure switches (the data plane) if they support it. Arista Networks provides Arista eAPI for RESTful control of its switches. For more details, please refer to this article: http://blog.sflow.com/2013/08/restful-control-of-switches.html
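
To get a concrete feel for it, here is a sketch of an Arista eAPI call with curl (strictly speaking, eAPI is JSON-RPC carried over HTTPS; this assumes eAPI is enabled on the switch, and the address and credentials are placeholders):

>curl -s -k -u admin:admin -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "runCmds", "params": {"version": 1, "cmds": ["show version"], "format": "json"}, "id": "1"}' \
  https://192.0.2.1/command-api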

[LXC] How to use LXC?

At first glance, I was amazed by the way it provides lightweight containers in a virtual environment. Combined with shell scripts, we can use it to build a convenient and powerful automation solution for testing all kinds of programs that need multiple virtual machines within a single server host (at least my focus is on automated testing... XD). There are already plenty of articles introducing LXC, so here I only list some commonly used commands for quick reference:

# Install LXC
sudo apt-get install lxc

# Create a Linux Container named base ( -t: template, -n: name )
sudo lxc-create -t ubuntu -n base

# Start the Linux Container ( -d: daemon )
sudo lxc-start -n base -d

# Stop the Linux Container
sudo lxc-stop -n base

# List Linux Containers
lxc-ls --fancy

# Clone the Linux Container
lxc-clone -o base -n newvm1

# Access the container
lxc-console -n newvm1

# Shutdown
lxc-shutdown -n test-container

# Destroy
lxc-destroy -n test-container


LXC can be controlled via Libvirt:
http://blog.scottlowe.org/2013/11/27/linux-containers-via-lxc-and-libvirt/

Exploring LXC Networking:

Autostart
By default, containers will not be started after a reboot, even if they were running prior to the shutdown.
To make a container autostart, you simply need to symlink its config file into the /etc/lxc/auto directory:
ln -s /var/lib/lxc/test-container/config /etc/lxc/auto/test-container.conf
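
On newer LXC releases (1.0 and later), autostart is configured in the container's own config file instead; a sketch, assuming LXC 1.0 semantics:

# in /var/lib/lxc/test-container/config
lxc.start.auto = 1      # start this container at boot
lxc.start.delay = 5     # wait 5 seconds before starting the next container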

Reference:
https://www.digitalocean.com/community/articles/getting-started-with-lxc-on-an-ubuntu-13-04-vps
http://www.janoszen.com/2013/05/14/lxc-tutorial/