Thursday, March 24, 2016

[Ansible] My first step to use Ansible

Before getting started with Ansible, you need to add your public SSH key to the remote server first. If you want to set up SSH keys to allow logging in without a password, you can do so with just a couple of commands.
The first thing you’ll need to do is make sure you’ve run the keygen command to generate the keys:

ssh-keygen -t rsa
Then use this command to push the key to the remote server, modifying it to match your server name.
cat ~/.ssh/id_rsa.pub | ssh user@hostname 'cat >> .ssh/authorized_keys'
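
Alternatively, if ssh-copy-id is available on your workstation, it does the same thing in one step:

ssh-copy-id -i ~/.ssh/id_rsa.pub user@hostname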

From now on, you are able to use Ansible to control your remote server.

# sudo pip install ansible
# sudo mkdir /etc/ansible
# cd /etc/ansible/
# vim hosts
  ==> [my_vm]
          10.14.1.106
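
Optionally, the connection user and private key can live in the inventory itself, so they don't have to be repeated on every command line. A sketch using the same host (the variable names are the standard Ansible inventory connection variables):

  [my_vm]
  10.14.1.106 ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/home/liudanny/.ssh/id_rsa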

# ansible my_vm --private-key=/home/liudanny/.ssh/id_rsa --user=ubuntu -m ping
or 
# ansible my_vm -m ping --user ubuntu
10.14.1.106 | success >> {
    "changed": false,
    "ping": "pong"
}
ansible my_vm --user=ubuntu -a "/bin/echo hello"
10.14.1.106 | success | rc=0 >>
hello
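
Beyond ad-hoc commands, the same inventory can drive a playbook. A minimal sketch (test-playbook.yml is just a name I picked; the group and user match the inventory above):

cat > test-playbook.yml <<'EOF'
---
- hosts: my_vm
  remote_user: ubuntu
  tasks:
    - name: check uptime on the remote server
      command: uptime
EOF
ansible-playbook test-playbook.yml --private-key=/home/liudanny/.ssh/id_rsa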

If the above steps work fine, we can follow this document to create an instance and check services on OpenStack. Here you go:
http://superuser.openstack.org/articles/using-ansible-for-continuous-integration-on-openstack

Reference:
OpenStack-Ansible Installation Guide
http://docs.openstack.org/developer/openstack-ansible/install-guide/index.html

http://www.yet.org/2014/07/ansible/

http://rundeck.org/




Thursday, March 17, 2016

[LBaaS] The Load Balance as a Service trace records

A couple of days ago, my colleague introduced Load Balancer as a Service (LBaaS), which is the Neutron plugin that provides load balancer functionality in OpenStack. Unavoidably, I still like to drill down into how it works so that we don't only understand the surface of this function. This article focuses only on the trace records because I have already studied the concepts behind LBaaS. For those who don't know about its concept and implementation, please check out other resources first, e.g. https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary


  • Once an LB pool has been created, you can see something like the following picture. My point is to trace the subnet and the network port.




  • From the "subnet" link, we can trace back to the its detail and also can go to its network detail by clicking the link of network id.  




  • Here we can find the VIP port that serves our load balancer, as follows.

Click it to see its details.



  • Now, we will use the first part of the port id (70081ac2) to trace what happens in the Linux network namespace and on the tun/tap interface.




  • The LBaaS agent will create a Linux network namespace, and the naming rule is "qlbaas-" followed by the pool's id.

# ip netns exec qlbaas-13185f35-3f75-47e7-9fd7-301be7b28e88 ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

tap70081ac2-6f Link encap:Ethernet  HWaddr fa:16:3e:16:c7:69
          inet addr:192.168.111.60  Bcast:192.168.111.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe16:c769/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15963 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15762 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:958766 (958.7 KB)  TX bytes:1060728 (1.0 MB)

# ip netns exec qlbaas-13185f35-3f75-47e7-9fd7-301be7b28e88 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.111.1   0.0.0.0         UG    0      0        0 tap70081ac2-6f
192.168.111.0   0.0.0.0         255.255.255.0   U     0      0        0 tap70081ac2-6f
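
If you want to poke around inside the namespace interactively instead of prefixing every command with ip netns exec, you can simply spawn a shell in it (and exit as usual when done):

# ip netns exec qlbaas-13185f35-3f75-47e7-9fd7-301be7b28e88 bash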

  • The tap interface is plugged into the OVS bridge br-int:
# ovs-vsctl show | grep 7008
        Port "tap70081ac2-6f"
            tag: 1
            Interface "tap70081ac2-6f"
                type: internal
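
If you also need the OpenFlow port number that br-int assigned to this tap interface (handy when reading flow rules later), something like this should work:

# ovs-vsctl get Interface tap70081ac2-6f ofport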


  • I didn't cover the HAProxy software because my focus is only on the tun/tap interface and the Linux network namespace. But how do I find the HAProxy process running in this network namespace?

# netns=qlbaas-13185f35-3f75-47e7-9fd7-301be7b28e88
# find -L /proc/[1-9]*/task/*/ns/net -samefile /run/netns/"$netns" | cut -d/ -f5
19937 <== the process id

# ps aux | grep 19937
root     14216  0.0  0.0  10432   932 pts/0    S+   02:29   0:00 grep --color=auto 19937
nobody   19937  0.0  0.0  29176  1472 ?        Ss   Mar16   0:06 haproxy -f /var/lib/neutron/lbaas/13185f35-3f75-47e7-9fd7-301be7b28e88/conf -p /var/lib/neutron/lbaas/13185f35-3f75-47e7-9fd7-301be7b28e88/pid -sf 8433

# ip netns identify 19937
qlbaas-13185f35-3f75-47e7-9fd7-301be7b28e88 <== the namespace the process is running in
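
For reference, the haproxy configuration that the agent generated can be inspected directly; its path is the one shown in the process arguments above:

# cat /var/lib/neutron/lbaas/13185f35-3f75-47e7-9fd7-301be7b28e88/conf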

There we go. Putting all the information together, we can better understand how LBaaS is implemented.

Tuesday, March 1, 2016

[Fuel] How to use postgres database in Fuel

For those who want to check out the data in Fuel's Postgres database, this post gives a simple guide for reference.
Here it is:

Find the postgres Docker container

[root@fuel /]# docker ps


[root@fuel /]# dockerctl fuel-core-7.0-postgres shell
[root@fuel /]# sudo su - postgres
-bash-4.1$ psql
psql (9.3.5)
Type "help" for help.

postgres=#

So now we can use the Postgres database!

Use "nailgun" database

postgres=# \c nailgun
You are now connected to database "nailgun" as user "postgres".

List all tables in the database

nailgun=# \dt

Look "tasks" table schema

nailgun=# \d tasks       
           

Use SQL to select data from a table

nailgun=# select * from information_schema.columns where table_name = 'tasks';
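
To peek at the actual rows, an ordinary SELECT works as well (the LIMIT just keeps the output short):

nailgun=# select * from tasks limit 5;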



Friday, February 26, 2016

[Fuel] To build up Fuel Nailgun developing environment

  Well, I tried to build up the Fuel Nailgun development environment on my Debian 7 VirtualBox VM. My main reference is this nailgun document.
  Although the document seems to target Ubuntu 12.04/14.04, I still insisted on building on Debian 7 due to my personal persistence. Here we go:

  • Install the nailgun development environment
sudo apt-get install --yes postgresql postgresql-server-dev-all
sudo sed -ir 's/peer/trust/' /etc/postgresql/9.1/main/pg_hba.conf
sudo service postgresql restart
sudo -u postgres psql -c "CREATE ROLE nailgun WITH LOGIN PASSWORD 'nailgun'"
sudo -u postgres createdb nailgun

sudo apt-get install --yes python-dev python-pip
sudo pip install virtualenv virtualenvwrapper
. /usr/local/bin/virtualenvwrapper.sh  # you can save this to .bashrc
mkvirtualenv fuel # you can use any name instead of 'fuel'
workon fuel  # command selects the particular environment

sudo apt-get install --yes git
git clone https://github.com/openstack/fuel-web.git
cd fuel-web
pip install --allow-all-external -r nailgun/test-requirements.txt
cd nailgun
python setup.py develop
sudo mkdir /var/log/nailgun
sudo chown -R `whoami`.`whoami` /var/log/nailgun
sudo chmod -R a+w /var/log/nailgun

sudo apt-get remove --yes nodejs nodejs-legacy
sudo apt-get install --yes software-properties-common
sudo add-apt-repository --yes ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install --yes nodejs

# This is only needed on Debian, to install Node.js/npm from source
git clone https://github.com/joyent/node.git
cd node
#Now, if you require a specific version of Node:
git tag # Gives you a list of released versions
git checkout v0.4.12
# Then compile and install Node like this:
./configure
make
sudo make install
cd ~/fuel-web
sudo npm install -g gulp
sudo chown -R `whoami`.`whoami` ~/.npm
cd nailgun
# To install dependency packages for using Fuel UI
npm install

  • Start the nailgun environment
. /usr/local/bin/virtualenvwrapper.sh
workon fuel
cd fuel-web/nailgun
./manage.py syncdb
./manage.py loaddefault # It loads all basic fixtures listed in settings.yaml
./manage.py loaddata nailgun/fixtures/sample_environment.json  # Loads fake nodes

python manage.py run -p 8000 --fake-tasks

Now we can log in to the Fuel UI at http://localhost:8000



  • Install the Fuel 8.0 client
Use the Fuel API to program your installation tool.
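
A minimal sketch of installing the client into the same virtualenv (python-fuelclient is the package name on PyPI; pin a version matching your Fuel release if needed):

workon fuel
pip install python-fuelclient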

  • Other reference URLs:
Using Fuel CLI
https://docs.mirantis.com/openstack/fuel/fuel-8.0/user-guide.html#using-fuel-cli

Fuel Plugin Catalog
https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/

Fuel Reference Architecture
https://docs.mirantis.com/openstack/fuel/fuel-7.0/reference-architecture.html

fuel-specs/specs/8.0/
https://github.com/openstack/fuel-specs/tree/master/specs/8.0

OpenStack/fuel-dev-tools
https://github.com/openstack/fuel-dev-tools

http://www.yet.org/2015/10/mos7-reducedfootprint/

[Networking] The excerpt about Jumbo Frame, MTU size and RX drop

This post is an excerpt of my survey on Jumbo Frames, MTU size, and RX drops, so it may look disorganized.
I can give the quick conclusion of my study:
(my lab is a 10G network that supports Jumbo Frames)

server1 <-----> Switch1 <-----> Switch2 <-----> server2
mtu:1500        mtu:9000        mtu:1500        mtu:1500    ==> only packets less than or equal to MTU 1500 can pass through

server1 <-----> Switch1 <-----> Switch2 <-----> server2
mtu:1500        mtu:9000        mtu:9000        mtu:1500    ==> any MTU between 1500 and 9000 can pass through

P.S.: setting the MTU on a server/host doesn't affect its MRU (maximum receive unit), so it can still receive MTU-9000 packets even though its MTU is 1500.

============ The following is the excerpt ===============
https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk61922
  • 'ifconfig -a' shows excessive RX errors.
  • 'ethtool -S interface_name' command shows RX errors in rx_no_buffer_count.
  • 'top' command shows high SoftIRQ.
  1. Positive values in the RX-ERR counter mean that the NIC received malformed Ethernet frames from the transmitting switch port, and data integrity could not be validated during the frame's cyclic redundancy check (CRC). The root cause of this is usually a bad cable, or a bad interface on either the machine or the switch.
  2. NIC speed / duplex mis-match with the connecting port on the switch/router.
  3. High or critical performance rated IPS protections are set to Prevent or Detect.
  4. High number of rules in Security and/or in NAT policy.
  • Softnet backlog full
  • Bad / Unintended VLAN tags
  • Unknown / Unregistered protocols
  • IPv6 frames when the server is not configured for IPv6
  1. When M1 sends a frame with a 600-byte payload to M2, there would be no problem.
  2. When M2 sends a frame with a 1200-byte payload to M1, there still would be no problem. Why not? Because setting M1's MTU didn't necessarily change its MRU, and in my experience MTUs and MRUs are separate, and implementations don't give you a way to change your MRU. So M1's MRU on that interface would be 1500 since it's Ethernet.
  3. Router wouldn't know it needs to fragment the frames from M2, because it believes all hosts on the Ethernet LAN that M1 is on are able to receive frames with 1200-byte payloads, because it was configured for a 1200-byte MTU on that interface. Luckily this would still probably work out fine, as I discussed in (2).

http://lime-technology.com/forum/index.php?topic=19848.0
http://forums.fedoraforum.org/showthread.php?t=297243
softnet backlog full (accounted in /proc/net/softnet_stat). Issuing a modprobe 8021q resolves the issue. Not sure if this will affect the flow control problem or not, but will check that out.

http://serverfault.com/questions/528290/ifconfig-eth0-rx-dropped-packets    
# ethtool -g eth0

http://superuser.com/questions/270489/about-mtu-settings-in-machines-and-switch
http://www.slideshare.net/raydelott/mirantis-openstack-and-commodity-hardware

Change the MTU of a network interface

How to test if 9000 MTU/Jumbo Frames are working
ping -M do -s 8972 [destinationIP]
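
For context, the 8972-byte payload is 9000 minus the 20-byte IP header and the 8-byte ICMP header, and -M do sets the Don't Fragment bit so an oversized ping fails instead of being silently fragmented. A quick end-to-end sketch (the interface name and destination IP are just examples):

ip link set dev eth0 mtu 9000
ping -M do -s 8972 -c 3 192.168.111.60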



Symptoms
Cause
RX Errors are typically caused by one or more of the following:

Try a new cable.

Solved the problem with the DHCP request packets coming from the wireless network. They must have been tagged with a VLAN id, and I did not have the 8021q module loaded in the kernel. Recent changes to the kernel now increment the rx_dropped counter under the following conditions:

- bad vlan tag (not accounted)
- unknown/unregistered protocol (not accounted)




Beginning with kernel 2.6.37, the meaning of the dropped packet count has changed. Before, dropped packets were most likely due to an error. Now, the rx_dropped counter shows statistics for frames dropped because of:
[...]
If the rx_dropped counter stops incrementing while tcpdump is running, then it is more than likely showing drops because of the reasons listed earlier.

    Ring parameters for eth0:
    Pre-set maximums:
    RX:             1020
    RX Mini:        0
    RX Jumbo:       16320
    TX:             255
    Current hardware settings:
    RX:             255
    RX Mini:        0
    RX Jumbo:       0
    TX:             255
    To increase the RX ring buffer to 1020 (its pre-set maximum here) you would run "ethtool -G eth0 rx 1020".


About MTU settings in machines and switch

http://superuser.com/questions/270489/about-mtu-settings-in-machines-and-switch
You didn't specify what networking technologies you were talking about, so I'm going to assume Ethernet and IP[v4].
Ethernet has always defined its range of acceptable payload lengths to be from 46 to 1500 bytes, and requires all devices (hosts and switches) on the LAN to be able to receive frames with 1500-byte payloads. Because of this, Ethernet does not provide a fragmentation mechanism, nor does it provide a mechanism for communicating or negotiating MTUs (or, more importantly, MRUs -- Maximum Receive Units) between devices. In fact the term "MTU" or "maximum transmission unit" does not appear anywhere in the IEEE 802.3 specification.
So let's add IP into the picture. IP has a concept of an MTU, and most modern IP stacks let you set MTUs on a per-interface basis (and more). But your question as stated doesn't quite work out in the context of IP either, because IP has a minimum MTU of 576. So allow me to restate your question as "M1 has an MTU of 600, and M2 has an MTU of 1200". But what MTU shall we say that "Switch" has? Well, if Switch is just a Layer 2 Ethernet switch, it doesn't have a concept of a settable MTU. So to make your question work out in the context of IP, we'll have to turn that switch into a router. So let's call it "Router" and say it has two Ethernet interfaces, one attached to M1 and one attached to M2. Let's also say it has MTUs of 1200 set on both of its interfaces.
Okay, still trying to find and answer the true spirit of your question, let's say the link between M1 and Router is actually PPP instead of Ethernet. The PPP protocol allows hosts to communicate/negotiate their MRUs. Let's say that M1 told Router that M1 has a 600-byte MRU limitation, so Router has set its MTU for that link to 600 bytes.
Now, in this case, if M2 sends a 1200-byte IP datagram to M1 (without setting the "Don't Fragment" bit in the IP header), Router will receive it just fine, and realize it needs to fragment it to send it to M1. So does Router fragment it into two 600-byte fragments? Well, no, it's not that simple for a couple reasons.
One reason is that every fragment has to have its own IP header, which adds 20 bytes to the size of each fragment after the first. The other reason is that IP's fragmentation offset field counts in 8-byte chunks instead of individual bytes.
So let's say the 1200-byte datagram was specifically 1172 bytes of application data in a UDP datagram (8 bytes of UDP headers, 20 bytes of IP headers). After fragmentation, the first fragment would contain a 20-byte IP header, the 8-byte UDP header, and the first 568 bytes of the application data, for a total of 586 bytes. The second frame would contain another 20-byte IP header, no UDP header, and the next 576 bytes of the application data, for a total of 586 bytes. That leaves 28 bytes of application data left over for the final fragment, which, with its IP header added, would be 48 bytes.
Update based on Kavin's update that he was talking about Jumbo frames:
Jumbo frames are something that some Gigabit Ethernet product vendors created independently around the time GigE was created, and it was (I believe) subsequently rejected or ignored by the IEEE and seems unlikely to ever become part of the 802.3 Ethernet standard. Even IEEE 802.3-2008 which includes not just 1000BASE-T but 10GBASE-T, does not contain anything about 9000-byte frame payloads.
The vendors that came up with jumbo frames did not provide any kind of autonegotiation or communication mechanism for jumbo frame support, nor did they create an Ethernet-layer fragmentation method to handle the (very common) case you illustrated. If you want to run your Ethernet LAN in this nonstandard mode, you have to ensure that all hosts and switches on your LAN support jumbo frames.
If M1's NIC is not capable of receiving jumbo frames, it will consider a jumbo frame to be "Ethernet jabber" -- a broken device that "keeps jabbering on and on"; keeps sending bits well beyond the end of a maximum allowable 1500 (really 1518) -byte frame. Note that this meaning of jabber is a term for a kind of Ethernet malfunction and is not to be confused with the similarly-named "Jabber" Internet chat system. You'll have to decide if you want to stop using jumbo frames on this network, or if you want to upgrade M1 to have a NIC that supports jumbo frames.
If M1's NIC is capable of receiving jumbo frames, I suspect that setting its IPv4 MTU for that interface down to 1500 will ensure it doesn't transmit any jumbo sized IP datagrams in a single jumbo Ethernet frame, but it will most likely be able to receive large IP datagrams in single jumbo Ethernet frames no problem, because again, MTU is not MRU, and setting an IP-layer MTU doesn't affect what size frame buffers the NIC allows. Now, if you're tweaking a NIC/driver setting to tell the NIC to only use 1500-byte buffers instead of 9000-byte buffers, that's an Ethernet-layer change, and would probably make your NIC act as if it didn't support 9000-byte buffers.
iperf was utilized to test inter-VM network performance:
  • By default the HV and all backend network devices had an MTU set at 9000; the MTU on the default Ubuntu 14.04 cloud image is set to 1500. Tests were run using both 1500-byte and 9000-byte MTUs on the guest.
  • 1500-byte MTU: 5 Gigabit per instance/HV (25% of theoretical max)
  • 9000-byte MTU: 16 Gigabit per instance/HV (80% of theoretical max)
  • Latency: ~0.5 ms average guest <-> guest
  • Traffic was appropriately divided by node count: whereas one node would use 80% of the theoretical max, 10 nodes would each use 8% of the theoretical max, 5 nodes @ 16%, etc.
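
For reference, a throughput check like the one quoted above boils down to running iperf in server mode on one guest and pointing a client at it (the IP and test duration are placeholders):

iperf -s                        # on the server VM
iperf -c 192.168.111.60 -t 30   # on the client VM, 30-second run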


http://www.microhowto.info/howto/change_the_mtu_of_a_network_interface.html

http://www.mylesgray.com/hardware/test-jumbo-frames-working/

ip link set dev ethx mtu 9000
Use ethtool -i to check the NIC driver information, which helps confirm whether the NIC supports Jumbo Frames