Tuesday, January 26, 2016

[Neutron] The Neutron Networking trace records

This article does not give much explanation or description of the scripts and results, because it is a trace record from studying my own OpenStack environment, based on the following references:
https://www.rdoproject.org/networking/networking-in-too-much-detail/
https://blogs.oracle.com/ronen/entry/diving_into_openstack_network_architecture1
https://www.gitbook.com/book/yeasy/openstack_understand_neutron/details

The architecture picture in the references above is quite important because it shows the Neutron networking architecture clearly.


So, I have a VM/instance named "mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4", which we can list:

> nova --os-tenant-name danny list
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------------------------------+
| ID                                   | Name                                                  | Status | Task State | Power State | Networks                                   |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------------------------------+
| aa2f621c-65e0-4d89-bb6d-c66054ee9250 | mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4 | ACTIVE | -          | Running     | default_network=192.168.100.1, 10.14.1.154 |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------------------------------+

The instance id is aa2f621c-65e0-4d89-bb6d-c66054ee9250

root@node-6:~# nova show aa2f621c-65e0-4d89-bb6d-c66054ee9250
+--------------------------------------+---------------------------------------------------------------+
| Property                             | Value                                                         |
+--------------------------------------+---------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                          |
| OS-EXT-SRV-ATTR:host                 | node-5.domain.tld                                             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | node-5.domain.tld                                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000630                                             |
| OS-EXT-STS:power_state               | 1                                                             |
| OS-EXT-STS:task_state                | -                                                             |
| OS-EXT-STS:vm_state                  | active                                                        |
| OS-SRV-USG:launched_at               | 2016-01-25T06:17:13.000000                                    |
| OS-SRV-USG:terminated_at             | -                                                             |
| accessIPv4                           |                                                               |
| accessIPv6                           |                                                               |
| config_drive                         |                                                               |
| created                              | 2016-01-25T06:17:05Z                                          |
| default_network network              | 192.168.100.1, 10.14.1.154                                    |
| flavor                               | 1core2GBmemory20GBdisk (06d3aafd-8819-4034-a20d-2a4d2340bae0) |
| hostId                               | 8deeb163a8215e6d14e89b44189791aa699211a697d625b85a82334a      |
| id                                   | aa2f621c-65e0-4d89-bb6d-c66054ee9250                          |
| image                                | Ubuntu14.04-2015.3.0 (0d95c5a8-ea4e-45a4-ba79-6b1c8f2acf46)   |
| key_name                             | 2d8ed97974a34782ba3c1eda2cc1f705                              |
| metadata                             | {}                                                            |
| name                                 | mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4         |
| os-extended-volumes:volumes_attached | []                                                            |
| progress                             | 0                                                             |
| security_groups                      | default_sg                                                    |
| status                               | ACTIVE                                                        |
| tenant_id                            | fc48558ea8684d14a1da30f6c5028064                              |
| updated                              | 2016-01-25T06:24:45Z                                          |
| user_id                              | 2d8ed97974a34782ba3c1eda2cc1f705                              |
+--------------------------------------+---------------------------------------------------------------+


$ nova-manage vm list | grep mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4
mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4 node-5.domain.tld 1core2GBmemory20GBdisk active     2016-01-25 06:17:13        0d95c5a8-ea4e-45a4-ba79-6b1c8f2acf46                     fc48558ea8684d14a1da30f6c5028064 2d8ed97974a34782ba3c1eda2cc1f705 nova       0

From here, we can trace the Neutron networking on the compute node and the network/service node.

Compute Node:

Find the related interface and bridge that belongs to our VM

Go to the instance folder:
root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# grep -i tap libvirt.xml
      <target dev="tape7b56cc3-8f"/>
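The tap device name embeds the first 11 characters of the Neutron port UUID, so tape7b56cc3-8f should map to the port whose ID starts with e7b56cc3-8f. A quick sketch to confirm this, assuming the usual tap<port-id-prefix> naming convention:

# look up the Neutron port behind the tap device
root@node-6:~# neutron --os-tenant-name danny port-list | grep e7b56cc3-8f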

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# iptables -S | grep tape7b56cc3-8f
-A neutron-openvswi-FORWARD -m physdev --physdev-out tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-FORWARD -m physdev --physdev-in tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-INPUT -m physdev --physdev-in tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-oe7b56cc3-8
-A neutron-openvswi-sg-chain -m physdev --physdev-out tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-ie7b56cc3-8
-A neutron-openvswi-sg-chain -m physdev --physdev-in tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-oe7b56cc3-8

The security group rules are applied on the Linux bridge:
root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# iptables -L neutron-openvswi-sg-chain | grep tape7b56cc3-8f
neutron-openvswi-ie7b56cc3-8  all  --  anywhere             anywhere             PHYSDEV match --physdev-out tape7b56cc3-8f --physdev-is-bridged
neutron-openvswi-oe7b56cc3-8  all  --  anywhere             anywhere             PHYSDEV match --physdev-in tape7b56cc3-8f --physdev-is-bridged
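The per-port i/o chains referenced above hold the actual ingress/egress security group rules; a sketch to inspect them directly:

# rules applied to traffic going to the VM (ingress)
root@node-5:~# iptables -nL neutron-openvswi-ie7b56cc3-8
# rules applied to traffic coming from the VM (egress)
root@node-5:~# iptables -nL neutron-openvswi-oe7b56cc3-8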


root@node-5:~# brctl show
....
qbre7b56cc3-8f          8000.1ab45f3f1100       no              qvbe7b56cc3-8f
                                                        tape7b56cc3-8f
....


$ ovs-vsctl show
    ....
        Port "qvoe7b56cc3-8f"
            tag: 93
            Interface "qvoe7b56cc3-8f"
    Bridge br-prv
        Port br-prv
            Interface br-prv
                type: internal
        Port "p_br-prv-0"
            Interface "p_br-prv-0"
                type: internal
        Port phy-br-prv
            Interface phy-br-prv
                type: patch
                options: {peer=int-br-prv}
    Bridge br-floating
        Port "p_br-floating-0"
            Interface "p_br-floating-0"
                type: internal
        Port br-floating
            Interface br-floating
                type: internal
    ovs_version: "2.3.1"
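The tag: 93 on port qvoe7b56cc3-8f is the internal VLAN assigned on br-int for this tenant network. To read just that value, a sketch:

root@node-5:~# ovs-vsctl get Port qvoe7b56cc3-8f tag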


Alternatively, here is another approach.

Use the virsh commands:
# virsh list
# virsh domiflist <your instance>


Find the bridge that the VM's interface is connected to:
# brctl show qbr10257204-b0

Find the veth pairs:
# ethtool -S qvb10257204-b0
# ip link list | grep '41: '

The number 41 here is the peer_ifindex value reported by ethtool -S, i.e. the interface index of the other end of the veth pair.

OVS flow tables

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# ovs-ofctl dump-flows br-floating
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2944114.894s, table=0, n_packets=5330801, n_bytes=454517092, idle_age=0, hard_age=65534, priority=0 actions=NORMAL

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# ovs-ofctl dump-flows br-prv
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2943385.989s, table=0, n_packets=54475876, n_bytes=21932146773, idle_age=0, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=1212275.443s, table=0, n_packets=119001, n_bytes=9280541, idle_age=1, hard_age=65534, priority=4,in_port=2,dl_vlan=53 actions=mod_vlan_vid:289,NORMAL
 cookie=0x0, duration=2939213.555s, table=0, n_packets=40203246, n_bytes=7594046533, idle_age=0, hard_age=65534, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:400,NORMAL
 cookie=0x0, duration=443296.428s, table=0, n_packets=17855, n_bytes=1651227, idle_age=27, hard_age=65534, priority=4,in_port=2,dl_vlan=78 actions=mod_vlan_vid:388,NORMAL
 cookie=0x0, duration=440574.253s, table=0, n_packets=90232, n_bytes=6655951, idle_age=14, hard_age=65534, priority=4,in_port=2,dl_vlan=81 actions=mod_vlan_vid:361,NORMAL
 cookie=0x0, duration=430464.292s, table=0, n_packets=97628, n_bytes=7077738, idle_age=1, hard_age=65534, priority=4,in_port=2,dl_vlan=87 actions=mod_vlan_vid:309,NORMAL
 cookie=0x0, duration=286613.370s, table=0, n_packets=74230, n_bytes=5428601, idle_age=1, hard_age=65534, priority=4,in_port=2,dl_vlan=91 actions=mod_vlan_vid:321,NORMAL
 cookie=0x0, duration=188.222s, table=0, n_packets=276, n_bytes=25628, idle_age=0, priority=4,in_port=2,dl_vlan=97 actions=mod_vlan_vid:374,NORMAL
 cookie=0x0, duration=8355.066s, table=0, n_packets=2228, n_bytes=178873, idle_age=85, priority=4,in_port=2,dl_vlan=93 actions=mod_vlan_vid:255,NORMAL
 cookie=0x0, duration=2869178.234s, table=0, n_packets=1063994, n_bytes=83782069, idle_age=19, hard_age=65534, priority=4,in_port=2,dl_vlan=2 actions=mod_vlan_vid:206,NORMAL
 cookie=0x0, duration=1148558.787s, table=0, n_packets=41929, n_bytes=4005468, idle_age=14, hard_age=65534, priority=4,in_port=2,dl_vlan=55 actions=mod_vlan_vid:295,NORMAL
 cookie=0x0, duration=436450.974s, table=0, n_packets=268258, n_bytes=26273333, idle_age=0, hard_age=65534, priority=4,in_port=2,dl_vlan=86 actions=mod_vlan_vid:305,NORMAL
 cookie=0x0, duration=1134136.903s, table=0, n_packets=41386, n_bytes=3955238, idle_age=107, hard_age=65534, priority=4,in_port=2,dl_vlan=57 actions=mod_vlan_vid:274,NORMAL
 cookie=0x0, duration=349629.927s, table=0, n_packets=12556, n_bytes=1235546, idle_age=4, hard_age=65534, priority=4,in_port=2,dl_vlan=88 actions=mod_vlan_vid:325,NORMAL
 cookie=0x0, duration=956159.621s, table=0, n_packets=1414006, n_bytes=2522828496, idle_age=6, hard_age=65534, priority=4,in_port=2,dl_vlan=68 actions=mod_vlan_vid:383,NORMAL
 cookie=0x0, duration=444184.489s, table=0, n_packets=338, n_bytes=37368, idle_age=65534, hard_age=65534, priority=4,in_port=2,dl_vlan=76 actions=mod_vlan_vid:212,NORMAL
 cookie=0x0, duration=341208.593s, table=0, n_packets=4535, n_bytes=806468, idle_age=104, hard_age=65534, priority=4,in_port=2,dl_vlan=89 actions=mod_vlan_vid:307,NORMAL
 cookie=0x0, duration=1046690.998s, table=0, n_packets=341, n_bytes=37606, idle_age=65534, hard_age=65534, priority=4,in_port=2,dl_vlan=63 actions=mod_vlan_vid:247,NORMAL
 cookie=0x0, duration=1469822.661s, table=0, n_packets=54812, n_bytes=5159092, idle_age=83, hard_age=65534, priority=4,in_port=2,dl_vlan=47 actions=mod_vlan_vid:324,NORMAL
 cookie=0x0, duration=609252.595s, table=0, n_packets=473, n_bytes=403134, idle_age=65534, hard_age=65534, priority=4,in_port=2,dl_vlan=72 actions=mod_vlan_vid:277,NORMAL
 cookie=0x0, duration=2943385.550s, table=0, n_packets=5948, n_bytes=493768, idle_age=189, hard_age=65534, priority=2,in_port=2 actions=drop

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2943396.106s, table=0, n_packets=64104253, n_bytes=14989605175, idle_age=0, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=1134146.751s, table=0, n_packets=51173, n_bytes=39495387, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=274 actions=mod_vlan_vid:57,NORMAL
 cookie=0x0, duration=443306.272s, table=0, n_packets=22067, n_bytes=18326623, idle_age=37, hard_age=65534, priority=3,in_port=1,dl_vlan=388 actions=mod_vlan_vid:78,NORMAL
 cookie=0x0, duration=1046700.841s, table=0, n_packets=244, n_bytes=37459, idle_age=65534, hard_age=65534, priority=3,in_port=1,dl_vlan=247 actions=mod_vlan_vid:63,NORMAL
 cookie=0x0, duration=8364.913s, table=0, n_packets=2939, n_bytes=3336868, idle_age=9, priority=3,in_port=1,dl_vlan=255 actions=mod_vlan_vid:93,NORMAL
 cookie=0x0, duration=2869188.084s, table=0, n_packets=1110068, n_bytes=287840569, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=206 actions=mod_vlan_vid:2,NORMAL
 cookie=0x0, duration=1148568.632s, table=0, n_packets=51652, n_bytes=39880915, idle_age=24, hard_age=65534, priority=3,in_port=1,dl_vlan=295 actions=mod_vlan_vid:55,NORMAL
 cookie=0x0, duration=609262.437s, table=0, n_packets=4995, n_bytes=345118, idle_age=14, hard_age=65534, priority=3,in_port=1,dl_vlan=277 actions=mod_vlan_vid:72,NORMAL
 cookie=0x0, duration=1212285.287s, table=0, n_packets=127910, n_bytes=47848009, idle_age=5, hard_age=65534, priority=3,in_port=1,dl_vlan=289 actions=mod_vlan_vid:53,NORMAL
 cookie=0x0, duration=1469832.507s, table=0, n_packets=66919, n_bytes=50958284, idle_age=93, hard_age=65534, priority=3,in_port=1,dl_vlan=324 actions=mod_vlan_vid:47,NORMAL
 cookie=0x0, duration=430474.139s, table=0, n_packets=101531, n_bytes=22514877, idle_age=8, hard_age=65534, priority=3,in_port=1,dl_vlan=309 actions=mod_vlan_vid:87,NORMAL
 cookie=0x0, duration=2939223.400s, table=0, n_packets=39197886, n_bytes=15878291311, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=400 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=440584.097s, table=0, n_packets=94708, n_bytes=24882334, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=361 actions=mod_vlan_vid:81,NORMAL
 cookie=0x0, duration=349639.771s, table=0, n_packets=15123, n_bytes=10862299, idle_age=14, hard_age=65534, priority=3,in_port=1,dl_vlan=325 actions=mod_vlan_vid:88,NORMAL
 cookie=0x0, duration=341218.441s, table=0, n_packets=4854, n_bytes=1123080, idle_age=113, hard_age=65534, priority=3,in_port=1,dl_vlan=307 actions=mod_vlan_vid:89,NORMAL
 cookie=0x0, duration=956169.465s, table=0, n_packets=1976310, n_bytes=195788064, idle_age=2, hard_age=65534, priority=3,in_port=1,dl_vlan=383 actions=mod_vlan_vid:68,NORMAL
 cookie=0x0, duration=444194.333s, table=0, n_packets=251, n_bytes=37638, idle_age=65534, hard_age=65534, priority=3,in_port=1,dl_vlan=212 actions=mod_vlan_vid:76,NORMAL
 cookie=0x0, duration=436460.818s, table=0, n_packets=448207, n_bytes=582425969, idle_age=0, hard_age=65534, priority=3,in_port=1,dl_vlan=305 actions=mod_vlan_vid:86,NORMAL
 cookie=0x0, duration=198.065s, table=0, n_packets=71, n_bytes=13149, idle_age=4, priority=3,in_port=1,dl_vlan=374 actions=mod_vlan_vid:97,NORMAL
 cookie=0x0, duration=286623.215s, table=0, n_packets=76707, n_bytes=17851968, idle_age=6, hard_age=65534, priority=3,in_port=1,dl_vlan=321 actions=mod_vlan_vid:91,NORMAL
 cookie=0x0, duration=2943395.505s, table=0, n_packets=125949, n_bytes=9745756, idle_age=17, hard_age=65534, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=2943396.052s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop

VLAN Translation

root@node-5:~# ovs-ofctl dump-flows br-int | grep 255
 cookie=0x0, duration=97049.752s, table=0, n_packets=7155, n_bytes=6796405, idle_age=61, hard_age=65534, priority=3,in_port=1,dl_vlan=255 actions=mod_vlan_vid:93,NORMAL

From the external VLAN (segmentation ID) 255 to the internal VLAN 93.

root@node-5:~# ovs-ofctl dump-flows br-prv | grep 255
 cookie=0x0, duration=97125.859s, table=0, n_packets=5278, n_bytes=475415, idle_age=40, hard_age=65534, priority=4,in_port=2,dl_vlan=93 actions=mod_vlan_vid:255,NORMAL

From the internal VLAN 93 to the external VLAN (segmentation ID) 255.

P.S.: For how to get the VLAN ID, please refer to the section "To find the vlan id for my tenant/group" below.


Use this command to show the OpenFlow port numbers:
ovs-ofctl show <ovs-bridge>

Use this command to show the VLAN tag on each port:
ovs-vsctl show
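For example, to map the in_port=2 seen in the br-prv flows above to an interface name, a quick sketch:

# list the OpenFlow port numbers and names on br-prv
root@node-5:~# ovs-ofctl show br-prv
# in this trace, in_port=2 should be the phy-br-prv patch port facing br-int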

Network Node:

Router List

root@node-6:~# neutron --os-tenant-name danny router-list
+--------------------------------------+-------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id                                   | name                                      | external_gateway_info                                                                                                                                                                   |
+--------------------------------------+-------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 6a222d9d-71da-4db6-891b-87d4b6ee8536 | default_network-admin-router-rcg7xvxhll2i | {"network_id": "7cd5fc6c-e47a-420c-8d15-aa51747564d8", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "8e2aa50c-cd0e-4596-97fb-dfc1ecc63245", "ip_address": "10.14.1.153"}]} |
+--------------------------------------+-------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


root@node-6:~# ip netns exec qrouter-6a222d9d-71da-4db6-891b-87d4b6ee8536 ip route
default via 10.14.0.254 dev qg-7853deb4-04
10.14.0.0/16 dev qg-7853deb4-04  proto kernel  scope link  src 10.14.1.153
192.168.100.0/24 dev qr-2d7bdc99-90  proto kernel  scope link  src 192.168.100.254
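To verify routing from inside the router namespace, a quick sketch pinging the external default gateway shown above:

root@node-6:~# ip netns exec qrouter-6a222d9d-71da-4db6-891b-87d4b6ee8536 ping -c 3 10.14.0.254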


root@node-6:~# ip netns exec qrouter-6a222d9d-71da-4db6-891b-87d4b6ee8536 iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 10.14.1.154/32 -j DNAT --to-destination 192.168.100.1
-A neutron-l3-agent-POSTROUTING ! -i qg-7853deb4-04 ! -o qg-7853deb4-04 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775
-A neutron-l3-agent-PREROUTING -d 10.14.1.154/32 -j DNAT --to-destination 192.168.100.1
-A neutron-l3-agent-float-snat -s 192.168.100.1/32 -j SNAT --to-source 10.14.1.154
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -s 192.168.100.0/24 -j SNAT --to-source 10.14.1.153
-A neutron-postrouting-bottom -j neutron-l3-agent-snat

P.S.: The VM's floating IP is 10.14.1.154
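The DNAT/SNAT rules above map the floating IP 10.14.1.154 to the VM's fixed IP 192.168.100.1. The floating IP should also appear as a /32 address on the router's external qg- interface; a sketch to verify:

root@node-6:~# ip netns exec qrouter-6a222d9d-71da-4db6-891b-87d4b6ee8536 ip addr show qg-7853deb4-04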

Network List

root@node-6:~# neutron --os-tenant-name danny net-list
+--------------------------------------+-----------------+-------------------------------------------------------+
| id                                   | name            | subnets                                               |
+--------------------------------------+-----------------+-------------------------------------------------------+
| 16b6d042-ce8b-4020-82e0-86a6829a3978 | default_network | 7d5eafc1-2c10-410d-a8e0-1dc648969fcd 192.168.100.0/24 |
| 7cd5fc6c-e47a-420c-8d15-aa51747564d8 | net04_ext       | 8e2aa50c-cd0e-4596-97fb-dfc1ecc63245                  |
+--------------------------------------+-----------------+-------------------------------------------------------+


Responsible network node for DHCP and L3

You can obtain this now that you’ve got the network’s UUID by doing the following:
root@node-6:~# neutron dhcp-agent-list-hosting-net 16b6d042-ce8b-4020-82e0-86a6829a3978
+--------------------------------------+-------------------+----------------+-------+
| id                                   | host              | admin_state_up | alive |
+--------------------------------------+-------------------+----------------+-------+
| 34263976-2f1b-48bd-a40e-eb6f0e77c5f4 | node-6.domain.tld | True           | :-)   |
+--------------------------------------+-------------------+----------------+-------+

To find the vlan id for my tenant/group

root@node-6:~# neutron net-show -F provider:segmentation_id 16b6d042-ce8b-4020-82e0-86a6829a3978
+--------------------------+-------+
| Field                    | Value |
+--------------------------+-------+
| provider:segmentation_id | 255   |
+--------------------------+-------+

VLAN Translation

root@node-6:~# ovs-ofctl dump-flows br-prv | grep 255
 cookie=0x0, duration=97539.474s, table=0, n_packets=7188, n_bytes=6770236, idle_age=65, hard_age=65534, priority=4,in_port=2,dl_vlan=330 actions=mod_vlan_vid:255,NORMAL

root@node-6:~# ovs-ofctl dump-flows br-int | grep 255
 cookie=0x0, duration=97562.780s, table=0, n_packets=5264, n_bytes=516743, idle_age=89, hard_age=65534, priority=3,in_port=1,dl_vlan=255 actions=mod_vlan_vid:330,NORMAL

To find the DHCP namespace

root@node-6:~# ip netns | grep 16b6d042-ce8b-4020-82e0-86a6829a3978
qdhcp-16b6d042-ce8b-4020-82e0-86a6829a3978
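Inside the DHCP namespace, the tap interface should hold the DHCP port's IP (192.168.100.2, as shown in the port list below); a sketch to inspect it:

root@node-6:~# ip netns exec qdhcp-16b6d042-ce8b-4020-82e0-86a6829a3978 ip addr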

root@node-6:~# ps aux | grep 16b6d042-ce8b-4020-82e0-86a6829a3978
nobody   11958  0.0  0.0  28204  1048 ?        S    Jan25   0:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap74d4769b-5b --except-interface=lo --pid-file=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/host --addn-hosts=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/opts --dhcp-leasefile=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/leases --dhcp-range=set:tag0,192.168.100.0,static,600s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq-neutron.conf --domain=openstacklocal
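The dnsmasq command line above also reveals where the DHCP host and lease records are kept; a sketch to inspect them (paths taken verbatim from the arguments above):

# static host entries (MAC/IP/hostname) served by dnsmasq
root@node-6:~# cat /var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/host
# current DHCP leases
root@node-6:~# cat /var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/leases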


Subnet List

root@node-6:~# neutron --os-tenant-name danny subnet-list
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+
| id                                   | name                                              | cidr             | allocation_pools                                     |
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+
| 7d5eafc1-2c10-410d-a8e0-1dc648969fcd | default_network-admin-private_subnet-kvrtht6fwqhr | 192.168.100.0/24 | {"start": "192.168.100.1", "end": "192.168.100.253"} |
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+

Port List

root@node-6:~# neutron --os-tenant-name danny port-list
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 2d7bdc99-9053-448f-9f4a-e71ba8450ac2 |      | fa:16:3e:da:12:25 | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.254"} |
| 74d4769b-5b7e-4523-ba34-64672d4ac8f1 |      | fa:16:3e:90:bd:88 | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.2"}   |
| e7b56cc3-8fcd-4a8b-bd00-79b7a625acdc |      | fa:16:3e:d4:88:53 | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.1"}   |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+

Show Port of DHCP Service

root@node-6:~# neutron --os-tenant-name danny port-show 74d4769b-5b7e-4523-ba34-64672d4ac8f1
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:vnic_type     | normal                                                                               |
| device_id             | dhcp7a15cee0-2af1-5441-b1dc-94897ef4dee9-16b6d042-ce8b-4020-82e0-86a6829a3978        |
| device_owner          | network:dhcp                                                                         |
| extra_dhcp_opts       |                                                                                      |
| fixed_ips             | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.2"} |
| id                    | 74d4769b-5b7e-4523-ba34-64672d4ac8f1                                                 |
| mac_address           | fa:16:3e:90:bd:88                                                                    |
| name                  |                                                                                      |
| network_id            | 16b6d042-ce8b-4020-82e0-86a6829a3978                                                 |
| security_groups       |                                                                                      |
| status                | ACTIVE                                                                               |
| tenant_id             | fc48558ea8684d14a1da30f6c5028064                                                     |
+-----------------------+--------------------------------------------------------------------------------------+


Reference:
Iptables tables and chains




Wednesday, January 20, 2016

[Git] Meld - The difftool

Install meld

sudo apt-get update
sudo apt-get install meld

Set up meld as git's default diff tool

git config --global diff.tool meld
git config --global --add difftool.prompt false

Examples

cd <git repo dir>
git difftool            # show the diff of each changed file, one by one
git difftool file_path  # show the diff of a single file
git difftool HEAD HEAD~5 filename   # compare a file between two commits

Tuesday, January 19, 2016

[Ceilometer] To collect the bandwidth of Neutron L3 router

The Ceilometer component in OpenStack is, I think, a great project to study further in depth. I am not going to introduce it here because that is beyond the scope of this post. Instead, I only want to list some resources about using Ceilometer to collect the bandwidth/traffic accounting of the Neutron L3 router.

From this official document: https://wiki.openstack.org/wiki/Neutron/Metering/Bandwidth
we can learn that the actual implementation for collecting the bandwidth of the Neutron L3 router is based on iptables.
We can use these neutron commands to list the related metering rules and labels:
$ neutron meter-label-list
$ neutron meter-label-rule-list

The "neutron-meter-agent" will collect the traffic accounting in the iptables chain and push to oslo-messaging. So the following command can show the traffic accounting in Neutron L3 router


root@sn1:~# nova --os-project-name mickey list


root@sn1:~# nova show bb1879bd-1202-4007-9376-41be30e07ae9


root@sn1:~# neutron --os-tenant-name mickey router-list


root@node-5:~# neutron meter-label-rule-list


root@node-5:~# neutron meter-label-rule-show d31f0b9e-8824-4857-a2dc-19f237723f0c


root@node-5:~# neutron meter-label-rule-show 1ae289ef-687e-4303-8036-9a7566dd5365


So, we need to find the metering labels: "neutron-meter-l-78675a84" and "neutron-meter-l-b88c5977"

root@cn3:~#  ip netns exec qrouter-b1741371-ee12-46a1-831b-d3b35429d7c8 iptables -nL -v -x
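Once the metering label chains are known, the packet/byte counters can also be read from a single chain directly; a sketch using one of the label chains named above:

root@cn3:~# ip netns exec qrouter-b1741371-ee12-46a1-831b-d3b35429d7c8 iptables -nL neutron-meter-l-78675a84 -v -x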



root@cn3:~# ip netns exec qrouter-b1741371-ee12-46a1-831b-d3b35429d7c8 iptables -t nat -S




Here is an example of querying the metering data in Ceilometer:
root@node-5:~# ceilometer statistics -m bandwidth -q "resource=b88c5977-4445-4f19-9c8f-3d92809f844e;timestamp>=2016-03-01T00:00:00" --period 86400



P.S.:
The following article introduces "Traffic Accounting with Linux IPTables", which can help us understand it better.
http://www.catonmat.net/blog/traffic-accounting-with-iptables/
https://wiki.openstack.org/wiki/Neutron/Metering/Bandwidth

Friday, January 15, 2016

[Apache2] To increase the concurrent request number of apache server

This post is just a memo for a small task of mine. Our web app is a Django app running on the Apache2 web server.
I recently tested several of our RESTful APIs and found that the maximum number of concurrent requests was not good enough (around 350), so I tried to tune it to an acceptable level, for instance more than 500.

I also found that if the number of concurrent requests exceeds that limit (350), the Apache benchmark tool often shows the error apr_socket_recv: Connection reset by peer (104), which means the web server suddenly disconnected in the middle of the session.

So, two things come to my mind: Apache2 configuration and Linux tuning.

For Apache2:
I followed this document to change mpm_worker.conf as follows:


The default values:

ServerLimit 16
StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25

What I changed in my mpm_worker.conf
vi /etc/apache2/mods-available/mpm_worker.conf

<IfModule mpm_worker_module>
        ServerLimit              40
        StartServers             4
        MaxClients               1000
        MinSpareThreads          25
        MaxSpareThreads          75
        ThreadLimit              64
        ThreadsPerChild          25
        MaxRequestWorkers        2500
        MaxConnectionsPerChild   25
</IfModule>
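Note that MaxClients is simply the pre-2.4 name of MaxRequestWorkers, so setting both is redundant, and Apache caps MaxRequestWorkers at ServerLimit x ThreadsPerChild (40 x 25 = 1000 here). To confirm which MPM is actually loaded and apply the change, a quick sketch (assuming a Debian/Ubuntu layout):

# confirm the worker MPM is the one in use
apache2ctl -V | grep -i mpm
# restart Apache to apply the new settings
service apache2 restart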

For Linux tuning:
I followed this document to enlarge the relevant system variables.

sysctl net.core.somaxconn=1024
sysctl net.core.netdev_max_backlog=2000
sysctl net.ipv4.tcp_max_syn_backlog=2048
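These sysctl settings only last until the next reboot; a sketch to make them persistent by appending them to /etc/sysctl.conf:

cat >> /etc/sysctl.conf <<'EOF'
net.core.somaxconn=1024
net.core.netdev_max_backlog=2000
net.ipv4.tcp_max_syn_backlog=2048
EOF
sysctl -p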

Tuesday, December 22, 2015

[Django] The question of changing urlpattern dynamically

A couple of days ago, my colleague asked me a question about how to change urlpatterns dynamically in Django. Well, it indeed took me some time to survey a way to do so, even though we had figured out alternatives that achieve the same result we want. So, the following is the solution:


http://stackoverflow.com/questions/8771070/unable-dynamic-changing-urlpattern-when-changing-database
I prefer the approach of using a middleware class to resolve this problem, as follows:

An alternative method would be to either create a super pattern that calls a view, which in turn makes a DB call. Another approach is to handle this in a middleware class where you test for a 404 error, check if the pattern is likely to be one of your categories, and then do the DB look up there. I have done this in the past and it's not as bad as it sounds. Look at the django/contrib/flatpages code for a straightforward implementation of this approach.

For how to use a middleware class, there is another link for reference:
http://stackoverflow.com/questions/753909/django-middleware-urls
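Here is a minimal sketch of that middleware approach, modeled on django/contrib/flatpages; CategoryFallbackMiddleware and category_view are hypothetical names standing in for your own middleware and view:

from django.http import Http404

class CategoryFallbackMiddleware(object):
    def process_response(self, request, response):
        # Only step in when normal urlpattern resolution produced a 404.
        if response.status_code != 404:
            return response
        try:
            # Try to resolve the path against the database and render it;
            # category_view is a hypothetical placeholder for your own view.
            return category_view(request, request.path_info)
        except Http404:
            return response

Then register it in MIDDLEWARE_CLASSES in settings.py, just as django.contrib.flatpages registers its FlatpageFallbackMiddleware.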

Friday, December 18, 2015

[IOMMU] The error of "VFIO group is not viable"

As my previous article mentioned, DPDK is a library that accelerates packet transmission between user-space applications and the physical network device. Recently the article "Using Open vSwitch 2.4 with DPDK-2.2.0 for Inter-VM NFV Applications" inspired me to build a similar test environment to witness how powerful DPDK is.
But I encountered the error "VFIO group is not viable" when starting the openvswitch daemon. After searching on Google, it is related to an IOMMU group issue, as described below:

http://vfio.blogspot.tw/2014/08/vfiovga-faq.html
Question 1:

I get the following error when attempting to start the guest:
vfio: error, group $GROUP is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
Answer:

There are more devices in the IOMMU group than you're assigning, they all need to be bound to the vfio bus driver (vfio-pci) or pci-stub for the group to be viable.  See my previous post about IOMMU groups for more information.  To reduce the size of the IOMMU group, install the device into a different slot, try a platform that has better isolation support, or (at your own risk) bypass ACS using the ACS override patch.

So, there are two solutions:
1. Install the device into a different slot
2. Bypass ACS using the ACS override patch

For a more detailed explanation, you can see this article:
http://vfio.blogspot.tw/2014/08/iommu-groups-inside-and-out.html
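Before choosing, you can check which other devices share the IOMMU group of your NIC; a sketch using the PCI address 08:00.2 that is bound to vfio-pci later in this post:

# list every device in the same IOMMU group as the NIC
find /sys/bus/pci/devices/0000:08:00.2/iommu_group/devices/ -type l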


The following scripts are a summary from the article:

# Update Grub
GRUB_CMDLINE_LINUX_DEFAULT="... default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 iommu=pt intel_iommu=on isolcpus=1-13,15-27"
update-grub

# Export Variables
export OVS_DIR=/root/soucecode/ovs
export DPDK_DIR=/root/soucecode/dpdk-2.1.0
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
locale-gen "en_US.UTF-8"
dpkg-reconfigure locales

# build DPDK
curl -O http://dpdk.org/browse/dpdk/snapshot/dpdk-2.1.0.tar.gz
tar -xvzf dpdk-2.1.0.tar.gz
cd $DPDK_DIR
sed 's/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' -i config/common_linuxapp
make install T=x86_64-ivshmem-linuxapp-gcc
cd $DPDK_DIR/x86_64-ivshmem-linuxapp-gcc
EXTRA_CFLAGS="-g -Ofast" make -j10

# build OVS
git clone https://github.com/openvswitch/ovs.git
cd $OVS_DIR
./boot.sh
./configure --with-dpdk="$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/" CFLAGS="-g -Ofast"
make 'CFLAGS=-g -Ofast -march=native' -j10

# Setup OVS
pkill -9 ovs
rm -rf /usr/local/var/run/openvswitch
rm -rf /usr/local/etc/openvswitch/
rm -f /usr/local/etc/openvswitch/conf.db
mkdir -p /usr/local/etc/openvswitch
mkdir -p /usr/local/var/run/openvswitch
cd $OVS_DIR
./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db ./vswitchd/vswitch.ovsschema
./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
./utilities/ovs-vsctl --no-wait init

# Check IOMMU & Huge info
cat /proc/cmdline
grep Huge /proc/meminfo

# Setup Hugetable
mkdir -p /mnt/huge
mkdir -p /mnt/huge_2mb
mount -t hugetlbfs hugetlbfs /mnt/huge
mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
 
# Bind Ether card
modprobe vfio-pci
cd $DPDK_DIR/tools
./dpdk_nic_bind.py --status
./dpdk_nic_bind.py --bind=vfio-pci 08:00.2

# Start OVS
modprobe openvswitch
$OVS_DIR/vswitchd/ovs-vswitchd --dpdk -c 0x2 -n 4 --socket-mem 2048 -- unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach

# Setup dpdk vhost user port
$OVS_DIR/utilities/ovs-vsctl show
$OVS_DIR/utilities/ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
$OVS_DIR/utilities/ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
$OVS_DIR/utilities/ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
$OVS_DIR/utilities/ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser

find /sys/kernel/iommu_groups/ -type l

# Create VMs
qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda /tmp/vm1.qcow2 -boot c -enable-kvm -no-reboot -nographic -net none \
-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:dd:01,netdev=mynet1 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc
 
qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda /tmp/vm2.qcow2 -boot c -enable-kvm -no-reboot -nographic -net none \
-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user2 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:dd:02,netdev=mynet1 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc

Tuesday, December 1, 2015

[Django] How to upload a file via Django REST Framework?

Here is an example that uses the parser MultiPartParser to parse the media type multipart/form-data. You can also take a look at this for more detail: http://www.django-rest-framework.org/api-guide/parsers

Client side via curl (basically you can use either PUT or POST):
Command format:
curl -X PUT -H 'Content-Type:multipart/form-data' -F 'data=@/{your_file}' -u {account}:{password} http://{your server ip address}/

for instance:
curl -X PUT -H 'Content-Type:multipart/form-data' -F 'data=@/Users/administrator/Desktop/2015.12.01.yaml' -u danny:password http://192.168.1.100:8000/api/upload_func/file/



Server side (Django REST Framework)

In your settings.py, add the MultiPartParser to the REST framework settings:

REST_FRAMEWORK = {
    'DEFAULT_PARSER_CLASSES': (
        'rest_framework.parsers.MultiPartParser',
    )
}


For your_api.py, you can refer to this example as below:

from rest_framework import status
from rest_framework.parsers import MultiPartParser
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView

class UploadAction(APIView):
    # IsAdminOrReadOnly is a custom permission class in this project.
    permission_classes = (IsAuthenticated, IsAdminOrReadOnly)
    parser_classes = (MultiPartParser,)

    def put(self, request, format=None):
        try:
            # Read the uploaded file and write it out chunk by chunk.
            my_file = request.FILES['data']
            filename = "/tmp/" + str(my_file)
            with open(filename, 'wb+') as temp_file:
                for chunk in my_file.chunks():
                    temp_file.write(chunk)
        except Exception:
            return Response(status=status.HTTP_500_INTERNAL_SERVER_ERROR)
        return Response(status=status.HTTP_201_CREATED)


P.S.: Please change 'data' if you used a different field name in the curl command!

Reference:
http://stackoverflow.com/questions/21012538/retrieve-json-from-request-files-in-django-without-writing-to-file

If you want to get the content of the uploaded file, you can directly use the .read() API.
Something like:
if 'data' in request.FILES:
    file = request.FILES['data']
    data = file.read()
    # you now have the file contents in data