
Monday, July 25, 2016

[Neutron] A first glance at L3HA mode in OpenStack Neutron (Liberty version)

I recently took a quick first glance at L3HA mode in OpenStack Neutron (Liberty version). The notes below are based on my tenant environment, which looks as follows:

My tenant environment

# neutron router-list




# neutron net-list

# neutron subnet-list

The Topology view looks like this:


Here I have 2 instances in my tenant:


So, if I use the instance danny_vm1 to ping danny_vm2, the traffic has to cross subnets, and this triggers the L3 vrouter function.

# ping 192.168.66.4 (danny_vm2)

# ip netns exec qrouter-f1e03fef-cccf-43de-9d35-56d11d636765 tcpdump -eln -i qr-4433f31f-5d icmp


The interface qr-4433f31f-5d is the gateway port of my subnet 192.168.44.0/24:
# neutron --os-tenant-name danny port-list | grep 4433f31f-5d
| 4433f31f-5d93-4fe4-868a-04ddcc38be20 |                                                 | fa:16:3e:25:22:b3 | {"subnet_id": "d169f180-4304-42f0-b11f-e094287bcd00", "ip_address": "192.168.44.1"}  |
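On the node hosting the master vrouter, that gateway IP should be visible on the qr- interface inside the router namespace. A quick check (a sketch reusing the router UUID from above):

# ip netns exec qrouter-f1e03fef-cccf-43de-9d35-56d11d636765 ip addr show qr-4433f31f-5d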

Keepalived related

L3HA mode relies heavily on the keepalived daemon, which runs inside the qrouter namespace.

# vi /var/lib/neutron/ha_confs/f1e03fef-cccf-43de-9d35-56d11d636765/keepalived.conf
vrrp_instance VR_1 {
    state BACKUP
    interface ha-857640ad-a6
    virtual_router_id 1
    priority 50
    garp_master_delay 60
    nopreempt
    advert_int 2
    track_interface {
        ha-857640ad-a6
    }
    virtual_ipaddress {
        169.254.0.1/24 dev ha-857640ad-a6
    }
    virtual_ipaddress_excluded {
        10.12.20.32/16 dev qg-f02984c6-dc
        10.12.20.33/32 dev qg-f02984c6-dc
        192.168.44.1/24 dev qr-4433f31f-5d
        192.168.55.1/24 dev qr-16e20a36-fc
        192.168.66.1/24 dev qr-35235c4f-64
        fe80::f816:3eff:fe0d:2702/64 dev qr-16e20a36-fc scope link
        fe80::f816:3eff:fe25:22b3/64 dev qr-4433f31f-5d scope link
        fe80::f816:3eff:fe51:30a1/64 dev qg-f02984c6-dc scope link
        fe80::f816:3eff:fe8f:a85b/64 dev qr-35235c4f-64 scope link
    }
    virtual_routes {
        0.0.0.0/0 via 10.12.0.254 dev qg-f02984c6-dc
    }
}
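With advert_int 2 above, the master sends a VRRP advertisement every two seconds on the HA interface. To watch them, a sketch (VRRP is IP protocol 112):

# ip netns exec qrouter-f1e03fef-cccf-43de-9d35-56d11d636765 tcpdump -ln -i ha-857640ad-a6 ip proto 112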

There are two other files under /var/lib/neutron/ha_confs/<< qrouter uuid >>/:
neutron-keepalived-state-change.log ==> log file
state ==> HA state (master or backup)
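For example, on the node currently holding the master role, the state file should read (a sketch):

# cat /var/lib/neutron/ha_confs/f1e03fef-cccf-43de-9d35-56d11d636765/state
master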


# find -L /proc/[1-9]*/task/*/ns/net -samefile /run/netns/qrouter-f1e03fef-cccf-43de-9d35-56d11d636765 | cut -d/ -f5
2276895
2276896
2277216
2277217
3284547

# ps aux | grep -E "2276895|2276896|2277216|2277217|3284547"
neutron  2276895  0.0  0.0 126160 41364 ?        S    Jul22   0:00 /usr/bin/python2.7 /usr/bin/neutron-keepalived-state-change --router_id=f1e03fef-cccf-43de-9d35-56d11d636765 --namespace=qrouter-f1e03fef-cccf-43de-9d35-56d11d636765 --conf_dir=/var/lib/neutron/ha_confs/f1e03fef-cccf-43de-9d35-56d11d636765 --monitor_interface=ha-857640ad-a6 --monitor_cidr=169.254.0.1/24 --pid_file=/var/lib/neutron/external/pids/f1e03fef-cccf-43de-9d35-56d11d636765.monitor.pid --state_path=/var/lib/neutron --user=119 --group=125
root     2276896  0.0  0.0   6696   756 ?        S    Jul22   0:00 ip -o monitor address
root     2277216  0.0  0.0  44752   856 ?        Ss   Jul22   0:13 keepalived -P -f /var/lib/neutron/ha_confs/f1e03fef-cccf-43de-9d35-56d11d636765/keepalived.conf -p /var/lib/neutron/ha_confs/f1e03fef-cccf-43de-9d35-56d11d636765.pid -r /var/lib/neutron/ha_confs/f1e03fef-cccf-43de-9d35-56d11d636765.pid-vrrp
root     2277217  0.0  0.0  51148  1712 ?        S    Jul22   0:24 keepalived -P -f /var/lib/neutron/ha_confs/f1e03fef-cccf-43de-9d35-56d11d636765/keepalived.conf -p /var/lib/neutron/ha_confs/f1e03fef-cccf-43de-9d35-56d11d636765.pid -r /var/lib/neutron/ha_confs/f1e03fef-cccf-43de-9d35-56d11d636765.pid-vrrp
neutron  3284547  0.0  0.0 172176 36032 ?        S    Jul22   0:00 /usr/bin/python2.7 /usr/bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/f1e03fef-cccf-43de-9d35-56d11d636765.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=f1e03fef-cccf-43de-9d35-56d11d636765 --state_path=/var/lib/neutron --metadata_port=8775 --metadata_proxy_user=119 --metadata_proxy_group=125 --verbose --log-file=neutron-ns-metadata-proxy-f1e03fef-cccf-43de-9d35-56d11d636765.log --log-dir=/var/log/neutron


# neutron l3-agent-list-hosting-router f1e03fef-cccf-43de-9d35-56d11d636765

Then we learn that the master vrouter is on node-8.
There are other ways to find out which node is the master:
1. Use the following command to see whether the qr-xxxxx and qg-xxxxx interfaces have IP addresses. If they do, this node is the master.
  • ip netns exec qrouter-f1e03fef-cccf-43de-9d35-56d11d636765 ip a
2. Check whether the following file contains "master".
  • vim /var/lib/neutron/ha_confs/<< qrouter uuid >>/state
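A minimal sketch of both checks, run on each node hosting the router (router UUID from above; on the backup node the qr-/qg- interfaces carry no IP addresses):

# ip netns exec qrouter-f1e03fef-cccf-43de-9d35-56d11d636765 ip -4 addr show dev qr-4433f31f-5d ==> master shows "inet 192.168.44.1/24"; backup prints nothing
# cat /var/lib/neutron/ha_confs/f1e03fef-cccf-43de-9d35-56d11d636765/state ==> master or backup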
For more details:
http://www.slideshare.net/orimanabu/l3-ha-vrrp20141201



Wednesday, June 15, 2016

[Neutron] Neutron Cheat Sheet

In recent days, the hand-drawn illustration style has become more and more pervasive in Taiwan. I don't know how that happened, but at least I can draw a Neutron cheat sheet in this style for fun.

P.S: This picture was originally made for my colleagues to troubleshoot Neutron networking problems.

Thursday, June 2, 2016

[Neutron and SDN] Warm-up for understanding the integration of Neutron and SDN

I just spent some time studying the integration of Neutron and SDN, and I also took a look at how ODL and ONOS integrate with OpenStack Neutron. The following content and pictures (with URL links) are excerpted from a variety of resources on the internet. Some sections carry my comments, marked with P.S. I think this can give you a clear concept of Neutron and SDN controllers.

Neutron and SDN
P.S: This picture gives an overall architecture about Neutron and SDN controller that are integrated together.


When an OpenStack user performs any networking related operation (create/update/delete/read on network, subnet and port resources) the typical flow would be as follows:
  1. The user operation on the OpenStack dashboard (Horizon) will be translated into a corresponding networking API and sent to the Neutron server.
  2. The Neutron server receives the request and passes it to the configured plugin (assume ML2 is configured with an ODL mechanism driver and a VXLAN type driver; a sample configuration sketch follows this list).
  3. The Neutron server/plugin will make the appropriate change to the DB.
  4. The plugin will invoke the corresponding REST API to the SDN controller (assume an ODL).
  5. ODL, upon receiving this request, may perform necessary changes to the network elements using any of the southbound plugins/protocols, such as OpenFlow, OVSDB or OF-Config.
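To make step 2 concrete, a rough ml2_conf.ini for the assumed setup could look like this (a sketch based on the networking-odl driver; the URL and credentials are placeholders):

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = opendaylight

[ml2_odl]
url = http://<odl-ip>:8080/controller/nb/v2/neutron
username = admin
password = admin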




We should note that there exist different integration options between the SDN controller and OpenStack; for example:
  1. one can completely eliminate RPC communication between the Neutron server and the agents on the compute nodes, with the SDN controller being the sole entity managing the network;
  2. or the SDN controller manages only the physical switches, while the virtual switches are managed directly from the Neutron server.


The two models of Neutron's underlying network are illustrated below.
In the first model a), Neutron itself acts as the SDN controller, and the communication mechanism between plugin and agents (e.g., RPC) serves as a simple southbound protocol. In the second model, Neutron acts as an SDN application: it passes the network service requirements to the SDN controller, and the controller then remotely controls the network devices through all kinds of southbound protocols.
In the second model b), Neutron can also be seen as a super controller, or a network orchestrator, that centrally dispatches the network services of OpenStack.

Plugin Agent

P.S: This picture shows the process flow between agents, api server and ovs when creating a VM.
http://www.innervoice.in/blogs/wp-content/uploads/2015/03/Plugin-Agents.jpg


About ML2



Neutron plugin体系


How OpenDaylight integrates with Neutron



How ONOS integrates with Neutron

SONA Architecture



The onos-networking plugin just forwards (or calls) REST calls from Neutron to ONOS; the OpenstackSwitching app receives the API calls and returns OK. The main functions that implement the virtual networks are handled in the OpenstackSwitching application.

OpenstackSwitching (App on ONOS)

Neutron + SDN Controller (ONOS)  

P.S: ONOS provides its own ONOSMechanismDriver instead of the OpenvswitchMechanismDriver


Reference:
Here is an article about writing a dummy mechanism driver that records variables and data in logs:
http://blog.csdn.net/yanheven1/article/details/47357537

Tuesday, January 26, 2016

[Neutron] The Neutron Networking trace records

This article does not give much explanation or description of the scripts and results, because it is the trace record of my study on my OpenStack environment. It is based on the following references:
https://www.rdoproject.org/networking/networking-in-too-much-detail/
https://blogs.oracle.com/ronen/entry/diving_into_openstack_network_architecture1
https://www.gitbook.com/book/yeasy/openstack_understand_neutron/details

This picture is quite important because it shows the Neutron networking architecture clearly.


So, I have a VM/instance named "mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4", and we can get:

> nova --os-tenant-name danny list
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------------------------------+
| ID                                   | Name                                                  | Status | Task State | Power State | Networks                                   |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------------------------------+
| aa2f621c-65e0-4d89-bb6d-c66054ee9250 | mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4 | ACTIVE | -          | Running     | default_network=192.168.100.1, 10.14.1.154 |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------------------------------+

The instance id is aa2f621c-65e0-4d89-bb6d-c66054ee9250

root@node-6:~# nova show aa2f621c-65e0-4d89-bb6d-c66054ee9250
+--------------------------------------+---------------------------------------------------------------+
| Property                             | Value                                                         |
+--------------------------------------+---------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                          |
| OS-EXT-SRV-ATTR:host                 | node-5.domain.tld                                             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | node-5.domain.tld                                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000630                                             |
| OS-EXT-STS:power_state               | 1                                                             |
| OS-EXT-STS:task_state                | -                                                             |
| OS-EXT-STS:vm_state                  | active                                                        |
| OS-SRV-USG:launched_at               | 2016-01-25T06:17:13.000000                                    |
| OS-SRV-USG:terminated_at             | -                                                             |
| accessIPv4                           |                                                               |
| accessIPv6                           |                                                               |
| config_drive                         |                                                               |
| created                              | 2016-01-25T06:17:05Z                                          |
| default_network network              | 192.168.100.1, 10.14.1.154                                    |
| flavor                               | 1core2GBmemory20GBdisk (06d3aafd-8819-4034-a20d-2a4d2340bae0) |
| hostId                               | 8deeb163a8215e6d14e89b44189791aa699211a697d625b85a82334a      |
| id                                   | aa2f621c-65e0-4d89-bb6d-c66054ee9250                          |
| image                                | Ubuntu14.04-2015.3.0 (0d95c5a8-ea4e-45a4-ba79-6b1c8f2acf46)   |
| key_name                             | 2d8ed97974a34782ba3c1eda2cc1f705                              |
| metadata                             | {}                                                            |
| name                                 | mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4         |
| os-extended-volumes:volumes_attached | []                                                            |
| progress                             | 0                                                             |
| security_groups                      | default_sg                                                    |
| status                               | ACTIVE                                                        |
| tenant_id                            | fc48558ea8684d14a1da30f6c5028064                              |
| updated                              | 2016-01-25T06:24:45Z                                          |
| user_id                              | 2d8ed97974a34782ba3c1eda2cc1f705                              |
+--------------------------------------+---------------------------------------------------------------+


$ nova-manage vm list | grep mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4
mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4 node-5.domain.tld 1core2GBmemory20GBdisk active     2016-01-25 06:17:13        0d95c5a8-ea4e-45a4-ba79-6b1c8f2acf46                     fc48558ea8684d14a1da30f6c5028064 2d8ed97974a34782ba3c1eda2cc1f705 nova       0

From here on, we can walk through the Neutron networking on the compute node and the network/service node.

Compute Node:

Find the related interface and bridge that belong to our VM

Go to the instance folder:
root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# grep -i tap libvirt.xml
      <target dev="tape7b56cc3-8f"/>

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# iptables -S | grep tape7b56cc3-8f
-A neutron-openvswi-FORWARD -m physdev --physdev-out tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-FORWARD -m physdev --physdev-in tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-INPUT -m physdev --physdev-in tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-oe7b56cc3-8
-A neutron-openvswi-sg-chain -m physdev --physdev-out tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-ie7b56cc3-8
-A neutron-openvswi-sg-chain -m physdev --physdev-in tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-oe7b56cc3-8

The security group rules are applied on the Linux bridge:
root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# iptables -L neutron-openvswi-sg-chain | grep tape7b56cc3-8f
neutron-openvswi-ie7b56cc3-8  all  --  anywhere             anywhere             PHYSDEV match --physdev-out tape7b56cc3-8f --physdev-is-bridged
neutron-openvswi-oe7b56cc3-8  all  --  anywhere             anywhere             PHYSDEV match --physdev-in tape7b56cc3-8f --physdev-is-bridged
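To see the actual security group rules for this port, you can dump the per-port ingress/egress chains named in the output above (a sketch):

# iptables -L neutron-openvswi-ie7b56cc3-8 -n ==> ingress rules (traffic to the VM)
# iptables -L neutron-openvswi-oe7b56cc3-8 -n ==> egress rules (traffic from the VM)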


root@node-5:~# brctl show
....
qbre7b56cc3-8f          8000.1ab45f3f1100       no              qvbe7b56cc3-8f
                                                        tape7b56cc3-8f
....
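The qvb side of the veth pair sits on this Linux bridge; its qvo peer plugs into the OVS integration bridge, which can be confirmed with (a sketch):

# ovs-vsctl port-to-br qvoe7b56cc3-8 ==> expect: br-int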


$ ovs-vsctl show
    ....
        Port "qvoe7b56cc3-8f"
            tag: 93
            Interface "qvoe7b56cc3-8f"
    Bridge br-prv
        Port br-prv
            Interface br-prv
                type: internal
        Port "p_br-prv-0"
            Interface "p_br-prv-0"
                type: internal
        Port phy-br-prv
            Interface phy-br-prv
                type: patch
                options: {peer=int-br-prv}
    Bridge br-floating
        Port "p_br-floating-0"
            Interface "p_br-floating-0"
                type: internal
        Port br-floating
            Interface br-floating
                type: internal
    ovs_version: "2.3.1"


Or here is another approach:

Use the virsh command:
# virsh list
# virsh domiflist <your instance>

Find the bridge that the VM's interface is connected to:
# brctl show qbr10257204-b0

Find the veth pairs:
# ethtool -S qvb10257204-b0
# ip link list | grep '41: '

(ethtool -S on the qvb side reports the peer's ifindex, 41 in this example, and ip link list resolves that index back to the peer qvo interface.)

OVS flow tables

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# ovs-ofctl dump-flows br-floating
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2944114.894s, table=0, n_packets=5330801, n_bytes=454517092, idle_age=0, hard_age=65534, priority=0 actions=NORMAL

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# ovs-ofctl dump-flows br-prv
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2943385.989s, table=0, n_packets=54475876, n_bytes=21932146773, idle_age=0, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=1212275.443s, table=0, n_packets=119001, n_bytes=9280541, idle_age=1, hard_age=65534, priority=4,in_port=2,dl_vlan=53 actions=mod_vlan_vid:289,NORMAL
 cookie=0x0, duration=2939213.555s, table=0, n_packets=40203246, n_bytes=7594046533, idle_age=0, hard_age=65534, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:400,NORMAL
 cookie=0x0, duration=443296.428s, table=0, n_packets=17855, n_bytes=1651227, idle_age=27, hard_age=65534, priority=4,in_port=2,dl_vlan=78 actions=mod_vlan_vid:388,NORMAL
 cookie=0x0, duration=440574.253s, table=0, n_packets=90232, n_bytes=6655951, idle_age=14, hard_age=65534, priority=4,in_port=2,dl_vlan=81 actions=mod_vlan_vid:361,NORMAL
 cookie=0x0, duration=430464.292s, table=0, n_packets=97628, n_bytes=7077738, idle_age=1, hard_age=65534, priority=4,in_port=2,dl_vlan=87 actions=mod_vlan_vid:309,NORMAL
 cookie=0x0, duration=286613.370s, table=0, n_packets=74230, n_bytes=5428601, idle_age=1, hard_age=65534, priority=4,in_port=2,dl_vlan=91 actions=mod_vlan_vid:321,NORMAL
 cookie=0x0, duration=188.222s, table=0, n_packets=276, n_bytes=25628, idle_age=0, priority=4,in_port=2,dl_vlan=97 actions=mod_vlan_vid:374,NORMAL
 cookie=0x0, duration=8355.066s, table=0, n_packets=2228, n_bytes=178873, idle_age=85, priority=4,in_port=2,dl_vlan=93 actions=mod_vlan_vid:255,NORMAL
 cookie=0x0, duration=2869178.234s, table=0, n_packets=1063994, n_bytes=83782069, idle_age=19, hard_age=65534, priority=4,in_port=2,dl_vlan=2 actions=mod_vlan_vid:206,NORMAL
 cookie=0x0, duration=1148558.787s, table=0, n_packets=41929, n_bytes=4005468, idle_age=14, hard_age=65534, priority=4,in_port=2,dl_vlan=55 actions=mod_vlan_vid:295,NORMAL
 cookie=0x0, duration=436450.974s, table=0, n_packets=268258, n_bytes=26273333, idle_age=0, hard_age=65534, priority=4,in_port=2,dl_vlan=86 actions=mod_vlan_vid:305,NORMAL
 cookie=0x0, duration=1134136.903s, table=0, n_packets=41386, n_bytes=3955238, idle_age=107, hard_age=65534, priority=4,in_port=2,dl_vlan=57 actions=mod_vlan_vid:274,NORMAL
 cookie=0x0, duration=349629.927s, table=0, n_packets=12556, n_bytes=1235546, idle_age=4, hard_age=65534, priority=4,in_port=2,dl_vlan=88 actions=mod_vlan_vid:325,NORMAL
 cookie=0x0, duration=956159.621s, table=0, n_packets=1414006, n_bytes=2522828496, idle_age=6, hard_age=65534, priority=4,in_port=2,dl_vlan=68 actions=mod_vlan_vid:383,NORMAL
 cookie=0x0, duration=444184.489s, table=0, n_packets=338, n_bytes=37368, idle_age=65534, hard_age=65534, priority=4,in_port=2,dl_vlan=76 actions=mod_vlan_vid:212,NORMAL
 cookie=0x0, duration=341208.593s, table=0, n_packets=4535, n_bytes=806468, idle_age=104, hard_age=65534, priority=4,in_port=2,dl_vlan=89 actions=mod_vlan_vid:307,NORMAL
 cookie=0x0, duration=1046690.998s, table=0, n_packets=341, n_bytes=37606, idle_age=65534, hard_age=65534, priority=4,in_port=2,dl_vlan=63 actions=mod_vlan_vid:247,NORMAL
 cookie=0x0, duration=1469822.661s, table=0, n_packets=54812, n_bytes=5159092, idle_age=83, hard_age=65534, priority=4,in_port=2,dl_vlan=47 actions=mod_vlan_vid:324,NORMAL
 cookie=0x0, duration=609252.595s, table=0, n_packets=473, n_bytes=403134, idle_age=65534, hard_age=65534, priority=4,in_port=2,dl_vlan=72 actions=mod_vlan_vid:277,NORMAL
 cookie=0x0, duration=2943385.550s, table=0, n_packets=5948, n_bytes=493768, idle_age=189, hard_age=65534, priority=2,in_port=2 actions=drop

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2943396.106s, table=0, n_packets=64104253, n_bytes=14989605175, idle_age=0, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=1134146.751s, table=0, n_packets=51173, n_bytes=39495387, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=274 actions=mod_vlan_vid:57,NORMAL
 cookie=0x0, duration=443306.272s, table=0, n_packets=22067, n_bytes=18326623, idle_age=37, hard_age=65534, priority=3,in_port=1,dl_vlan=388 actions=mod_vlan_vid:78,NORMAL
 cookie=0x0, duration=1046700.841s, table=0, n_packets=244, n_bytes=37459, idle_age=65534, hard_age=65534, priority=3,in_port=1,dl_vlan=247 actions=mod_vlan_vid:63,NORMAL
 cookie=0x0, duration=8364.913s, table=0, n_packets=2939, n_bytes=3336868, idle_age=9, priority=3,in_port=1,dl_vlan=255 actions=mod_vlan_vid:93,NORMAL
 cookie=0x0, duration=2869188.084s, table=0, n_packets=1110068, n_bytes=287840569, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=206 actions=mod_vlan_vid:2,NORMAL
 cookie=0x0, duration=1148568.632s, table=0, n_packets=51652, n_bytes=39880915, idle_age=24, hard_age=65534, priority=3,in_port=1,dl_vlan=295 actions=mod_vlan_vid:55,NORMAL
 cookie=0x0, duration=609262.437s, table=0, n_packets=4995, n_bytes=345118, idle_age=14, hard_age=65534, priority=3,in_port=1,dl_vlan=277 actions=mod_vlan_vid:72,NORMAL
 cookie=0x0, duration=1212285.287s, table=0, n_packets=127910, n_bytes=47848009, idle_age=5, hard_age=65534, priority=3,in_port=1,dl_vlan=289 actions=mod_vlan_vid:53,NORMAL
 cookie=0x0, duration=1469832.507s, table=0, n_packets=66919, n_bytes=50958284, idle_age=93, hard_age=65534, priority=3,in_port=1,dl_vlan=324 actions=mod_vlan_vid:47,NORMAL
 cookie=0x0, duration=430474.139s, table=0, n_packets=101531, n_bytes=22514877, idle_age=8, hard_age=65534, priority=3,in_port=1,dl_vlan=309 actions=mod_vlan_vid:87,NORMAL
 cookie=0x0, duration=2939223.400s, table=0, n_packets=39197886, n_bytes=15878291311, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=400 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=440584.097s, table=0, n_packets=94708, n_bytes=24882334, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=361 actions=mod_vlan_vid:81,NORMAL
 cookie=0x0, duration=349639.771s, table=0, n_packets=15123, n_bytes=10862299, idle_age=14, hard_age=65534, priority=3,in_port=1,dl_vlan=325 actions=mod_vlan_vid:88,NORMAL
 cookie=0x0, duration=341218.441s, table=0, n_packets=4854, n_bytes=1123080, idle_age=113, hard_age=65534, priority=3,in_port=1,dl_vlan=307 actions=mod_vlan_vid:89,NORMAL
 cookie=0x0, duration=956169.465s, table=0, n_packets=1976310, n_bytes=195788064, idle_age=2, hard_age=65534, priority=3,in_port=1,dl_vlan=383 actions=mod_vlan_vid:68,NORMAL
 cookie=0x0, duration=444194.333s, table=0, n_packets=251, n_bytes=37638, idle_age=65534, hard_age=65534, priority=3,in_port=1,dl_vlan=212 actions=mod_vlan_vid:76,NORMAL
 cookie=0x0, duration=436460.818s, table=0, n_packets=448207, n_bytes=582425969, idle_age=0, hard_age=65534, priority=3,in_port=1,dl_vlan=305 actions=mod_vlan_vid:86,NORMAL
 cookie=0x0, duration=198.065s, table=0, n_packets=71, n_bytes=13149, idle_age=4, priority=3,in_port=1,dl_vlan=374 actions=mod_vlan_vid:97,NORMAL
 cookie=0x0, duration=286623.215s, table=0, n_packets=76707, n_bytes=17851968, idle_age=6, hard_age=65534, priority=3,in_port=1,dl_vlan=321 actions=mod_vlan_vid:91,NORMAL
 cookie=0x0, duration=2943395.505s, table=0, n_packets=125949, n_bytes=9745756, idle_age=17, hard_age=65534, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=2943396.052s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop

VLAN Translation

root@node-5:~# ovs-ofctl dump-flows br-int | grep 255
 cookie=0x0, duration=97049.752s, table=0, n_packets=7155, n_bytes=6796405, idle_age=61, hard_age=65534, priority=3,in_port=1,dl_vlan=255 actions=mod_vlan_vid:93,NORMAL

From the external vlan (segmentation id ) 255 to internal vlan 93

root@node-5:~# ovs-ofctl dump-flows br-prv | grep 255
 cookie=0x0, duration=97125.859s, table=0, n_packets=5278, n_bytes=475415, idle_age=40, hard_age=65534, priority=4,in_port=2,dl_vlan=93 actions=mod_vlan_vid:255,NORMAL

From the internal vlan 93 to external vlan (segmentation id ) 255

P.S: To find the VLAN ID, refer to the section "To find the vlan id for my tenant/group" below.


Use this command to show the OpenFlow port numbers:
ovs-ofctl show <ovs-bridge>

Use this command to show the VLAN tag on each port:
ovs-vsctl show
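For example, against the bridges and port above (a sketch):

# ovs-ofctl show br-prv ==> lists the OpenFlow port numbers, e.g. the in_port=2 seen in the flows
# ovs-vsctl show | grep -A1 qvoe7b56cc3-8 ==> shows the internal VLAN tag (tag: 93 above)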

Network Node:

Router List

root@node-6:~# neutron --os-tenant-name danny router-list
+--------------------------------------+-------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id                                   | name                                      | external_gateway_info                                                                                                                                                                   |
+--------------------------------------+-------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 6a222d9d-71da-4db6-891b-87d4b6ee8536 | default_network-admin-router-rcg7xvxhll2i | {"network_id": "7cd5fc6c-e47a-420c-8d15-aa51747564d8", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "8e2aa50c-cd0e-4596-97fb-dfc1ecc63245", "ip_address": "10.14.1.153"}]} |
+--------------------------------------+-------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


root@node-6:~# ip net exec qrouter-6a222d9d-71da-4db6-891b-87d4b6ee8536 ip route
default via 10.14.0.254 dev qg-7853deb4-04
10.14.0.0/16 dev qg-7853deb4-04  proto kernel  scope link  src 10.14.1.153
192.168.100.0/24 dev qr-2d7bdc99-90  proto kernel  scope link  src 192.168.100.254


root@node-6:~# ip net exec qrouter-6a222d9d-71da-4db6-891b-87d4b6ee8536 iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 10.14.1.154/32 -j DNAT --to-destination 192.168.100.1
-A neutron-l3-agent-POSTROUTING ! -i qg-7853deb4-04 ! -o qg-7853deb4-04 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775
-A neutron-l3-agent-PREROUTING -d 10.14.1.154/32 -j DNAT --to-destination 192.168.100.1
-A neutron-l3-agent-float-snat -s 192.168.100.1/32 -j SNAT --to-source 10.14.1.154
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -s 192.168.100.0/24 -j SNAT --to-source 10.14.1.153
-A neutron-postrouting-bottom -j neutron-l3-agent-snat

P.S: The VM's floating IP is 10.14.1.154; the DNAT rules above map it to the fixed IP 192.168.100.1, and the float-snat rule maps the outgoing traffic back to 10.14.1.154.
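The floating IP also shows up as an extra address on the router's external qg- interface, which can be confirmed with (a sketch; interface name taken from the route table above):

# ip netns exec qrouter-6a222d9d-71da-4db6-891b-87d4b6ee8536 ip addr show qg-7853deb4-04 ==> expect both 10.14.1.153 and the floating IP 10.14.1.154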

Network List

root@node-6:~# neutron --os-tenant-name danny net-list
+--------------------------------------+-----------------+-------------------------------------------------------+
| id                                   | name            | subnets                                               |
+--------------------------------------+-----------------+-------------------------------------------------------+
| 16b6d042-ce8b-4020-82e0-86a6829a3978 | default_network | 7d5eafc1-2c10-410d-a8e0-1dc648969fcd 192.168.100.0/24 |
| 7cd5fc6c-e47a-420c-8d15-aa51747564d8 | net04_ext       | 8e2aa50c-cd0e-4596-97fb-dfc1ecc63245                  |
+--------------------------------------+-----------------+-------------------------------------------------------+


Responsible network node for DHCP and L3

Now that you've got the network's UUID, you can obtain this by doing the following:
root@node-6:~# neutron dhcp-agent-list-hosting-net 16b6d042-ce8b-4020-82e0-86a6829a3978
+--------------------------------------+-------------------+----------------+-------+
| id                                   | host              | admin_state_up | alive |
+--------------------------------------+-------------------+----------------+-------+
| 34263976-2f1b-48bd-a40e-eb6f0e77c5f4 | node-6.domain.tld | True           | :-)   |
+--------------------------------------+-------------------+----------------+-------+
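The L3 counterpart works the same way (router UUID from the router list above):

root@node-6:~# neutron l3-agent-list-hosting-router 6a222d9d-71da-4db6-891b-87d4b6ee8536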

To find the vlan id for my tenant/group

root@node-6:~# neutron net-show -F provider:segmentation_id 16b6d042-ce8b-4020-82e0-86a6829a3978
+--------------------------+-------+
| Field                    | Value |
+--------------------------+-------+
| provider:segmentation_id | 255   |
+--------------------------+-------+

VLAN Translation

root@node-6:~# ovs-ofctl dump-flows br-prv | grep 255
 cookie=0x0, duration=97539.474s, table=0, n_packets=7188, n_bytes=6770236, idle_age=65, hard_age=65534, priority=4,in_port=2,dl_vlan=330 actions=mod_vlan_vid:255,NORMAL

root@node-6:~# ovs-ofctl dump-flows br-int | grep 255
 cookie=0x0, duration=97562.780s, table=0, n_packets=5264, n_bytes=516743, idle_age=89, hard_age=65534, priority=3,in_port=1,dl_vlan=255 actions=mod_vlan_vid:330,NORMAL

To find the DHCP namespace

root@node-6:~# ip netns | grep 16b6d042-ce8b-4020-82e0-86a6829a3978
qdhcp-16b6d042-ce8b-4020-82e0-86a6829a3978

root@node-6:~# ps aux | grep 16b6d042-ce8b-4020-82e0-86a6829a3978
nobody   11958  0.0  0.0  28204  1048 ?        S    Jan25   0:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap74d4769b-5b --except-interface=lo --pid-file=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/host --addn-hosts=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/opts --dhcp-leasefile=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/leases --dhcp-range=set:tag0,192.168.100.0,static,600s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq-neutron.conf --domain=openstacklocal
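The dnsmasq arguments above show where the DHCP state lives, so the host and lease files can be inspected directly (paths copied from the process listing):

root@node-6:~# cat /var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/host ==> static MAC/IP/hostname assignments
root@node-6:~# cat /var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/leases ==> current leases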


Subnet List

root@node-6:~# neutron --os-tenant-name danny subnet-list
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+
| id                                   | name                                              | cidr             | allocation_pools                                     |
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+
| 7d5eafc1-2c10-410d-a8e0-1dc648969fcd | default_network-admin-private_subnet-kvrtht6fwqhr | 192.168.100.0/24 | {"start": "192.168.100.1", "end": "192.168.100.253"} |
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+

Port List

root@node-6:~# neutron --os-tenant-name danny port-list
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 2d7bdc99-9053-448f-9f4a-e71ba8450ac2 |      | fa:16:3e:da:12:25 | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.254"} |
| 74d4769b-5b7e-4523-ba34-64672d4ac8f1 |      | fa:16:3e:90:bd:88 | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.2"}   |
| e7b56cc3-8fcd-4a8b-bd00-79b7a625acdc |      | fa:16:3e:d4:88:53 | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.1"}   |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+

Show Port of DHCP Service

root@node-6:~# neutron --os-tenant-name danny port-show 74d4769b-5b7e-4523-ba34-64672d4ac8f1
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:vnic_type     | normal                                                                               |
| device_id             | dhcp7a15cee0-2af1-5441-b1dc-94897ef4dee9-16b6d042-ce8b-4020-82e0-86a6829a3978        |
| device_owner          | network:dhcp                                                                         |
| extra_dhcp_opts       |                                                                                      |
| fixed_ips             | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.2"} |
| id                    | 74d4769b-5b7e-4523-ba34-64672d4ac8f1                                                 |
| mac_address           | fa:16:3e:90:bd:88                                                                    |
| name                  |                                                                                      |
| network_id            | 16b6d042-ce8b-4020-82e0-86a6829a3978                                                 |
| security_groups       |                                                                                      |
| status                | ACTIVE                                                                               |
| tenant_id             | fc48558ea8684d14a1da30f6c5028064                                                     |
+-----------------------+--------------------------------------------------------------------------------------+


Reference:
Iptables tables and chains