Thursday, January 28, 2016

[Python] How to get key and value from a custom header or from a form data with django rest framework

The following Python code is part of the POST handler of my RESTful API. I excerpt only this part because I want to highlight my experiment with reading key/value pairs either from a custom HTTP header or from form data.

""" Deal with the header property """
header_property_kwargs = {}
for field in request.META:
    logger.debug("...[Testing]...field,value==> %s,%s" % (field, request.META[field]))
    if 'HTTP_X_EXTRA_PROPERTY_' in field:        
     new_field = field.replace('HTTP_X_EXTRA_PROPERTY_','').lower()
        header_property_kwargs[new_field] = request.META[field]
logger.debug("...[Testing]...header_properties==> %s" % header_property_kwargs)

""" Deal with the extra_property  """
extra_property_kwargs = {}
if 'extra_property' in request.DATA.keys():
    extra_property = request.DATA['extra_property']
    try:
        property_list = extra_property.split("|")
        for property_item in property_list:
            key_value = property_item.split("=")
            extra_property_kwargs[key_value[0]] = key_value[1]
    except Exception as e:
        logger.error("Cannot parse extra_property" % e)
        return Response(status=status.HTTP_400_BAD_REQUEST)
logger.debug("...[Testing]...extra_properties==> %s" % extra_property_kwargs)

If I use the following curl command to exercise the code, I get this result:

curl -v -X POST -H 'Content-Type: application/json; indent=4' -H 'x-extra-property-version1: ccccc' -H 'x-extra-property-version2: ddddd' -u teyen.liu@gmail.com:123456 -d '{"extra_property":"version1=aaa|version2=bbb"}'  http://10.0.2.10:8000/api/test/

[Testing]...header_properties==> {'version1': 'ccccc', 'version2': 'ddddd'}
[Testing]...extra_properties==> {u'version1': u'aaa', u'version2': u'bbb'}

It proves that both approaches can do the same job.
P.S: For the HTTP header approach, it seems the header name must use the "X-" prefix; otherwise the parameter won't appear in the request's headers.
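As a side note, the transformation between a custom header name and its request.META key can be sketched in plain Python (this is not Django itself, and the helper names are mine):

```python
def header_to_meta_key(header_name):
    # Mimic Django's mapping: upper-case the name, turn dashes into
    # underscores, and prepend HTTP_.
    return 'HTTP_' + header_name.upper().replace('-', '_')

def extract_extra_properties(meta):
    # Pull the custom properties back out of a META-like dict,
    # the same way the view code above does.
    prefix = 'HTTP_X_EXTRA_PROPERTY_'
    return {key[len(prefix):].lower(): value
            for key, value in meta.items() if key.startswith(prefix)}

meta = {
    header_to_meta_key('x-extra-property-version1'): 'ccccc',
    header_to_meta_key('x-extra-property-version2'): 'ddddd',
    'CONTENT_TYPE': 'application/json',
}
print(extract_extra_properties(meta))
# {'version1': 'ccccc', 'version2': 'ddddd'}
```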


Reference:
http://stackoverflow.com/questions/28345842/getting-custom-header-on-post-request-with-django-rest-framework
curl --header "X-MyHeader: 123" --data "test=test" http://127.0.0.1:8000/api/update_log/
The name of the meta data attribute of request is in upper case:
print request.META
Your header will be available as:
request.META['HTTP_X_MYHEADER']
Or:
request.META.get('HTTP_X_MYHEADER') # return `None` if no such header

[Python] A little thing about arguments and kwargs


http://stackoverflow.com/questions/1769403/understanding-kwargs-in-python
You can use **kwargs to let your functions take an arbitrary number of keyword arguments:
>>> def print_keyword_args(**kwargs):
...     # kwargs is a dict of the keyword args passed to the function
...     for key, value in kwargs.iteritems():
...         print "%s = %s" % (key, value)
... 
>>> print_keyword_args(first_name="John", last_name="Doe")
first_name = John
last_name = Doe
You can also use the **kwargs syntax when calling functions by constructing a dictionary of keyword arguments and passing it to your function:
>>> kwargs = {'first_name': 'Bobby', 'last_name': 'Smith'}
>>> print_keyword_args(**kwargs)
first_name = Bobby
last_name = Smith
func(**{'type':'Event'})
is equivalent to
func(type='Event')

http://stackoverflow.com/questions/988228/converting-a-string-to-dictionary
How to convert a string to dict?
Starting in Python 2.6 you can use the built-in ast.literal_eval:
>>> import ast
>>> ast.literal_eval("{'muffin' : 'lolz', 'foo' : 'kitty'}")
{'muffin': 'lolz', 'foo': 'kitty'}
This is safer than using eval. As its own docs say:
>>> help(ast.literal_eval)
Help on function literal_eval in module ast:

literal_eval(node_or_string)
    Safely evaluate an expression node or a string containing a Python
    expression.  The string or node provided may only consist of the following
    Python literal structures: strings, numbers, tuples, lists, dicts, booleans,
    and None.
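Tying the two references back to the first post: the "k1=v1|k2=v2" string from the form data can be parsed into a dict and then passed on with **kwargs. A minimal sketch, where create_record is a hypothetical consumer of the parsed pairs:

```python
def parse_extra_property(raw):
    # Turn "k1=v1|k2=v2" into a dict, like the view code above.
    return dict(item.split('=', 1) for item in raw.split('|'))

def create_record(**kwargs):
    # Hypothetical consumer of the parsed key/value pairs.
    return sorted(kwargs.items())

extra_kwargs = parse_extra_property('version1=aaa|version2=bbb')
print(create_record(**extra_kwargs))
# [('version1', 'aaa'), ('version2', 'bbb')]
```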






Tuesday, January 26, 2016

[Neutron] The Neutron Networking trace records

This article doesn't give much explanation or description of the scripts and results, because it is a trace record of my study of my OpenStack environment, based on the following references:
https://www.rdoproject.org/networking/networking-in-too-much-detail/
https://blogs.oracle.com/ronen/entry/diving_into_openstack_network_architecture1
https://www.gitbook.com/book/yeasy/openstack_understand_neutron/details

This picture is quite important because it explains the Neutron networking architecture clearly.


So, I have a VM/instance named "mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4", and we can get:

> nova --os-tenant-name danny list
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------------------------------+
| ID                                   | Name                                                  | Status | Task State | Power State | Networks                                   |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------------------------------+
| aa2f621c-65e0-4d89-bb6d-c66054ee9250 | mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4 | ACTIVE | -          | Running     | default_network=192.168.100.1, 10.14.1.154 |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------------------------------+

The instance id is aa2f621c-65e0-4d89-bb6d-c66054ee9250

root@node-6:~# nova show aa2f621c-65e0-4d89-bb6d-c66054ee9250
+--------------------------------------+---------------------------------------------------------------+
| Property                             | Value                                                         |
+--------------------------------------+---------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                          |
| OS-EXT-SRV-ATTR:host                 | node-5.domain.tld                                             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | node-5.domain.tld                                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000630                                             |
| OS-EXT-STS:power_state               | 1                                                             |
| OS-EXT-STS:task_state                | -                                                             |
| OS-EXT-STS:vm_state                  | active                                                        |
| OS-SRV-USG:launched_at               | 2016-01-25T06:17:13.000000                                    |
| OS-SRV-USG:terminated_at             | -                                                             |
| accessIPv4                           |                                                               |
| accessIPv6                           |                                                               |
| config_drive                         |                                                               |
| created                              | 2016-01-25T06:17:05Z                                          |
| default_network network              | 192.168.100.1, 10.14.1.154                                    |
| flavor                               | 1core2GBmemory20GBdisk (06d3aafd-8819-4034-a20d-2a4d2340bae0) |
| hostId                               | 8deeb163a8215e6d14e89b44189791aa699211a697d625b85a82334a      |
| id                                   | aa2f621c-65e0-4d89-bb6d-c66054ee9250                          |
| image                                | Ubuntu14.04-2015.3.0 (0d95c5a8-ea4e-45a4-ba79-6b1c8f2acf46)   |
| key_name                             | 2d8ed97974a34782ba3c1eda2cc1f705                              |
| metadata                             | {}                                                            |
| name                                 | mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4         |
| os-extended-volumes:volumes_attached | []                                                            |
| progress                             | 0                                                             |
| security_groups                      | default_sg                                                    |
| status                               | ACTIVE                                                        |
| tenant_id                            | fc48558ea8684d14a1da30f6c5028064                              |
| updated                              | 2016-01-25T06:24:45Z                                          |
| user_id                              | 2d8ed97974a34782ba3c1eda2cc1f705                              |
+--------------------------------------+---------------------------------------------------------------+


$ nova-manage vm list | grep mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4
mi--lbod6yoctnm3-0-jbcgnagn4dck-instance-xvti2docwkl4 node-5.domain.tld 1core2GBmemory20GBdisk active     2016-01-25 06:17:13        0d95c5a8-ea4e-45a4-ba79-6b1c8f2acf46                     fc48558ea8684d14a1da30f6c5028064 2d8ed97974a34782ba3c1eda2cc1f705 nova       0

From now on, we can go through the Neutron networking on the compute and network/service hosts.

Compute Node:

Find the related interface and bridge that belong to our VM

Go to the instance's folder:
root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# grep -i tap libvirt.xml
      <target dev="tape7b56cc3-8f"/>

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# iptables -S | grep tape7b56cc3-8f
-A neutron-openvswi-FORWARD -m physdev --physdev-out tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-FORWARD -m physdev --physdev-in tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-INPUT -m physdev --physdev-in tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-oe7b56cc3-8
-A neutron-openvswi-sg-chain -m physdev --physdev-out tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-ie7b56cc3-8
-A neutron-openvswi-sg-chain -m physdev --physdev-in tape7b56cc3-8f --physdev-is-bridged -j neutron-openvswi-oe7b56cc3-8

Apply the security group in Linux Bridge
root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# iptables -L neutron-openvswi-sg-chain | grep tape7b56cc3-8f
neutron-openvswi-ie7b56cc3-8  all  --  anywhere             anywhere             PHYSDEV match --physdev-out tape7b56cc3-8f --physdev-is-bridged
neutron-openvswi-oe7b56cc3-8  all  --  anywhere             anywhere             PHYSDEV match --physdev-in tape7b56cc3-8f --physdev-is-bridged


root@node-5:~# brctl show
....
qbre7b56cc3-8f          8000.1ab45f3f1100       no              qvbe7b56cc3-8f
                                                        tape7b56cc3-8f
....
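All the device names above share the first 11 characters of the Neutron port UUID (e7b56cc3-8fcd-... becomes the prefix e7b56cc3-8f). A small sketch of the naming convention (the helper name is mine):

```python
def port_device_names(port_id):
    # The tap/qbr/qvb/qvo devices all carry the same truncated
    # 11-character prefix of the Neutron port UUID.
    prefix = port_id[:11]
    return {
        'tap': 'tap' + prefix,  # VM-facing side of the Linux bridge
        'qbr': 'qbr' + prefix,  # Linux bridge (where iptables rules apply)
        'qvb': 'qvb' + prefix,  # veth end attached to the Linux bridge
        'qvo': 'qvo' + prefix,  # veth end attached to br-int (OVS)
    }

names = port_device_names('e7b56cc3-8fcd-4a8b-bd00-79b7a625acdc')
print(names['tap'], names['qvo'])
# tape7b56cc3-8f qvoe7b56cc3-8f
```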


$ ovs-vsctl show
    ....
        Port "qvoe7b56cc3-8f"
            tag: 93
            Interface "qvoe7b56cc3-8f"
    Bridge br-prv
        Port br-prv
            Interface br-prv
                type: internal
        Port "p_br-prv-0"
            Interface "p_br-prv-0"
                type: internal
        Port phy-br-prv
            Interface phy-br-prv
                type: patch
                options: {peer=int-br-prv}
    Bridge br-floating
        Port "p_br-floating-0"
            Interface "p_br-floating-0"
                type: internal
        Port br-floating
            Interface br-floating
                type: internal
    ovs_version: "2.3.1"


Or here is another approach.

Use the virsh commands:
# virsh list
# virsh domiflist <your instance>


Find the bridge that the VM's interface is connected to
# brctl show qbr10257204-b0

Find veth pairs
# ethtool -S qvb10257204-b0
# ip link list | grep '41: '

OVS flow tables

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# ovs-ofctl dump-flows br-floating
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2944114.894s, table=0, n_packets=5330801, n_bytes=454517092, idle_age=0, hard_age=65534, priority=0 actions=NORMAL

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# ovs-ofctl dump-flows br-prv
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2943385.989s, table=0, n_packets=54475876, n_bytes=21932146773, idle_age=0, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=1212275.443s, table=0, n_packets=119001, n_bytes=9280541, idle_age=1, hard_age=65534, priority=4,in_port=2,dl_vlan=53 actions=mod_vlan_vid:289,NORMAL
 cookie=0x0, duration=2939213.555s, table=0, n_packets=40203246, n_bytes=7594046533, idle_age=0, hard_age=65534, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:400,NORMAL
 cookie=0x0, duration=443296.428s, table=0, n_packets=17855, n_bytes=1651227, idle_age=27, hard_age=65534, priority=4,in_port=2,dl_vlan=78 actions=mod_vlan_vid:388,NORMAL
 cookie=0x0, duration=440574.253s, table=0, n_packets=90232, n_bytes=6655951, idle_age=14, hard_age=65534, priority=4,in_port=2,dl_vlan=81 actions=mod_vlan_vid:361,NORMAL
 cookie=0x0, duration=430464.292s, table=0, n_packets=97628, n_bytes=7077738, idle_age=1, hard_age=65534, priority=4,in_port=2,dl_vlan=87 actions=mod_vlan_vid:309,NORMAL
 cookie=0x0, duration=286613.370s, table=0, n_packets=74230, n_bytes=5428601, idle_age=1, hard_age=65534, priority=4,in_port=2,dl_vlan=91 actions=mod_vlan_vid:321,NORMAL
 cookie=0x0, duration=188.222s, table=0, n_packets=276, n_bytes=25628, idle_age=0, priority=4,in_port=2,dl_vlan=97 actions=mod_vlan_vid:374,NORMAL
 cookie=0x0, duration=8355.066s, table=0, n_packets=2228, n_bytes=178873, idle_age=85, priority=4,in_port=2,dl_vlan=93 actions=mod_vlan_vid:255,NORMAL
 cookie=0x0, duration=2869178.234s, table=0, n_packets=1063994, n_bytes=83782069, idle_age=19, hard_age=65534, priority=4,in_port=2,dl_vlan=2 actions=mod_vlan_vid:206,NORMAL
 cookie=0x0, duration=1148558.787s, table=0, n_packets=41929, n_bytes=4005468, idle_age=14, hard_age=65534, priority=4,in_port=2,dl_vlan=55 actions=mod_vlan_vid:295,NORMAL
 cookie=0x0, duration=436450.974s, table=0, n_packets=268258, n_bytes=26273333, idle_age=0, hard_age=65534, priority=4,in_port=2,dl_vlan=86 actions=mod_vlan_vid:305,NORMAL
 cookie=0x0, duration=1134136.903s, table=0, n_packets=41386, n_bytes=3955238, idle_age=107, hard_age=65534, priority=4,in_port=2,dl_vlan=57 actions=mod_vlan_vid:274,NORMAL
 cookie=0x0, duration=349629.927s, table=0, n_packets=12556, n_bytes=1235546, idle_age=4, hard_age=65534, priority=4,in_port=2,dl_vlan=88 actions=mod_vlan_vid:325,NORMAL
 cookie=0x0, duration=956159.621s, table=0, n_packets=1414006, n_bytes=2522828496, idle_age=6, hard_age=65534, priority=4,in_port=2,dl_vlan=68 actions=mod_vlan_vid:383,NORMAL
 cookie=0x0, duration=444184.489s, table=0, n_packets=338, n_bytes=37368, idle_age=65534, hard_age=65534, priority=4,in_port=2,dl_vlan=76 actions=mod_vlan_vid:212,NORMAL
 cookie=0x0, duration=341208.593s, table=0, n_packets=4535, n_bytes=806468, idle_age=104, hard_age=65534, priority=4,in_port=2,dl_vlan=89 actions=mod_vlan_vid:307,NORMAL
 cookie=0x0, duration=1046690.998s, table=0, n_packets=341, n_bytes=37606, idle_age=65534, hard_age=65534, priority=4,in_port=2,dl_vlan=63 actions=mod_vlan_vid:247,NORMAL
 cookie=0x0, duration=1469822.661s, table=0, n_packets=54812, n_bytes=5159092, idle_age=83, hard_age=65534, priority=4,in_port=2,dl_vlan=47 actions=mod_vlan_vid:324,NORMAL
 cookie=0x0, duration=609252.595s, table=0, n_packets=473, n_bytes=403134, idle_age=65534, hard_age=65534, priority=4,in_port=2,dl_vlan=72 actions=mod_vlan_vid:277,NORMAL
 cookie=0x0, duration=2943385.550s, table=0, n_packets=5948, n_bytes=493768, idle_age=189, hard_age=65534, priority=2,in_port=2 actions=drop

root@node-5:/var/lib/nova/instances/aa2f621c-65e0-4d89-bb6d-c66054ee9250# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=2943396.106s, table=0, n_packets=64104253, n_bytes=14989605175, idle_age=0, hard_age=65534, priority=1 actions=NORMAL
 cookie=0x0, duration=1134146.751s, table=0, n_packets=51173, n_bytes=39495387, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=274 actions=mod_vlan_vid:57,NORMAL
 cookie=0x0, duration=443306.272s, table=0, n_packets=22067, n_bytes=18326623, idle_age=37, hard_age=65534, priority=3,in_port=1,dl_vlan=388 actions=mod_vlan_vid:78,NORMAL
 cookie=0x0, duration=1046700.841s, table=0, n_packets=244, n_bytes=37459, idle_age=65534, hard_age=65534, priority=3,in_port=1,dl_vlan=247 actions=mod_vlan_vid:63,NORMAL
 cookie=0x0, duration=8364.913s, table=0, n_packets=2939, n_bytes=3336868, idle_age=9, priority=3,in_port=1,dl_vlan=255 actions=mod_vlan_vid:93,NORMAL
 cookie=0x0, duration=2869188.084s, table=0, n_packets=1110068, n_bytes=287840569, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=206 actions=mod_vlan_vid:2,NORMAL
 cookie=0x0, duration=1148568.632s, table=0, n_packets=51652, n_bytes=39880915, idle_age=24, hard_age=65534, priority=3,in_port=1,dl_vlan=295 actions=mod_vlan_vid:55,NORMAL
 cookie=0x0, duration=609262.437s, table=0, n_packets=4995, n_bytes=345118, idle_age=14, hard_age=65534, priority=3,in_port=1,dl_vlan=277 actions=mod_vlan_vid:72,NORMAL
 cookie=0x0, duration=1212285.287s, table=0, n_packets=127910, n_bytes=47848009, idle_age=5, hard_age=65534, priority=3,in_port=1,dl_vlan=289 actions=mod_vlan_vid:53,NORMAL
 cookie=0x0, duration=1469832.507s, table=0, n_packets=66919, n_bytes=50958284, idle_age=93, hard_age=65534, priority=3,in_port=1,dl_vlan=324 actions=mod_vlan_vid:47,NORMAL
 cookie=0x0, duration=430474.139s, table=0, n_packets=101531, n_bytes=22514877, idle_age=8, hard_age=65534, priority=3,in_port=1,dl_vlan=309 actions=mod_vlan_vid:87,NORMAL
 cookie=0x0, duration=2939223.400s, table=0, n_packets=39197886, n_bytes=15878291311, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=400 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=440584.097s, table=0, n_packets=94708, n_bytes=24882334, idle_age=1, hard_age=65534, priority=3,in_port=1,dl_vlan=361 actions=mod_vlan_vid:81,NORMAL
 cookie=0x0, duration=349639.771s, table=0, n_packets=15123, n_bytes=10862299, idle_age=14, hard_age=65534, priority=3,in_port=1,dl_vlan=325 actions=mod_vlan_vid:88,NORMAL
 cookie=0x0, duration=341218.441s, table=0, n_packets=4854, n_bytes=1123080, idle_age=113, hard_age=65534, priority=3,in_port=1,dl_vlan=307 actions=mod_vlan_vid:89,NORMAL
 cookie=0x0, duration=956169.465s, table=0, n_packets=1976310, n_bytes=195788064, idle_age=2, hard_age=65534, priority=3,in_port=1,dl_vlan=383 actions=mod_vlan_vid:68,NORMAL
 cookie=0x0, duration=444194.333s, table=0, n_packets=251, n_bytes=37638, idle_age=65534, hard_age=65534, priority=3,in_port=1,dl_vlan=212 actions=mod_vlan_vid:76,NORMAL
 cookie=0x0, duration=436460.818s, table=0, n_packets=448207, n_bytes=582425969, idle_age=0, hard_age=65534, priority=3,in_port=1,dl_vlan=305 actions=mod_vlan_vid:86,NORMAL
 cookie=0x0, duration=198.065s, table=0, n_packets=71, n_bytes=13149, idle_age=4, priority=3,in_port=1,dl_vlan=374 actions=mod_vlan_vid:97,NORMAL
 cookie=0x0, duration=286623.215s, table=0, n_packets=76707, n_bytes=17851968, idle_age=6, hard_age=65534, priority=3,in_port=1,dl_vlan=321 actions=mod_vlan_vid:91,NORMAL
 cookie=0x0, duration=2943395.505s, table=0, n_packets=125949, n_bytes=9745756, idle_age=17, hard_age=65534, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=2943396.052s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop

VLAN Translation

root@node-5:~# ovs-ofctl dump-flows br-int | grep 255
 cookie=0x0, duration=97049.752s, table=0, n_packets=7155, n_bytes=6796405, idle_age=61, hard_age=65534, priority=3,in_port=1,dl_vlan=255 actions=mod_vlan_vid:93,NORMAL

From the external vlan (segmentation id) 255 to the internal vlan 93.

root@node-5:~# ovs-ofctl dump-flows br-prv | grep 255
 cookie=0x0, duration=97125.859s, table=0, n_packets=5278, n_bytes=475415, idle_age=40, hard_age=65534, priority=4,in_port=2,dl_vlan=93 actions=mod_vlan_vid:255,NORMAL

From the internal vlan 93 to the external vlan (segmentation id) 255.
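The match/rewrite pair in these flow entries can also be pulled out mechanically. Here is a small sketch that parses one ovs-ofctl line with a regex (the function name is mine):

```python
import re

def vlan_translation(flow_line):
    # Extract (matched vlan, rewritten vlan) from a flow entry of the
    # form "...dl_vlan=<m> actions=mod_vlan_vid:<r>,NORMAL".
    m = re.search(r'dl_vlan=(\d+) actions=mod_vlan_vid:(\d+)', flow_line)
    return (int(m.group(1)), int(m.group(2))) if m else None

flow = (' cookie=0x0, duration=97125.859s, table=0, n_packets=5278, '
        'n_bytes=475415, idle_age=40, hard_age=65534, '
        'priority=4,in_port=2,dl_vlan=93 actions=mod_vlan_vid:255,NORMAL')
print(vlan_translation(flow))
# (93, 255)
```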

P.S: To find the vlan id, please refer to the section "To find the vlan id for my tenant/group" below.


Use this command to show the OpenFlow port numbers:
ovs-ofctl show <ovs-bridge>

Use this command to show the vlan tag on each port:
ovs-vsctl show

Network Node:

Router List

root@node-6:~# neutron --os-tenant-name danny router-list
+--------------------------------------+-------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id                                   | name                                      | external_gateway_info                                                                                                                                                                   |
+--------------------------------------+-------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 6a222d9d-71da-4db6-891b-87d4b6ee8536 | default_network-admin-router-rcg7xvxhll2i | {"network_id": "7cd5fc6c-e47a-420c-8d15-aa51747564d8", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "8e2aa50c-cd0e-4596-97fb-dfc1ecc63245", "ip_address": "10.14.1.153"}]} |
+--------------------------------------+-------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


root@node-6:~# ip net exec qrouter-6a222d9d-71da-4db6-891b-87d4b6ee8536 ip route
default via 10.14.0.254 dev qg-7853deb4-04
10.14.0.0/16 dev qg-7853deb4-04  proto kernel  scope link  src 10.14.1.153
192.168.100.0/24 dev qr-2d7bdc99-90  proto kernel  scope link  src 192.168.100.254


root@node-6:~# ip net exec qrouter-6a222d9d-71da-4db6-891b-87d4b6ee8536 iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 10.14.1.154/32 -j DNAT --to-destination 192.168.100.1
-A neutron-l3-agent-POSTROUTING ! -i qg-7853deb4-04 ! -o qg-7853deb4-04 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775
-A neutron-l3-agent-PREROUTING -d 10.14.1.154/32 -j DNAT --to-destination 192.168.100.1
-A neutron-l3-agent-float-snat -s 192.168.100.1/32 -j SNAT --to-source 10.14.1.154
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -s 192.168.100.0/24 -j SNAT --to-source 10.14.1.153
-A neutron-postrouting-bottom -j neutron-l3-agent-snat

P.S: The VM's floating IP is 10.14.1.154
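The DNAT rules above encode the floating-to-fixed mapping; a small sketch that recovers it from `iptables -t nat -S` output (the parsing helper is mine, not a Neutron tool):

```python
import re

def floating_ip_map(rules):
    # Map floating IP -> fixed IP from the l3-agent DNAT rules.
    mapping = {}
    for rule in rules:
        m = re.search(r'-d (\S+)/32 -j DNAT --to-destination (\S+)', rule)
        if m:
            mapping[m.group(1)] = m.group(2)
    return mapping

rules = [
    '-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp '
    '--dport 80 -j REDIRECT --to-ports 8775',
    '-A neutron-l3-agent-PREROUTING -d 10.14.1.154/32 '
    '-j DNAT --to-destination 192.168.100.1',
]
print(floating_ip_map(rules))
# {'10.14.1.154': '192.168.100.1'}
```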

Network List

root@node-6:~# neutron --os-tenant-name danny net-list
+--------------------------------------+-----------------+-------------------------------------------------------+
| id                                   | name            | subnets                                               |
+--------------------------------------+-----------------+-------------------------------------------------------+
| 16b6d042-ce8b-4020-82e0-86a6829a3978 | default_network | 7d5eafc1-2c10-410d-a8e0-1dc648969fcd 192.168.100.0/24 |
| 7cd5fc6c-e47a-420c-8d15-aa51747564d8 | net04_ext       | 8e2aa50c-cd0e-4596-97fb-dfc1ecc63245                  |
+--------------------------------------+-----------------+-------------------------------------------------------+


Responsible network node for DHCP and L3

Now that you've got the network's UUID, you can obtain this by doing the following:
root@node-6:~# neutron dhcp-agent-list-hosting-net 16b6d042-ce8b-4020-82e0-86a6829a3978
+--------------------------------------+-------------------+----------------+-------+
| id                                   | host              | admin_state_up | alive |
+--------------------------------------+-------------------+----------------+-------+
| 34263976-2f1b-48bd-a40e-eb6f0e77c5f4 | node-6.domain.tld | True           | :-)   |
+--------------------------------------+-------------------+----------------+-------+

To find the vlan id for my tenant/group

root@node-6:~# neutron net-show -F provider:segmentation_id 16b6d042-ce8b-4020-82e0-86a6829a3978
+--------------------------+-------+
| Field                    | Value |
+--------------------------+-------+
| provider:segmentation_id | 255   |
+--------------------------+-------+

VLAN Translation

root@node-6:~# ovs-ofctl dump-flows br-prv | grep 255
 cookie=0x0, duration=97539.474s, table=0, n_packets=7188, n_bytes=6770236, idle_age=65, hard_age=65534, priority=4,in_port=2,dl_vlan=330 actions=mod_vlan_vid:255,NORMAL

root@node-6:~# ovs-ofctl dump-flows br-int | grep 255
 cookie=0x0, duration=97562.780s, table=0, n_packets=5264, n_bytes=516743, idle_age=89, hard_age=65534, priority=3,in_port=1,dl_vlan=255 actions=mod_vlan_vid:330,NORMAL

To find the DHCP namespace

root@node-6:~# ip netns | grep 16b6d042-ce8b-4020-82e0-86a6829a3978
qdhcp-16b6d042-ce8b-4020-82e0-86a6829a3978

root@node-6:~# ps aux | grep 16b6d042-ce8b-4020-82e0-86a6829a3978
nobody   11958  0.0  0.0  28204  1048 ?        S    Jan25   0:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap74d4769b-5b --except-interface=lo --pid-file=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/host --addn-hosts=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/opts --dhcp-leasefile=/var/lib/neutron/dhcp/16b6d042-ce8b-4020-82e0-86a6829a3978/leases --dhcp-range=set:tag0,192.168.100.0,static,600s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq-neutron.conf --domain=openstacklocal


Subnet List

root@node-6:~# neutron --os-tenant-name danny subnet-list
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+
| id                                   | name                                              | cidr             | allocation_pools                                     |
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+
| 7d5eafc1-2c10-410d-a8e0-1dc648969fcd | default_network-admin-private_subnet-kvrtht6fwqhr | 192.168.100.0/24 | {"start": "192.168.100.1", "end": "192.168.100.253"} |
+--------------------------------------+---------------------------------------------------+------------------+------------------------------------------------------+

Port List

root@node-6:~# neutron --os-tenant-name danny port-list
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 2d7bdc99-9053-448f-9f4a-e71ba8450ac2 |      | fa:16:3e:da:12:25 | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.254"} |
| 74d4769b-5b7e-4523-ba34-64672d4ac8f1 |      | fa:16:3e:90:bd:88 | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.2"}   |
| e7b56cc3-8fcd-4a8b-bd00-79b7a625acdc |      | fa:16:3e:d4:88:53 | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.1"}   |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+

Show Port of DHCP Service

root@node-6:~# neutron --os-tenant-name danny port-show 74d4769b-5b7e-4523-ba34-64672d4ac8f1
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:vnic_type     | normal                                                                               |
| device_id             | dhcp7a15cee0-2af1-5441-b1dc-94897ef4dee9-16b6d042-ce8b-4020-82e0-86a6829a3978        |
| device_owner          | network:dhcp                                                                         |
| extra_dhcp_opts       |                                                                                      |
| fixed_ips             | {"subnet_id": "7d5eafc1-2c10-410d-a8e0-1dc648969fcd", "ip_address": "192.168.100.2"} |
| id                    | 74d4769b-5b7e-4523-ba34-64672d4ac8f1                                                 |
| mac_address           | fa:16:3e:90:bd:88                                                                    |
| name                  |                                                                                      |
| network_id            | 16b6d042-ce8b-4020-82e0-86a6829a3978                                                 |
| security_groups       |                                                                                      |
| status                | ACTIVE                                                                               |
| tenant_id             | fc48558ea8684d14a1da30f6c5028064                                                     |
+-----------------------+--------------------------------------------------------------------------------------+


Reference:
Iptables tables and chains