Wednesday, July 29, 2015

[Python] Effective Python

Two weeks ago, a friend and I discussed best practices for Django and Python programming.
A while later, an idea occurred to me: I had read "Effective C++", so why shouldn't there be an "Effective Python"? It turns out there is: http://www.effectivepython.com/
I hope I can find some time to work through this book.

Monday, July 27, 2015

[Neutron] Slow network speed between VM and external

Recently I ran into a VM with pretty low network performance. For instance, when uploading an image to Glance from the VM with the glance CLI, the actual transfer speed was around:
RX: 130 Kbps
TX: 150 Kbps
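A rough sketch of how a measurement like this can be reproduced; the image name and file are placeholders, and sar from the sysstat package is just one of several ways to sample the interface rate:

# Upload an image from inside the VM with the Juno-era glance v1 CLI;
# the image name and file below are placeholders.
glance image-create --name "test-image" \
  --disk-format qcow2 --container-format bare \
  --file test-image.qcow2 &

# In a second shell, sample the eth0 RX/TX rate once per second (sysstat).
sar -n DEV 1 | grep eth0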

My OpenStack deployment runs Neutron with GRE tunneling for tenant network segmentation. Why so slow? After searching the web, I found that other people had run into the same issue.

The workaround they suggest is to disable GRO and TSO inside the VM, and it does work:
> ethtool -K eth0 gro off
> ethtool -K eth0 tso off
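To see what the interface is actually doing before and after the change, ethtool can list the current offload settings (lowercase -k queries, uppercase -K sets):

# Show whether TSO and GRO are currently enabled on eth0
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-receive-offload'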

But in my case this did not address the root cause, so I spent some time researching TCP segmentation offload and GRE. Eventually I found the real cause:
The MTU needs to be adjusted when using a GRE-based network service.
http://docs.openstack.org/juno/install-guide/install/yum/content/neutron-network-node.html
Quote:
"Tunneling protocols such as GRE include additional packet headers that increase overhead and decrease space available for the payload or user data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes.Internet protocol (IP) networks contain the path MTU discovery (PMTUD) mechanism to detect end-to-end MTU and adjust packet size accordingly. However, some operating systems and networks block or otherwise lack support for PMTUD causing performance degradation or connectivity failure.
Ideally, you can prevent these problems by enabling jumbo frames on the physical network that contains your tenant virtual networks. Jumbo frames support MTUs up to approximately 9000 bytes which negates the impact of GRE overhead on virtual networks. However, many network devices lack support for jumbo frames and OpenStack administrators often lack control over network infrastructure. Given the latter complications, you can also prevent MTU problems by reducing the instance MTU to account for GRE overhead. Determining the proper MTU value often takes experimentation, but 1454 bytes works in most environments. You can configure the DHCP server that assigns IP addresses to your instances to also adjust the MTU."
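A quick way to observe the MTU problem from inside an instance is to send pings with the don't-fragment bit set: with 8 bytes of ICMP header and 20 bytes of IP header, a 1426-byte payload corresponds to a 1454-byte packet. This is just a sanity-check sketch; <external-host> stands for any reachable host outside the tenant network:

# 1472 + 28 = 1500 bytes: full-size frames tend to stall or drop over the GRE tunnel
ping -M do -s 1472 <external-host>

# 1426 + 28 = 1454 bytes: should pass once the instance MTU accounts for GRE overhead
ping -M do -s 1426 <external-host>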

After applying the following settings on the controller node (a verification sketch follows the steps), the transfer speed went up to:
RX: 31323 Kbps
TX: 31464 Kbps
  1. Edit the /etc/neutron/dhcp_agent.ini file and complete the following action:
    1. In the [DEFAULT] section, enable the dnsmasq configuration file:
      [DEFAULT]
      ...
      dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
  2. Create and edit the /etc/neutron/dnsmasq-neutron.conf file and complete the following action:
    1. Enable the DHCP MTU option (26) and configure it to 1454 bytes:
      dhcp-option-force=26,1454
  3. Kill any existing dnsmasq processes:
    # pkill dnsmasq
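After the DHCP agent respawns dnsmasq with the new option, instances only pick up the lower MTU when they renew their DHCP lease. A quick check from inside an instance (assuming a dhclient-based image and eth0 as the tenant interface):

# Release and renew the lease so DHCP option 26 takes effect
sudo dhclient -r eth0 && sudo dhclient eth0

# The interface should now report "mtu 1454"
ip link show eth0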

So the network performance improved by roughly 200 times.
Wow...



Reference:
TCP segmentation offload
https://forum.ivorde.com/linux-tso-tcp-segmentation-offload-what-it-means-and-how-to-enable-disable-it-t19721.html
TCP in Linux Kernel
http://vger.kernel.org/~davem/tcp_output.html
Understanding TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) in a VMware environment
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2055140