OpenStack – Doing the Neutron Dance

I finally got OpenStack’s networking working – the hard way. It’s been Educational.

First, because the IP address of the controller system had been changed (hint: changing the config files is just Step 1; you also have to change the URLs in the “services” table of the database). Then, to get the cloud VMs and the outside world talking to each other.

Clouds aren’t simply VMs that can be tossed freely between physical hosts; they’re complete virtual data centers, including networking. So one of the most significant (and troublesome) services that OpenStack offers is Virtual Networking. Get this wrong and not only are your VMs cut off from the world, you may encounter the dreaded “No Suitable Host” error message when (unsuccessfully) attempting to launch a compute instance.

There are 2 different OpenStack networking systems. The older one is Nova networking; the newer one is Neutron. Nova networking is comparatively easy to work with, but it’s not as flexible. This article is about Neutron on CentOS 6/RHEL 6, using the IceHouse RDO release as installed via Packstack. Juno is the “current” release, but it requires CentOS/RHEL version 7, and I had enough grief getting this going without having to use a Magic Decoder Ring just to read logfiles.

In a nutshell

Logical Networks

In OpenStack Neutron, there are 3 logical networks defined by 3 standard OVS bridges:

  1. br-tun, the tunnel bridge, which carries tenant (“internal”) traffic between hosts
  2. br-int, which is the integration bridge, not the “internal” bridge
  3. br-ex, which is the external interfacing bridge.

A useful diagram (and some diagnostic aids) can be found here: http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html
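
You can confirm that the bridges actually exist on the network node with the OVS command-line tool, for example:

ovs-vsctl list-br        # should list br-ex, br-int and br-tun
ovs-vsctl show           # bridges plus their ports, patch links and tunnel ports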

Physical Networks

The physical networks are known as provider networks. The primary link between the cloud network and the actual physical infrastructure is via br-ex.

Configuration

Neutron is predicated on doing much of its work via plugins. The 2 most important plugins for basic networking are the ML2 and OVS (Open vSwitch) plugins. Much of the actual configuration of the logical networks can be done via the API/web management app (Horizon), but certain things are set up in the config files for ML2 and OVS. Most notable are the physical network names, the bridge names, and the bridge-NIC interconnections. More on that later.

Physical NICs and OVS Bridges

One of the areas where the available documentation has been very confusing is where the physical NICs plug into the OpenStack logical bridges. It turns out that there are 2 different ways to do this. One is to have a dedicated bridge per physical NIC – for example, br-eth1. The other is simply to add the NIC directly to the br-ex bridge via the OVS ovs-vsctl command.

Which is better? For a single-NIC connection to the external world, I find it easier/simpler to add my “eth0” to the br-ex bridge. On the other hand, it’s common for enterprise servers to have 2 NICs standard, allowing one for the Internet and one for the backend infrastructure, for example. In this case, adding a br-eth1 and binding the eth1 NIC to it makes sense.
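
As a rough sketch, the two-NIC variant boils down to creating the extra bridge and attaching the backend NIC to it (eth1 and br-eth1 here are just illustrative names):

ovs-vsctl add-br br-eth1          # dedicated bridge for the backend NIC
ovs-vsctl add-port br-eth1 eth1   # attach the NIC to it
# then map it in the OVS plugin config, e.g. bridge_mappings=physnet1:br-eth1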

Since the single-NIC approach is relatively undocumented, I’ll describe it in detail here.
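
Stripped to its essentials, it looks something like the following sketch. It assumes eth0 currently carries 10.0.0.169/24 with a gateway of 10.0.0.1 (substitute your own addresses), and it should be run from the console, since connectivity drops while the address moves from eth0 to br-ex:

ovs-vsctl add-port br-ex eth0        # attach the physical NIC to the external bridge
ip addr del 10.0.0.169/24 dev eth0   # the IP moves off the NIC...
ip addr add 10.0.0.169/24 dev br-ex  # ...and onto the bridge
ip link set br-ex up
ip route add default via 10.0.0.1    # restore the default route via the bridge

To make this survive a reboot on CentOS 6, the usual approach is a matching pair of ifcfg-eth0 and ifcfg-br-ex scripts using the OVSPort/OVSBridge device types that the openvswitch package provides, but the one-off commands above show what is actually going on.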

Other Networks

A physical network is necessary for interfacing your cloud to the outside world, be it the Internet or just the local data center. But one of the distinctive features of a cloud, as opposed to just a bunch of VMs, is the ability to group stuff together. This is done by assigning resources to tenants. A tenant can be a corporate division or department if you’re doing an in-house corporate cloud, or it can be a client if you’re an ISP selling cloud services. Each tenant exists in isolation from the other tenants except where resources have been made shareable.

There’s only one Internet (practically speaking), so it can be mapped onto a flat network and shared between tenants. However, to maintain network isolation between tenants, you need a more complex network infrastructure. Something that not only isolates traffic, but actually allows the IPs within a tenant’s network to be independent of and invisible to any other tenant’s. In short, 192.168.201.2 for tenant A and 192.168.201.2 for tenant B have nothing in common except for the number.
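
As a sketch of what that looks like in practice (the network and subnet names are invented; each pair of commands is run with the respective tenant’s credentials sourced):

neutron net-create private-net
neutron subnet-create private-net 192.168.201.0/24 --name private-subnet

Tenant A and tenant B can both run exactly this, and each ends up with its own, completely isolated 192.168.201.0/24.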

Among the options for multi-tenant isolated networks are:

  • GRE – Generic Routing Encapsulation, a tunnelling protocol (originally from Cisco) that carries each tenant network in its own tunnel.
  • VLAN – The traditional hardware-supported channel separation mechanism.
  • VXLAN – VLAN eXtended. VLANs require physical hardware support and their 12-bit ID allows only 4096 separate networks; VXLAN’s 24-bit VNI supports roughly 16 million, so it’s suitable for a setup like Amazon’s where there are more than 4096 distinct tenants.

Most of the examples given for OpenStack are either GRE- or VXLAN-based, and some show both. Which one(s) you choose is up to you. It’s all virtual, so if a different infrastructure eventually proves more suitable, it should be relatively easy to switch over.

Selecting and configuring a network infrastructure

You can and often will have multiple network infrastructures. For example, a flat infrastructure for the physical network and GRE or VXLAN for the tenant networks.

Setting up multiple infrastructures is fairly easy as long as you know where the major configuration points are. They’re in the ML2 and OVS plugin config files.

ML2 configuration

First and foremost, in order to use an infrastructure, you have to ensure that its associated drivers are loaded. That’s done in the /etc/neutron/plugins/ml2/ml2_conf.ini file:

[ml2]
type_drivers = flat,gre,vxlan
tenant_network_types = flat,gre,vxlan

Note that the driver and tenant selections are both required: one makes the drivers available, the other determines which types tenants are allowed to use. I’ve simply duplicated the selections (no harm done), though I probably shouldn’t have made “flat” a tenant option. The tenant network type “local” is an option for something like DevStack, where the network is all contained on one host.
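
One related setting in the same file is the mechanism driver, which tells ML2 to drive Open vSwitch. Packstack normally fills this in for you; I mention it only because ML2 does nothing useful without it:

[ml2]
mechanism_drivers = openvswitch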

If you select a driver, you need to define its options. For GRE:

[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating
# ranges of GRE tunnel IDs that are available for tenant network allocation
tunnel_id_ranges = 1:1000

That’s because GRE uses a tunnel ID to identify each tenant network, so this range limits how many tenant networks can be allocated.

For VXLAN:

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
vni_ranges = 10:100

A VNI is the VXLAN equivalent of the VLAN network ID or GRE tunnel ID.

If you have a flat physical network, you’ll define an ml2_type_flat section that defines/restricts the physical network names you can assign.
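
For example, something like this (assuming the physical network is named physnet1, to match the bridge_mappings entry shown further down):

[ml2_type_flat]
# (ListOpt) Physical network names over which flat networks can be created.
# "*" would allow any name; listing physnet1 restricts flat networks to it.
flat_networks = physnet1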

OVS Configuration

Once the basic network infrastructure definitions are in place, configure OVS to use them. That’s done in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

The first thing to do is decide what tenant tunnel infrastructure you want. Choose “local” for single-box tunnelling such as DevStack. Choose “vlan”, “gre” or “vxlan” to select one of the multi-box protocols. For example:

tenant_network_type = gre

This selects GRE as the tunnel protocol.

Having done that, add infrastructure-specific options to the OVS configuration. For example:

[OVS]
tenant_network_type = gre
local_ip=10.0.0.169
enable_tunneling=True
integration_bridge=br-int
tunnel_bridge=br-tun
tunnel_id_ranges=1:1000
bridge_mappings=physnet1:br-ex

This is for GRE tunnelling with a flat physical LAN and the physical NIC attached directly to the br-ex bridge, instead of defining a separate br-eth0 or br-eth1 bridge just for the NIC and setting up a virtual link from that bridge to br-ex. The local IP (10.0.0.169) is simply this box’s IP address on the physical NIC, and it’s what the GRE tunnels use as their endpoint.
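
With that in place (and Neutron restarted), the external network itself gets created against physnet1. A hedged example follows; the names, gateway and allocation pool are assumptions, and the 10.0.0.0/24 CIDR simply matches the addressing above:

neutron net-create ext-net --router:external=True \
    --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create ext-net 10.0.0.0/24 --name ext-subnet \
    --disable-dhcp --gateway 10.0.0.1 \
    --allocation-pool start=10.0.0.200,end=10.0.0.220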

Tools

There are various tools that can be used to configure the cloud and provider networks and to set up IP ranges and gateways. Among these are:

  • traditional network utilities (ping, traceroute, ifconfig, route)
  • iproute2 utilities. While the “ifconfig” and “route” commands are sufficient for quick-and-dirty ops, the “ip” commands support 2 critical features missing from the traditional Linux networking tools: virtual device pairs and network namespaces
  • neutron logs and utilities. In particular, ovs-vsctl is your friend.

I’ve found the “ping -R” command to be useful in showing me what the internal routing was.  It’s also good to learn about network namespaces, since they help implement tunnel separation.
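
A couple of namespace commands go a long way (the qrouter UUID is whatever ip netns list reports on your network node, and the 10.0.0.1 gateway here is an assumption):

ip netns list                                      # one qrouter-.../qdhcp-... namespace per virtual router or DHCP server
ip netns exec qrouter-<uuid> ip addr               # the interfaces living inside that router
ip netns exec qrouter-<uuid> ping -c 3 10.0.0.1    # test connectivity from the router's point of view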

There’s no simple command to figure out virtual device pairs, but a quick web search will show some ways of combining commands to find out what’s connected to what. In a properly operating OpenStack virtual network, the device and router names are generated in a way that makes the connections obvious, but there’s no substitute for actually seeing the pairing.
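
One such combination that works for veth pairs is sketched below; the qvb... interface name is just an example of the kind of device you’ll see on a compute node:

ethtool -S qvbxxxxxxxx-xx | grep peer_ifindex    # interface index of the other end of the pair
ip link show                                     # match that index against the numbered interface list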