Hi folks, it’s been a few weeks since I posted anything. It’s been a busy stretch for me, as I’m in the middle of the biggest project I’ve ever led. It has huge implications, and while I’ve dropped some not-so-vague hints, I want to show you one tiny aspect of it today:
Effective use of 10GbE using VLANs and Channel Bonding
Imagine a brand new interstate highway with a speed limit of 120 mph and no painted lanes. That’s what a 10GbE pipe is like without VLANs to segregate the different traffic types: management, storage, internet access, private application traffic, or VDI. It would be a huge mess, difficult to manage, and most likely insecure.
Now imagine that same new interstate with strictly enforced lanes for high-speed commuters, Sunday drivers, interstate commerce, and other assorted road hogs. You get the idea. A freeway works much better when tractor trailers and slow drivers aren’t taking up the fast lane and speed freaks aren’t weaving in and out on their import motorcycles.
Channel Bonding Primer
In Linux, channel bonding is a means of abstracting (virtualizing!) network configurations from the physical NICs for the purposes of fault tolerance, improved throughput, or both. And while there are half a dozen or so different channel bonding modes, we’re going to focus on two of them only: mode 1, active-passive (called active-backup by the bonding driver), and mode 4, aggregation (802.3ad/LACP).
Simply put, mode 1 takes 2 or more physical NICs and “bonds” them into a single logical interface. In mode 1, only 1 link is “active” and the other link(s) are there to take over in case of failure. It’s easy because the network switches don’t need any special configuration.
Like mode 1, mode 4 takes 2 or more physical NICs and “bonds” them into a single logical interface. However, it does so using 802.3ad, the Link Aggregation Control Protocol (LACP), which combines the links into a fatter pipe with all links active. So two 1GbE interfaces become a single 2GbE link, and two 10GbE links become a single 20GbE link. It just takes a little coordination with the networking folks, as the network switch ports need to have LACP enabled on them.
VLAN Primer (With Great Power Comes Great Responsibility)
Holy crap, a 20GbE link??? That’s where the VLANs and ‘great responsibility’ come into play. In its simplest terms, a VLAN is a means of carving a single layer 2 broadcast domain into multiple distinct (isolated!!) broadcast domains. In other words, our 20GbE LACP channel bond can be carved into multiple VLANs, keeping our different traffic types separate and thereby making the best use of our resources. And it adds a little bit of security to boot, because of the isolation.
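If you want to see a VLAN sub-interface in the flesh before we get to the persistent config files below, you can create one by hand with iproute2. This is just an illustration; the ifcfg files later in this post are what make it stick across reboots:

# load the 802.1Q tagging module and add a tagged sub-interface on top of bond0
modprobe 8021q
ip link add link bond0 name bond0.3080 type vlan id 3080
ip link set bond0.3080 up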
Virtual Bridge Primer
A virtual bridge is simply a means of presenting a network device to a virtual machine. That device could be a physical interface, like “eth0”, or it could be a logical device like “bond0” (a channel bond) or “bond0.3080” (a VLAN).
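If you want to poke at one by hand, the bridge-utils tools can create and inspect bridges directly. Again, just an illustration; the ifcfg files later in this post handle it persistently:

# create a bridge and attach the VLAN sub-interface to it
brctl addbr br3080
brctl addif br3080 bond0.3080
# list bridges and their attached interfaces
brctl show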
Visualizing the Concepts
If you’re unfamiliar or relatively inexperienced with Channel Bonding, it’s sometimes easier to visualize what it is that you’re trying to accomplish first. This includes choosing the type of channel bond as well as how many VLANs you will need to start things off with. Here’s a diagram to help out:
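Roughly sketched out in text, the layout looks like this (the two extra VLAN IDs are placeholders for whatever your network team assigns):

Link 1 ---> NIC 1 (eth0) --\
                            +--> bond0 (mode 4, ~20GbE)
Link 2 ---> NIC 2 (eth1) --/
                                 +--> bond0.3080 --> br3080
                                 +--> bond0.<id>  --> br<id>
                                 +--> bond0.<id>  --> br<id>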
“Link 1” and “Link 2” above are the fibre cables that plug into the dual port 10GbE interface represented by “NIC 1” and “NIC 2”. They are then configured as a single channel bond, “bond0”. That bond essentially represents a 20GbE link, so we carve it into 3 VLANs that we immediately turn into virtual bridges.
Now let’s go configure this!
Configure Channel Bonding and 10GbE in RHEL+KVM
1. First we prep our NICs, eth0 and eth1:
[root@infra-host-1 network-scripts]# cat ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="none"
HWADDR="00:00:00:AA:0A:0F"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
UUID="3f466c1b-a9cc-4bad-8729-4582e597dcc9"
SLAVE=yes
MASTER=bond0
MTU=9000
Notice that we added “MASTER” and “SLAVE” variables to the configuration files, and also configured Jumbo Frames. We do the same configuration change to “eth1”.
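For completeness, ifcfg-eth1 ends up looking much the same. The HWADDR and UUID below are placeholders; use the values belonging to your own second port:

DEVICE="eth1"
BOOTPROTO="none"
HWADDR="<MAC address of eth1>"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
UUID="<UUID of eth1>"
SLAVE=yes
MASTER=bond0
MTU=9000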
2. Next, we create our channel bond, bond0:
[root@infra-host-1 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=4 miimon=100"
MTU=9000
This is also where we dictate the channel bond type.
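If you wanted mode 1 (active-backup) instead, say because the switch side can’t do LACP, the only line that would change is the bonding options, something along the lines of:

BONDING_OPTS="mode=1 miimon=100"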
3. Next, we create the VLAN (remember, the VLAN ID has to match what’s configured on the actual network), which also serves as the basis for our virtual bridge:
[root@infra-host-1 network-scripts]# cat ifcfg-bond0.3080
DEVICE=bond0.3080
VLAN=yes
BOOTPROTO=static
ONBOOT=yes
BRIDGE=br3080
MTU=1500
The “.3080” extension and “VLAN=yes” are what make this a VLAN.
Feel free to create additional VLANs the same way for NFS traffic, iSCSI traffic, management traffic, or VM traffic.
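For example, an extra VLAN for NFS traffic might be an ifcfg-bond0.3081 along these lines. The VLAN ID 3081 is purely hypothetical, use whatever your network team has assigned; it would point at its own bridge built exactly like the one in the next step, and since it carries storage traffic we keep the jumbo MTU (assuming the NFS network supports it):

DEVICE=bond0.3081
VLAN=yes
BOOTPROTO=static
ONBOOT=yes
BRIDGE=br3081
MTU=9000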
4. Finally, we give our virtual bridge an IP:
[root@infra-host-1 network-scripts]# cat ifcfg-br3080
DEVICE=br3080
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=172.20.80.45
NETMASK=255.255.255.0
DELAY=0
MTU=1500
Notice that the type here is “Bridge”, not “Ethernet”, and that we forced a normal MTU size. (Running Jumbo Frames on the big pipe leaves the MTU choice up to each individual device.)
5. Restart the network service.
service network restart
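Once the network comes back up, a quick sanity check never hurts, something along these lines:

# confirm the bond is in 802.3ad mode and both slaves are up
cat /proc/net/bonding/bond0
# confirm the bridge exists and has the VLAN sub-interface attached
brctl show br3080
# confirm the bridge answers on its IP and is using the expected MTU
ip addr show br3080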
When you create a VM with RHEL6+KVM, this is what is presented to the guest operating system:
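Under the hood, the VM’s libvirt definition just points its NIC at the bridge. The relevant snippet looks roughly like this, a sketch assuming the br3080 bridge from the example above and a virtio NIC:

<interface type='bridge'>
  <source bridge='br3080'/>
  <model type='virtio'/>
</interface>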
But the VM just sees it as “eth0”….
So that’s my post for the week; keep your eyes out for the next post, as I’ll show you how to do the same thing in RHEV!
Hope this helps,