My last post was about creating channel bonds, virtual bridges, and VLANs in order to make the best use of 10GbE. This time around, we’re going to apply the same logic and planning to a Red Hat Enterprise Virtualization (v3.1 if you’re interested) environment.
If you need a primer on bonding, bridging, and tagging (VLANs), take a peek at the previous article. It won’t take long to read. Otherwise, let’s dig in.
I will tell you that there is a “gotcha” with VLANs in RHEV 3.1, so pay close attention – if you want to use VLANs, then everything has to be a VLAN. This is not a bad way to go, but if you don’t set up the “RHEVM” logical network as a VLAN, you will run into problems when you try to mix VLAN and non-VLAN traffic on the same channel bond. You need to configure the “RHEVM” logical network before the others AND before you create your logical cluster.
By the way, we’re also using the same networking model as we used in the KVM-based article:
I should also tell you that in this tutorial, I already have 3 Logical Networks that represent a RHEV-M/Management VLAN, an NFS only VLAN, and a “Public VLAN” that has access to the internet.
Logical Networks & VLANs
First thing, log into the RHEV web admin portal and select the “Data Centers” tab, then the “Logical Networks” tab, then select “rhevm” and click “Edit”. You’ll see a dialogue box very similar to the graphic below – the difference is that you won’t be blocked from editing.
Next, we’ll give it a description. Select all 3 check boxes, being sure to enter the VLAN that you have already configured on the physical network. I’m forcing 1500 MTU as I already have Jumbo Frames enabled on the ‘real’ network. I don’t need Jumbo Frames on a management network. Finally, select the RHEV Cluster(s) that will have access to the VLAN and click apply, then close.
Next, let’s work on our NFS logical network/VLAN. Select “NFS” and click “Edit”. Again, we’ll check all 3 boxes. Be sure that the VLAN selected matches the NFS VLAN that should already be configured on the physical network. We’ll keep our Jumbo Frame MTU of 9000. We likely don’t need to force Jumbo Frames as the physical network is already configured that way, but I like to be sure. Select the RHEV Cluster(s) that will have access to the VLAN and click apply, then close.
I don’t need to explain how to configure the “Public” VLAN as it uses the same options as “rhevm”, except that it has its own VLAN.
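Once the VLANs are in place end to end, it’s worth proving that Jumbo Frames actually make it across the NFS VLAN without fragmentation. A quick sketch of the check I run from a hypervisor shell – the NFS server IP here is just a placeholder for yours:

```shell
# A 9000-byte MTU leaves room for an ICMP payload of
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
payload=$((9000 - 20 - 8))

# -M do sets "don't fragment"; if this fails but smaller sizes work,
# something in the path isn't really passing Jumbo Frames.
# Run this on the hypervisor against your NFS server:
echo "ping -M do -c 3 -s $payload 192.168.20.10"
```

If the ping fails with “message too long”, re-check the MTU on the bond, the tagged interface, and the switch ports.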
Channel Bonding & Virtual Bridges
The channel bonding is configured on the hypervisors. Move to the “Hosts” tab and select a hypervisor. From there, select the “Network Interfaces” tab and then click on “Setup Host Networks”. Right-click on “eth1” and select bond with “eth0”. Now the channel bond is configured. Hover the cursor over the far right of the “bond0” box and a small pencil icon will appear – click it. You will see a dialogue box like the graphic below. This is where you specify the bonding mode to be used.
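For reference, what you build in that dialogue ends up on the hypervisor as ordinary ifcfg files, roughly like the following. Mode 4 (802.3ad/LACP) is just an example here – use whatever mode your switches support:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative)
DEVICE=bond0
BONDING_OPTS="mode=4 miimon=100"
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (eth1 looks the same)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```

You can confirm the active mode and slave states at any time with `cat /proc/net/bonding/bond0`.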
Click “OK”, then drag and drop each of the logical networks to the right of “bond0”. This is what creates the virtual bridges that are eventually presented to the virtual machines. Hover the cursor just to the left of the small green “VM” icon on one of the logical networks – click it. Assign IP information to that particular interface.
Repeat the last 2 steps (Hover & Assign) for each logical network and the virtual bridging is complete.
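Behind the scenes, each logical network you dropped onto “bond0” becomes a tagged sub-interface enslaved to a bridge. The hypervisor-side result looks roughly like this – the VLAN tag “20” and the IP addressing are just my examples:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0.20 (NFS VLAN; "20" is illustrative)
DEVICE=bond0.20
VLAN=yes
BRIDGE=NFS
MTU=9000
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-NFS
DEVICE=NFS
TYPE=Bridge
MTU=9000
IPADDR=192.168.20.11
NETMASK=255.255.255.0
ONBOOT=yes
```

`brctl show` on the hypervisor will list each bridge with its bond0.&lt;VLAN&gt; member, plus the vnet interfaces of any running VMs.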
At this point, the right hand side of the “Logical Networks” on the hypervisor will look like this:
And this is what those virtual bridges look like to a VM in RHEV – in this case we’re using 3 “public” VLANs, an NFS VLAN, and a RHEV-M (management VLAN). And don’t worry, I removed the internet access from the Public VLANs before I deployed the Oracle VMs that you see below… 🙂
So again, that’s it. It’s fairly straightforward. And yes, it might be a lot of “point and click”, but there is still the RESTful API or the Python SDK that you could use to automate it… Hmmm, maybe another blog post idea…
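To give a taste of the automation angle: logical networks live under the `/api/networks` collection in the RHEV 3.1 REST API, so creating a tagged network can be scripted with curl. This is only a sketch – the hostname, credentials, data center name, and VLAN id are all placeholders, and depending on your version you may need to reference the data center by UUID rather than name:

```shell
# Build the XML payload for a new tagged logical network.
body='<network>
  <name>Public</name>
  <data_center><name>Default</name></data_center>
  <vlan id="100"/>
</network>'

echo "$body"

# Against a real RHEV-M, something along these lines (uncomment to run):
# curl -k -u 'admin@internal:password' \
#      -H 'Content-Type: application/xml' \
#      -d "$body" https://rhevm.example.com/api/networks
```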
Hope this helps,