Using RHEV 3.6 (Bonding Host Interfaces)

Hi folks! In one of my last posts, I hinted at bonding the underlying physical interfaces as part of the overall solution for providing full High Availability. The hint wasn’t really about HA itself; it was a teaser for this post. HA is still very much a part of the modern data center, especially when you’ve got applications that users depend on. And as with just about everything in RHEV, configuring bonded interfaces is straightforward.

Let’s get started.

Once you’re logged in, select the “Hosts” tab, then the “Network Interfaces” tab below, and then briefly review the listed network interfaces. Figure out which interfaces are onboard, which are PCI, etc. Once you know what you want to do, click on “Setup Host Networks” to launch the dialog for bonding interfaces.
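If you’d rather double-check which port is which from the shell first, here’s a quick sketch (run on the hypervisor itself; interface names will vary) that lists each physical NIC with its kernel driver and PCI address, which makes it easy to tell onboard ports from add-in cards:

```shell
# List each NIC that has a backing physical device, with its driver
# and PCI bus address. Virtual interfaces (lo, bridges) are skipped.
for dev in /sys/class/net/*/device; do
  [ -e "$dev" ] || continue               # skip if no physical devices matched
  iface=$(basename "$(dirname "$dev")")   # e.g. eth0
  driver=$(basename "$(readlink -f "$dev/driver")")
  bus=$(basename "$(readlink -f "$dev")") # e.g. 0000:02:00.0
  printf '%-12s driver=%-12s bus=%s\n' "$iface" "$driver" "$bus"
done
```

Matching driver names (e.g. all `igb` or all `bnx2`) is a quick sanity check that you’re pairing like with like.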

It goes without saying, but I’ll say it anyway: you’ll want to bond only like interfaces. You don’t want to bond a 1GbE interface with a 10GbE interface. If you can avoid bonding an Intel interface with a Broadcom interface, that would also be preferable. Performance is more predictable and troubleshooting is much easier when the bonded interfaces match.

Once you determine which interface you want to start with, right click on that interface, then select “bond with” and choose the interface that it will be bonded with. While RHEV supports all of the kernel bonding modes, logical networks that carry VM traffic are limited to modes 1 (active-backup), 2 (balance-xor), 4 (802.3ad link aggregation), and 5 (balance-tlb, adaptive transmit load balancing). Once the interface is bonded, any and all available logical networks can be dragged and dropped onto the bonded interface. NOTE: mode 4 will require that the corresponding switch ports also be configured for “LACP”.
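For reference, those dialog choices map straight onto the kernel bonding driver’s mode numbers. A sketch of the mapping, plus the kind of string you might enter if you pick the dialog’s “Custom” bonding option instead (the `miimon` value here is a common default for link monitoring, not something RHEV mandates):

```shell
# Bonding modes RHEV allows on logical networks carrying VM traffic:
#
#   mode=1  active-backup   # failover only; no switch config needed
#   mode=2  balance-xor     # XOR-hash load balancing
#   mode=4  802.3ad         # LACP; switch ports must be in an LACP LAG
#   mode=5  balance-tlb     # adaptive transmit load balancing
#
# Example "Custom" bonding options string:
mode=4 miimon=100
```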

From there, all that’s left is to configure the IP information. Click the “pencil” icon on the bond, and from that point it’s identical to configuring a non-bonded interface.
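Under the hood, what you set in the dialog gets persisted to the usual initscripts files on the host. A rough sketch of what the bond’s ifcfg file might look like afterwards (device name, address, and options here are examples only; note that for bridged VM networks the IP actually lands on the bridge device rather than on the bond itself):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch, values are examples)
DEVICE=bond0
BONDING_OPTS='mode=4 miimon=100'
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
ONBOOT=yes
```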

Let’s check out the demo (best viewed in full screen):

A good strategy for hypervisor interfaces is to use the onboard 1GbE interfaces for management traffic and the 10GbE interfaces for everything else. RHEV lets you assign the management role to whichever logical network you need. Then bond the underlying physical interfaces and carve everything up into VLANs.

Using this recommendation, not only does “storage” get its own traffic, but different RHEV data centers get their own storage VLANs. So maybe group “X” gets NFS on VLAN 110, iSCSI on 111, management on 112, primary VM traffic on 113, and private application traffic on 114. Then group “Y” gets NFS on 210, iSCSI on 211, management on 212, and so on and so forth…
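On the host side, each of those tagged logical networks simply ends up as a VLAN sub-interface stacked on top of the bond. A sketch for group “X”’s hypothetical NFS network on VLAN 110 (device name and VLAN ID are just the examples from above):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0.110 (sketch)
# VLAN 110 riding on bond0 -- group X's NFS storage network
DEVICE=bond0.110
VLAN=yes
ONBOOT=yes
```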

Hope this helps,

Captain KVM

Agree? Disagree? Something to add to the conversation?