Maximizing your 10GB Ethernet in RHEV

Hi folks,

My last post was about creating channel bonds, virtual bridges, and VLANs in order to make the best use of 10GbE. This time around, we’re going to apply the same logic and planning to a Red Hat Enterprise Virtualization (v3.1 if you’re interested) environment.

If you need a primer on bonding, bridging, and tagging (VLANs), take a peek at the previous article. It won’t take long to read. Otherwise, let’s dig in.

I will tell you that there is a “gotcha” with VLANs in RHEV 3.1, so pay close attention – if you want to use VLANs, then everything has to be VLANs. This is not a bad way to go, but if you don’t set up the “RHEVM” logical network as a VLAN, you will run into problems as you try to mix VLAN and non-VLAN traffic on the same channel bond. You need to configure the “RHEVM” logical network before the others AND before you create your logical cluster.

By the way, we’re also using the same networking model as we used in the KVM-based article:

[Diagram: RHEV-H FlexPod network layout (v3)]

I should also tell you that in this tutorial I already have 3 Logical Networks: a RHEV-M/management VLAN, an NFS-only VLAN, and a “Public” VLAN that has access to the internet.

Logical Networks & VLANs

First thing, log into the RHEV web admin portal and select the “Data Centers” tab, then the “Logical Networks” tab, then select “rhevm” and click “Edit”. You’ll see a dialogue box very similar to the graphic below – the difference is that you won’t be blocked from editing.

[Screenshot: “rhevm” logical network VLAN settings]

Next, we’ll give it a description. Select all 3 check boxes, being sure to enter the VLAN ID that you have already configured on the physical network. I’m forcing an MTU of 1500 because Jumbo Frames are already enabled on the ‘real’ network and I don’t need them on a management network. Finally, select the RHEV Cluster(s) that will have access to the VLAN and click Apply, then Close.

Next, let’s work on our NFS logical network/VLAN. Select “NFS” and click “Edit”. Again, we’ll check all 3 boxes. Be sure that the VLAN ID selected matches the NFS VLAN that should already be configured on the physical network. We’ll keep our Jumbo Frames with an MTU of 9000. We likely don’t need to force Jumbo Frames since the physical network is already configured that way, but I like to be sure. Select the RHEV Cluster(s) that will have access to the VLAN and click Apply, then Close.

[Screenshot: “NFS” logical network VLAN settings]

I don’t need to explain how to configure the “Public” VLAN, as it uses the same options as “rhevm”, except that it has its own VLAN ID.
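
By the way, once the host networking in the next section is in place, it’s worth a quick sanity check that Jumbo Frames really do work end to end on the NFS VLAN. A don’t-fragment ping sized for a 9000 MTU from a hypervisor does the trick – the address below is just a placeholder for your NFS server:

    # 8972 = 9000 byte MTU - 20 (IP header) - 8 (ICMP header)
    ping -M do -s 8972 -c 3 192.168.50.10

If the path MTU is smaller anywhere along the way, the ping fails instead of silently fragmenting.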

Channel Bonding & Virtual Bridges

The channel bonding is configured on the hypervisors. Move to the “Hosts” tab and select a hypervisor. From there, select the “Network Interfaces” tab, then click “Setup Host Networks”. Right-click “eth1” and select the option to bond it with “eth0”. Now the channel bond is configured. Hover the cursor over the far right of the “bond0” box and a small pencil icon will appear – click it. You will see a dialogue box like the graphic below. This is where you specify the bonding mode to be used.

[Screenshot: bond configuration dialogue]

Click “OK”, then drag and drop each of the logical networks to the right of “bond0”. This is what creates the virtual bridges that are eventually presented to the virtual machines. Hover the cursor just to the left of the small green “VM” icon on one of the logical networks – click it. Assign IP information to that particular interface.

[Screenshot: Setup Host Networks]

Repeat the last 2 steps (Hover & Assign) for each logical network and the virtual bridging is complete.
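
If you’re curious what all of that point-and-click actually does, VDSM persists it as plain ifcfg files on the hypervisor. The exact contents vary a bit by version, but the bond/VLAN/bridge stack for the NFS logical network ends up looking roughly like the sketch below – treat the VLAN ID, IP address, and bonding options as examples rather than gospel:

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (the channel bond)
    DEVICE=bond0
    BONDING_OPTS="mode=4 miimon=100"    # whatever mode you chose in the dialogue
    ONBOOT=yes
    MTU=9000

    # /etc/sysconfig/network-scripts/ifcfg-bond0.3081  (tagged VLAN on the bond)
    DEVICE=bond0.3081
    VLAN=yes
    BRIDGE=NFS
    ONBOOT=yes
    MTU=9000

    # /etc/sysconfig/network-scripts/ifcfg-NFS  (the virtual bridge the VMs use)
    DEVICE=NFS
    TYPE=Bridge
    IPADDR=192.168.50.21
    NETMASK=255.255.255.0
    ONBOOT=yes
    MTU=9000

    # (eth0 and eth1 each get a small ifcfg of their own with MASTER=bond0 and SLAVE=yes)

In other words, nothing magical – it’s the same bond/VLAN/bridge stack from the previous article, just written out for you by VDSM.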

At this point, the right-hand side of the “Logical Networks” on the hypervisor will look like this:

[Screenshot: logical networks and VLANs on the hypervisor]


And this is what those virtual bridges look like to a VM in RHEV – in this case, we’re using 3 “public” VLANs, an NFS VLAN, and a RHEV-M (management) VLAN. And don’t worry, I removed internet access from the public VLANs before I deployed the Oracle VMs that you see below… 🙂

[Screenshot: VM network interfaces and VLANs]

So again, that’s it. It’s fairly straightforward. And yes, it might be a lot of “point and click”, but there’s always the RESTful API or the Python SDK that you could use to automate it… Hmmm, maybe another blog post idea…
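
Just to whet your appetite, here’s a rough sketch of what that could look like with the RHEV 3.x Python SDK (ovirt-engine-sdk-python) – creating a tagged logical network and attaching it to a cluster. The URL, credentials, names, and VLAN ID are placeholders, and you’ll want to double-check the parameter names against the SDK version you actually have installed:

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    # Placeholders: point these at your own RHEV-M
    api = API(url='https://rhevm.example.com/api',
              username='admin@internal',
              password='secret',
              insecure=True)    # or ca_file='/path/to/rhevm-ca.pem'

    dc = api.datacenters.get(name='Default')

    # Define a logical network "NFS" tagged with VLAN 3081 and a 9000 MTU
    api.networks.add(params.Network(name='NFS',
                                    data_center=dc,
                                    vlan=params.VLAN(id=3081),
                                    mtu=9000))

    # Attach the new logical network to a cluster so the hosts can use it
    cluster = api.clusters.get(name='Default')
    cluster.networks.add(api.networks.get(name='NFS'))

    api.disconnect()

The same thing can be done with plain curl against the RESTful API if Python isn’t your thing.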

Hope this helps,

Captain KVM


6 thoughts on “Maximizing your 10GB Ethernet in RHEV”

  1. Hi Captain! 🙂

    Your article is a few months old, but I’m just starting with oVirt now, and working through physical versus logical network configuration myself, and your article was very helpful putting it all together! oVirt, of course, is similar enough in configuration to RHEV in this respect so your examples apply equally well to my environment.

    I have a somewhat different network configuration, and I’ve been scouring the web, trying to find examples to see if I’m going about the logical network mapping in the correct way. Unfortunately, while I’m sure my circumstances exist, I haven’t quite found another example quite like it. Your feedback (or the feedback of others reading your blog) would be helpful to me.

    I’m just moving to oVirt from having no virtualization at all. Right now, in our machine room all servers are on a VLAN managed by our network operations group on a Cisco switch that we do not control, in a riser room for which we do not have access. The switch ports are cabled to our machine room, they are all active, and all on “our” VLAN. We simply plug in servers, assign an IP, and we’re good to go. All ports are 1 Gb speed and provide for external/public access.

    In addition to the “building switch”, we have our own 48 port 1 Gb switch that sits in our machine room. This is for a private network between all servers in the room. It’s a basic switch (a Dell PowerConnect), and we don’t configure any VLANs at all.

    Right now, there’s at least two NICs on every server in our machine room – one on the public VLAN allocated to us (a /24) , and another on our “machine room private network” (a different /24). Where my situation, and your example start to differ is here because I have two totally separate switches, and you have one. Do the VLANs that you define in RHEV need to match VLANs as defined on the switches themselves? You don’t specifically mention switch configuration other than configuring bonding.

    In the oVirt world that I’m designing, each node will have 4 network ports – two 10 GbE copper, and two 1 GbE copper. There will probably only be from 4 to 6 nodes. The way that I planned to configure the network on each node: 1 x 1 GbE port goes to our machine room private network switch, which would be used to give each virtual host access to our machine room private network. This would also serve as the oVirt mgmt/display network. The next 2 x 1 GbE ports (yes, one of the 10GbE copper ports would be used as 1 GbE) would be used to bond two of the public interfaces into a 2 GbE link (as you described very well in your article). For this, the networking team would have to enable 802.3ad on the ports in question, and I’ve asked, and they can do that for me. Finally, the last 10GbE port would be used to provide a storage network between our NFS server and the nodes. Where does 10GbE come in? Again, we will buy a separate 12-port 10GbE copper switch. Initially, I was considering using this link for just storage. However, since oVirt 3.3, to be released in the near future, should include the ability to tag a logical network to be used for migration purposes, I think I’d use the 10 GbE link for a storage AND migration network. Mgmt and display traffic are relatively minimal, but migration traffic might be heavier, which could use the extra capacity of the 10 GbE link.

    In your situation, you have 1 switch with 3 separate VLANs. In my case, I have 3 separate switches – 1 where all the ports are on one VLAN, and the other two where I haven’t defined VLANs at all. The question is – does it matter? Do I need to configure VLANs here? Would they help me in some way? Your previous article said, “Imagine a brand new interstate highway with a speed limit of 120mph and no painted lanes. This is what a 10GbE pipe is like without VLANs to segregate different traffic types like management, storage, internet access, private application, or VDI. It would be a huge mess, difficult to manage, and most likely insecure.” Does this really apply to my single 10GbE link used for storage and the odd migration?

    One place where oVirt and RHEV make things a little tricky in my situation is that when I give the public network link a static address, the configuration tab does not let me define a GATEWAY! This results in the nodes being able to talk to each other, and to other servers on the same VLAN, but not to anyone else! If you configure the interface with DHCP, you get a gateway, or you have to hardcode the result in the node configuration, but that would probably be deleted during an upgrade, so it’s sort of hacky. As I understand it, the lack of an option to specify a gateway may be by design. I suspect that if everything is managed by one switch, you have multiple VLANs, but only one gateway?

    Part of the reason we will have 2 switches in our machine room in addition to the building switch has everything to do with cost. The basic 48 port 1 Gb switch is quite cheap in comparison to a more “enterprise” model, and the 12 port 10GbE switch probably costs less than what it would cost to have a single 10GbE port on the Cisco switch 😉 Even if we had 1 switch in our machine room, we’d still have a separate building switch anyway.

    Thanks for any help that you can provide.

    PS: In your diagram, guest OS in both articles shows two eth1 – is that an error?

    1. Hi Roo,

      Sorry for the delayed response.. I’ve been out traveling for both work and pleasure, but I’m back now. You win the prize for ‘longest comment/question’. 🙂

      Because of the length, I’ve opted to copy/paste your post in my reply, so that I am answering your paragraphs one by one. You’ll see my replies as “ck >”.

      Your article is a few months old, but I’m just starting with oVirt now, and working through physical versus logical network configuration myself, and your article was very helpful putting it all together! oVirt, of course, is similar enough in configuration to RHEV in this respect so your examples apply equally well to my environment.

      ck > Yes, the commands and buttons are the same. oVirt will likely be ahead in package versions, but the admin capabilities are the same.

      I have a somewhat different network configuration, and I’ve been scouring the web, trying to find examples to see if I’m going about the logical network mapping in the correct way. Unfortunately, while I’m sure my circumstances exist, I haven’t quite found another example quite like it. Your feedback (or the feedback of others reading your blog) would be helpful to me.

      ck > I’ll see what I can do answer-wise and hopefully some of my other readers will chime in..

      I’m just moving to oVirt from having no virtualization at all. Right now, in our machine room all servers are on a VLAN managed by our network operations group on a Cisco switch that we do not control, in a riser room for which we do not have access. The switch ports are cabled to our machine room, they are all active, and all on “our” VLAN. We simply plug in servers, assign an IP, and we’re good to go. All ports are 1 Gb speed and provide for external/public access.

      ck > having no access to the switch or the room may not be an issue if you have the ability to make requests.. I’m assuming the switch and room are managed by a different group..

      In addition to the “building switch”, we have our own 48 port 1 Gb switch that sits in our machine room. This is for a private network between all servers in the room. It’s a basic switch (a Dell PowerConnect), and we don’t configure any VLANs at all.
      Right now, there’s at least two NICs on every server in our machine room – one on the public VLAN allocated to us (a /24) , and another on our “machine room private network” (a different /24). Where my situation, and your example start to differ is here because I have two totally separate switches, and you have one. Do the VLANs that you define in RHEV need to match VLANs as defined on the switches themselves? You don’t specifically mention switch configuration other than configuring bonding.

      ck > yes, the VLANs need to match end-to-end. Your VM Ethernet ports should ‘just work’, but the physical interfaces on the hypervisors and the switch ports need to have the same VLAN.
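
      ck > A quick way to double-check which VLAN IDs a hypervisor actually has configured (so you can compare against what the switch port is trunking) is to look at the 8021q table – something like this:

          # on the hypervisor: each line lists a VLAN device, its VLAN ID,
          # and the parent interface it rides on
          cat /proc/net/vlan/config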

      In the oVirt world that I’m designing, each node will have 4 network ports – two 10 GbE copper, and two 1 GbE copper. There will probably only be from 4 to 6 nodes. The way that I planned to configure the network on each node: 1 x 1 GbE port goes to our machine room private network switch, which would be used to give each virtual host access to our machine room private network. This would also serve as the oVirt mgmt/display network. The next 2 x 1 GbE ports (yes, one of the 10GbE copper ports would be used as 1 GbE) would be used to bond two of the public interfaces into a 2 GbE link (as you described very well in your article). For this, the networking team would have to enable 802.3ad on the ports in question, and I’ve asked, and they can do that for me. Finally, the last 10GbE port would be used to provide a storage network between our NFS server and the nodes. Where does 10GbE come in? Again, we will buy a separate 12-port 10GbE copper switch. Initially, I was considering using this link for just storage. However, since oVirt 3.3, to be released in the near future, should include the ability to tag a logical network to be used for migration purposes, I think I’d use the 10 GbE link for a storage AND migration network. Mgmt and display traffic are relatively minimal, but migration traffic might be heavier, which could use the extra capacity of the 10 GbE link.

      ck > Using a 10GbE port as a 1GbE link is a huge waste. If you’re getting a 10GbE switch, it will be capable of VLAN tagging, and there is no issue with starting off with 1 VLAN – you can always add more as you grow. But dedicating a 10GbE link to nothing but migration isn’t sound. Also, bonding a 1GbE interface with a 10GbE interface isn’t a great idea either; you generally want to bond “like” interfaces. In your scenario, you probably want to just stick with your public traffic (the switch you don’t control), the private switch (the one you do control), and your 10GbE switch. The private switch can handle all of your private application data, your management traffic, and your migration traffic. Your 10GbE switch is the right choice for storage. If you want more bandwidth for your private traffic, put it all on the 10GbE network – just use VLANs.

      In your situation, you have 1 switch with 3 separate VLANs. In my case, I have 3 separate switches – 1 where all the ports are on one VLAN, and the other two where I haven’t defined VLANs at all. The question is – does it matter? Do I need to configure VLANs here? Would they help me in some way? Your previous article said, “Imagine a brand new interstate highway with a speed limit of 120mph and no painted lanes. This is what a 10GbE pipe is like without VLANs to segregate different traffic types like management, storage, internet access, private application, or VDI. It would be a huge mess, difficult to manage, and most likely insecure.” Does this really apply to my single 10GbE link used for storage and the odd migration?

      ck > yes, your VLANs matter, as I described earlier. And yes, it applies to your 10GbE switch. You expect to have fewer than 10 hypervisors to start. VLANs are not only good for security, they also make it much easier to troubleshoot. If you don’t plan well now, it will only get worse as you grow.

      One place where oVirt and RHEV make things a little tricky in my situation is that when I give the public network link a static address, the configuration tab does not let me define a GATEWAY! This results in the nodes being able to talk to each other, and to other servers on the same VLAN, but not to anyone else! If you configure the interface with DHCP, you get a gateway, or you have to hardcode the result in the node configuration, but that would probably be deleted during an upgrade, so it’s sort of hacky. As I understand it, the lack of an option to specify a gateway may be by design. I suspect that if everything is managed by one switch, you have multiple VLANs, but only one gateway?

      ck > Think of it this way: you probably don’t want your hypervisor nodes to have much access to anything but RHEV-M and storage. You don’t want your hypervisors to have internet access, and you likely don’t want outsiders to have access to your hypervisors. What else do you want your nodes to have access to?

      Part of the reason we will have 2 switches in our machine room in addition to the building switch has everything to do with cost. The basic 48 port 1 Gb switch is quite cheap in comparison to a more “enterprise” model, and the 12 port 10GbE switch probably costs less than what it would cost to have a single 10GbE port on the Cisco switch 😉 Even if we had 1 switch in our machine room, we’d still have a separate building switch anyway.

      ck > yeah, cost is a bitch for everyone, even the Captain…

      Thanks for any help that you can provide.
      PS: In your diagram, guest OS in both articles shows two eth1 – is that an error?

      ck > yes, it is in fact an error… it should be eth0, eth1, and eth2.. but you probably know that. (good catch.)

      hope this helps,

      Captain KVM

    2. I’m coming in a little late to the conversation, but I recently ran into a similar issue and thought I would comment. This past week, I needed to add a gateway entry for one of my defined logical networks in RHEV, but as Roo pointed out, this currently cannot be done within the RHEV-M management interface. One upcoming RHEV 3.3 feature was mentioned above – the ability to define a dedicated migration network. Interestingly enough, an additional feature coming in RHEV 3.3 is the ability to define gateways for logical networks, if so desired – see page 42 of Andy Cathrow’s presentation from this year’s Red Hat Summit: http://rhsummit.files.wordpress.com/2013/06/cathrow_thu_450_rhev.pdf

      As Roo outlined, the only way to do this currently is manually. It’s a little easier for thick RHEL hypervisors, as you don’t have to deal with the extra step of persisting changes like you do on thin RHEV-H installs. In either case, what I did was utilize iproute2 functionality. Here are a couple articles outlining how to configure multiple “default” routes:
      http://kindlund.wordpress.com/2007/11/19/configuring-multiple-default-routes-in-linux/
      http://www.rjsystems.nl/en/2100-adv-routing.php
      After performing a similar configuration as mentioned in the above two articles, all of my routing issues have been taken care of.
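
      For anyone who doesn’t want to chase the links, the gist is a second routing table plus a source-based rule – roughly something like this, with made-up addresses and table name:

          # give the extra "default" route its own routing table
          echo "200 public" >> /etc/iproute2/rt_tables

          # default route for that table, out of the public-facing bridge
          ip route add default via 192.0.2.1 dev public table public

          # anything sourced from the public address uses that table
          ip rule add from 192.0.2.50/32 table public
          ip route flush cache

      Just remember, as noted above, on a thin RHEV-H install you still have to persist those changes.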

      Thanks for making this blog and forum available, Captain…great stuff!

      1. Hey Mike,

        Thanks for dropping by, and a HUGE thank you for adding to the conversation. You are spot on with everything, and thanks for the kind words. I ~think~ (not positive here) that the focus going forward will be on the thick hypervisor. Red Hat is supposedly working on a way to custom build thin hypervisors so that customers can add in additional plug-ins.. but that seems like a huge mess to me when you can simply strip things down in a thick hypervisor. Don’t get me wrong, I love the auto-config of iptables, selinux, and tuning for virtualization, but the same can be done with templates, chef, puppet, etc.

        Back to your comment about gateways… I found that adding a gateway to storage networks in RHEV sometimes hoses things up, so I definitely like the ability to add one (or not) as I see fit.

        thanks again,

        Captain KVM

  2. Hi Captain!

    Thanks for your responses. I hope you had a nice vacation!
    I didn’t intend for my response to be so BIG 🙂 I’ll try to keep this one a tiny bit shorter! 😉 (seems like it didn’t work!)

    As for VLANs – my confusion over the VLANs in your picture (3080/3081/3084) comes from a lack of experience with VLANs on my part, I guess. That’s why we have a networking department! 🙂 It seems I had too basic a knowledge of VLANs! I was under the impression that switch ports are assigned to VLANs, NICs are plugged into those switch ports, only devices on the same VLAN talk to each other (unless traffic is routed between VLANs), and you never need to tell the system what VLAN it’s on. Simple! Through your example, I can see that VLANs can get more complicated… in your case, the dual-port 10GbE card is accepting a link carrying multiple VLANs. How would the two incoming ports be configured on the switch? As “trunk” links?

    I’m puzzled by your note about not giving the hypervisors access to anything other than the management network and storage. I’m sure I’m reading this wrong or misunderstanding the terminology. Every VM on every node needs to have access to the machine room private network, and at least half of my VMs need a public address (e.g. web servers), but for high availability reasons, any node should be able to take over another node’s VMs, so all the nodes need access to all the networks. If my hypervisors don’t have access to both the public network and the machine room private network, then how would I go about creating VMs there with those requirements?

    I wasn’t going to bond 1 GbE with 10 GbE… I’ve actually modified my thoughts a bit since my first post – on each node, I’d have 6 network ports: 2 x 1 GbE bonded for public network access, 2 x 1 GbE bonded for the machine room private network, then split the 12 port 10GbE switch into two VLANs and assign 6 ports to storage and 6 ports to VM management for simplicity. I’d love to just replace the machine room private network with 10GbE and use it for both the machine room private network and virtual machine management, but with only 12 ports, it’s not enough, and I don’t have the funds to buy a bigger switch. Maybe I could somehow link the switches – one is a Dell PowerConnect 2724 and the other a Netgear XS712T – but the link between them is likely to be too underpowered.

    Thanks!

    1. Hi Roo,

      Yes, you can create trunks on the switch for your 10GbE physical links. As for node access vs vm access, think of this simple example:
      Imagine a server with 2 Ethernet ports. One is used by the hypervisor, and it only reaches RHEV-M and the other hypervisors. The other Ethernet port is configured as a virtual bridge, and all VM VLANs go through that bridge. The bridge itself does not have to have a default gateway; it just needs to be plugged into a switch port that is configured for all of the necessary VLANs. Let’s take this a step further by creating VLANs on the virtual bridge… if the underlying interface is eth1, the VLAN device would be eth1.3080 (or whatever VLAN), with the bridge riding on top of it. This makes it very easy to expand: as you need more networks, you simply add a new VLAN to the vbridge and the physical switch. No disruption.
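
      For reference, the switch side of one of those trunk ports looks something like this in Cisco-speak – the VLAN IDs are just the ones from my diagram, and the exact syntax will vary by switch:

          ! hypothetical Cisco-style config for the port facing a hypervisor
          interface TenGigabitEthernet1/0/1
           switchport mode trunk
           switchport trunk allowed vlan 3080,3081,3084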

      Again, the hypervisors themselves only need to talk to RHEV-M and each other.

      I like your revised network plan, except for one thing: your VM management traffic will work just fine on the 1GbE link. And I don’t think (happy to be wrong) you can link your Dell and Netgear switches.

      Captain KVM
