High Availability for Red Hat Virtualization Manager 4.1

Hi folks, if you missed Red Hat Summit 2017 last week, it was a great time in Boston. As promised, I’m uploading my presentation on HA for RHV-M 4.1 – hosted engine. I’m doing it a little differently this time, though: I took the time this week to actually re-record it, demos included, so you get a flavor of how I presented it last week. The re-recording turned out a little shorter, clocking in at about 30 minutes, where my live session ran about 10 minutes longer. But it’s all good. I walk through what hosted engine is, how it compares to a standard deployment, why you should care if RHV-M goes down, and how to actually deploy hosted engine.

The embedded demos walk through the deployment of RHVH, the deployment of hosted engine via Cockpit, and then a forced failover courtesy of a guest Velociraptor. Ok, not really – I just yanked the power on the underlying host. But watch the demo anyway.
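If you want to follow along with the failover portion in your own lab, the hosted-engine CLI on any HA host will show you the engine VM’s state and each host’s HA score. A minimal sketch (output varies with your environment, and yanking power should stay a lab-only trick):

```shell
# On any hosted-engine HA host, show the engine VM state and host scores.
hosted-engine --vm-status

# For *planned* maintenance (as opposed to pulling the plug), pause and
# resume HA monitoring across the cluster:
hosted-engine --set-maintenance --mode=global   # pause HA monitoring
hosted-engine --set-maintenance --mode=none     # resume HA monitoring
```

With maintenance mode set to `none`, killing the host running the engine VM triggers exactly the failover shown in the demo.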

(Best viewed in full screen; give it a moment to come into focus.)

One of the things that I really tried to emphasize in both the original presentation and the re-recording is that while hosted engine is a great solution, your end use case should determine whether or not it’s the best layout for your particular environment.

As always, your comments and questions are welcome!

hope this helps,

Captain KVM

9 thoughts on “High Availability for Red Hat Virtualization Manager 4.1”

    1. Hi Lisa,

      It *used* to be that if you needed optimal performance out of RHV-M, we would tell you to put the Postgres database on a separate host. However, there have been many, many improvements over the last few releases such that we’re very confident in leaving the database as part of the deployment. Beyond that, look at things in this order for the RHV-M appliance (and virtualized applications in general):
      Amount of memory
      Storage performance
      Network bandwidth/performance
      CPU speed

CPU speed is (most of the time) the lowest priority for “general” virtualization. Yes, there are certainly CPU-bound apps and workloads, but by and large memory is the big deal for virt, then storage. Keeping traffic flowing on the network is just as important, so don’t be afraid to go crazy with VLANs. If you can afford 10GbE, do it; otherwise, look at LACP for 1GbE. And keep your management traffic separate from VM traffic, and your storage traffic separate from everything.
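As a rough sketch of that LACP suggestion, on RHEL 7 you can build an 802.3ad bond with nmcli. The NIC names (em1/em2) are placeholders for whatever your hosts actually have, and the switch ports must be configured for LACP as well:

```shell
# Create an 802.3ad (LACP) bond profile.
nmcli con add type bond con-name bond0 ifname bond0 \
      bond.options "mode=802.3ad,miimon=100"

# Enslave two 1GbE NICs (em1/em2 are placeholders for your interface names).
nmcli con add type ethernet con-name bond0-port1 ifname em1 master bond0
nmcli con add type ethernet con-name bond0-port2 ifname em2 master bond0

# Activate the bond.
nmcli con up bond0
```

From there you’d layer your VLANs and bridges on top of bond0, keeping management, VM, and storage traffic on separate VLANs as described above.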

      hope this helps,

      Captain KVM

    1. Hi Frank,

It’s been a few years since I did anything with UCS. There is an official kbase article for RHEV 3, but it should not be any different for RHV 4, and that article also has links to the official Cisco docs. The hosted-engine setup itself shouldn’t be any different, as VM-FEX is really the “special sauce” for the VMs.

      hope this helps,

      Captain KVM

1. Since VM-FEX requires some special tweaks to the libvirt network definitions and the domxml of the guest, I’d venture to say VM-FEX support for hosted engine would be a good RFE to request. It’s not as trivial as setting up yet another bridge, so some work would probably have to be invested in making this happen. The VM-FEX vdsm hook documentation describes the steps needed pretty well.

  1. Hi Jon,

I found the following article online from a few years ago, but it is likely outdated now:
Red Hat Enterprise Virtualization 3 and NetApp Storage: Best Practices Guide (2012)

Where could I find similar information for the current versions of RHEV and oVirt? The main interest is in storage configuration and performance estimates. Could you or a qualified RH rep assist? We’re building out HPC resources with administrative services on KVM and planning an OpenStack deployment. Your expertise could be very helpful.

    Thank you

    1. Hi Kevin,

Thanks for reaching out. I created that particular document while I still worked at NetApp, and you are correct that it is quite outdated at this point. RHEV 3 came out in January of 2012, and the entire 3.x line goes into retirement at the end of September. I’m sure the version of NetApp Data ONTAP used in that document is outdated as well.

The current version of RHV is 4.1; 4.0 came out in August 2016 and 4.1 in April 2017, and both are supported. All of the current product documentation can be found at https://access.redhat.com/documentation/en/red-hat-virtualization/ – and you clearly found my blog, where I’ve got some demos and such.

I’m not sure who the Public Sector/SLED rep for Rutgers would be for you, but you can fill out this short form and someone will reach out to you: https://www.redhat.com/en/about/contact – if no one pings you in the next few (3-5) business days, hit me up again here and I’ll find someone for you. 😉

      hope this helps,

      Captain KVM

  2. I keep reading that you need LVM Thin Pool provisioning. I’ve never seen an explanation as to why.

    I guess I also wonder about forward and reverse lookup.

Do you know of, and/or have any links to, documentation that speaks to the specific results of not having these (i.e., would it break outright, work unreliably, load and then break later, or just not work)?

    Thanks for the great walk-through.

    1. Hi Matt,

Sorry for the delay – I was out on holiday. LVM thin pools are required for the lightweight hypervisor (RHVH) because of how its image-based layout works: the partitions are backed by writable snapshots, and those require LVM thin pools. If you don’t use them, the deployment will ultimately fail. The DNS requirement is about ensuring that everything resolves consistently at an enterprise level. That includes RHV-M and the hosts – whether the application internals are doing something or it’s something as mundane as adding a host to a cluster. Relying on /etc/hosts is inconsistent and can break many things.
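To make both of those points concrete, here’s a hedged sketch. The volume group name, hostname, and IP are placeholders, not values from any real deployment (the RHVH installer handles the thin-pool layout for you; this just shows the mechanism):

```shell
# LVM thin pool: a pool LV, then a thin LV carved out of it. Thin LVs are
# what make the cheap writable snapshots behind RHVH's layout possible.
lvcreate --size 50G --thinpool pool0 myvg
lvcreate --virtualsize 20G --thin --name data myvg/pool0

# DNS sanity check: forward and reverse lookups should agree for RHV-M and
# every host (rhvm.example.com / 192.0.2.10 are placeholders).
host rhvm.example.com     # forward: name -> address
host 192.0.2.10           # reverse: address -> name
```

If the forward and reverse results don’t match for the engine and all hosts, fix DNS before deploying rather than papering over it with /etc/hosts.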

      Hope this helps,

      Captain KVM

Agree? Disagree? Something to add to the conversation?