Comparing RHEV, vSphere, and Hyper-V – pt3

In the previous comparison posts, we took a peek at how RHEV, vSphere, and Hyper-V compared in some cost scenarios as well as major features. In this final installment, we’ll take a look at what I believe is the most important feature. Let me give you a hint:

RHEV does not win here. Yet.

Arguably the most important “feature” for a virtualization platform is integration with enterprise storage. This is where vSphere is currently the leader. In spades. They have remarkable integration with EMC, NetApp and others. Hyper-V has integration with NetApp and others.

So what exactly do I mean when I say “integration”? Specifically, I’m talking about moving beyond just a storage protocol and actually providing valuable solutions to virtualization related problems. Just being able to say “yeah, it works with NFS, iSCSI, and FCP” is not enough.

In the enterprise, the ability to offload copy/clone activities from the hypervisor to the storage array is critical. So is providing data protection for data stores that is transparent to users and the hypervisor. Storage plug-ins that live within the virtualization manager are a real convenience and streamline the administration process.
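The offload idea can be sketched in miniature. A hypervisor-side clone has to read and rewrite every block over the wire, while an array-side clone simply records a new reference and copies a block only when it is later written. This toy Python model is purely illustrative – it is not any vendor’s API, just the copy-on-write concept that makes array-side cloning nearly instant:

```python
class Volume:
    """Toy model of a storage volume: a dict of block address -> data."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)

def full_copy(parent):
    """Hypervisor-side clone: every block is read and rewritten up front."""
    return Volume(parent.blocks)

class CowClone:
    """Array-side clone: shares the parent's blocks, copies only on write."""
    def __init__(self, parent):
        self.parent = parent
        self.delta = {}               # holds only blocks written after cloning

    def read(self, addr):
        # Prefer our own copy of a block; fall back to the shared parent.
        return self.delta.get(addr, self.parent.blocks[addr])

    def write(self, addr, data):
        self.delta[addr] = data       # the parent volume stays untouched

parent = Volume({i: b"base" for i in range(1000)})
clone = CowClone(parent)              # "instant": no data movement at all
clone.write(0, b"changed")

assert clone.read(0) == b"changed"    # clone sees its own write...
assert clone.read(1) == b"base"       # ...and the parent's data elsewhere
assert parent.blocks[0] == b"base"    # parent is unaffected
assert len(clone.delta) == 1          # only one block actually copied
```

That single-block delta is why array-side cloning scales: a thousand-block volume was “cloned” while moving one block’s worth of data, and the hypervisor never touched any of it.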

What about RHEV? What kind of integration does RHEV have with any enterprise storage? Well, to be honest it’s rather light – for now. Here is an ultra-brief (sans Hyper-V) comparison of the integrations available for vSphere and RHEV:

                                      RHEV 3.0   vSphere 5
Storage Plugin for Mgmt Console       no         yes
Integrated Data Protection Tools      no         yes
Rapid Cloning Tool for VMs            no         yes
Cloning Data Stores for DR/dev/test   no         yes

Again, it’s just a brief peek, but it should be enough to illustrate the gaps in integration. That’s not to say that RHEV doesn’t have options – I’ve written a few documents that cover using Snap Creator, SnapMirror, and MetroCluster (all NetApp products) for RHEV. They all work well, but there isn’t much in the way of real integration.

Getting that integration work rolling is my day job at NetApp. We’ve got some heavy lifting underway right now as we work with the great folks that contribute and guide oVirt.

What’s oVirt, you ask? If you’re familiar with the concept that all of Red Hat’s RHEL development occurs in Fedora, then the same analogy holds for oVirt and RHEV. That is to say, Red Hat does all of their development work for RHEV in oVirt. oVirt itself is the upstream project for KVM and RHEV, and there are many folks, both commercially sponsored and truly independent, who are involved in growing the ecosystem around RHEV and KVM.

So what kind of integration is in the works for RHEV? And doesn’t this imply NetApp specifically, considering the author of this blog? (Man, you’re a clever reader…)

Well, here’s what’s in the works:

Tool                            Integration Point                             Purpose
Rapid Cloning Tool for VMs      Integrates NetApp FlexClone with RHEV-M       Create VMs on demand from the storage array
Snap Creator KVM Module         Integrates NetApp Snap Creator with RHEV-M    Manage complex backup requirements for RHEV & KVM
NetApp Storage Plugin for RHEV  Integrates NetApp tools into RHEV-M           Provision & manage NetApp storage from RHEV-M

WARNING!! SHAMELESS OPEN SOURCE RECRUITING EFFORT BELOW!!

So, how do you get involved? (hint, hint.) For starters, go to http://ovirt.org and download the packages. Dig around the site and check things out. Look for an area that interests you and start hacking the code. Tell your geek friends to check it out.

And have fun with it.

Hope this helps, and I hope you join the oVirt efforts,

Captain KVM

9 thoughts on “Comparing RHEV, vSphere, and Hyper-V – pt3”

  1. I was wondering what your thoughts on RHEV + pNFS are? I know RHEL6 supports pNFS, and in theory parallel object storage + server virtualization should have less storage management overhead than shared block storage (goodbye storage vMotion).

    1. Hi Josh,

      Thanks for taking the time to post a comment. In theory, pNFS + RHEV should be absolutely great. I haven’t seen it in practice yet, because it’s really very new. RHEL 6.2 supports the pNFS client in “Tech Preview”. RHEV-H doesn’t support it yet, but Red Hat has it on the road map. And NetApp just released support for the matching pNFS server in Data ONTAP 8.1 Cluster-Mode. The NetApp implementation of pNFS really offers multipathing for NFS without having to use a separate driver. NetApp Cluster-Mode allows you to move your volumes from controller to controller transparently. That’s what will provide the ‘storage v-motion’ like capability.

      Captain KVM

  2. Hi Cap’n!

    Thanks for the great series of articles – very interesting and useful but I still have a couple of unanswered questions:

    1 – I’ve done some rudimentary benches of SQL Server 2012 running under ESXi 5, PVE 2.1, and Hyper-V. ESXi and HV performed approximately the same and considerably better than PVE/KVM. So am I likely to see a similar difference in performance if I were to do the same test under RHEV 3, i.e., does KVM perform the same under PVE as it does under RHEV? I’d test it myself if I had the hardware to do so, but I don’t, and I cannot find benchmarks or stats for 2008r2 running under different hypervisors anywhere.

    2 – Would it be possible to virtualise OSX server under RHEV and if so do you know how that would stack up against ESXi 5 on the same hardware?

    Thanks!

    Dan

    1. Hi Dan,

      You’ve got some interesting questions, to be sure. (Interesting is a good thing.)

      1-a. Windows 2008r2 will require a few tweaks to run well on any hypervisor that doesn’t rhyme with “Hyper-V”. I actually described part of this in another comment in the last few days (your timing is impeccable!).

      Quoting myself,

      “Thanks for dropping by. I hear your particular complaint quite a lot – no issues with Linux guests, but slow Windows guests. The first thing to check is that the NIC drivers on the guest are up to date – in your case either virtio or e1000. If that doesn’t change anything, check out this link: http://www.linux-kvm.org/page/WindowsGuestDrivers/kvmnet/registry . It offers guidance around getting better throughput via editing the Windows registry. Essentially in Windows, larger packets will cause buffering and a large number of context switches – both will put a drag on your performance.”

      Check out the link for some tips on tuning Windows w/ KVM.

      1-b. I’m not familiar with how Proxmox implements KVM, so I don’t feel comfortable saying that performance of Proxmox/KVM is on-par with RHEL/KVM. Think of it this way – even in a pure Red Hat environment, there could be a difference between RHEV-H (v3) and RHEL 6.2+KVM, even though they are running the same kernel and modules. If the thick hypervisor (RHEL 6.2+KVM) also has a GUI and a bunch of other services, it’s not going to have the same performance as RHEV-H.

      1-c. I’m not surprised that there isn’t much in the way of benchmarks for Win2k8 on any hypervisor other than Hyper-V. Microsoft may do the testing internally, but they aren’t going to post the results unless they are favorable. But to be fair, the same could be said for Red Hat or VMware.

      2. I’ve never tried to virtualize OS X. Obviously it isn’t something that Red Hat will support, but they certainly won’t stop you from running it on RHEL/KVM or RHEV. But I can tell you that current benchmarks show KVM outperforming just about everyone for virtualizing Linux… Perhaps the BSD base for OS X will also benefit from that.

      I hope this helps, don’t hesitate to ask more questions or clarifications,

      Captain KVM

      1. Hi Dan,
        I’m getting started with datacenter virtualisation planning for my shop, and need to find products that will allow for virtualisation of Windows, OS X, and possibly Linux hosts.
        Would you care to explain why it’s obvious that Red Hat wouldn’t support Apple OS X as both host and guest (as Apple licensing requires) when you can do it with vSphere?
        I could live without a support contract that covers issues with OS X as a guest or Apple HW as a host, but getting it to work should require no more fiddling than maybe slipstreaming a hardware driver for an extra NIC.
        Cheers,
        Martinus

        1. Martinus,

          Thanks for stopping by and taking the time to leave some questions. As far as I know, no current enterprise hypervisor (VMware ESX/ESXi, Citrix XenServer/XenDesktop, RHEL+KVM/RHEV, MS Hyper-V) officially supports OS X as a guest. The demand for OS X in the enterprise is really in the workstation and laptop segments, not the server space or virtual desktop space. Having said that, installing OS X as a guest on any of those virtualization platforms will only mean that you don’t get support for the guest. It will not affect support for the hypervisor. As for using an Apple server running RHEL+KVM or RHEV-H, Red Hat wouldn’t care; as far as the server platform goes, they only care that you have Intel or AMD CPUs with the proper virt extensions as well as sufficient memory.

          As for “getting it to work”, I agree, it should be fairly straightforward.

          Captain KVM

  3. Thanks for the quick reply!

    I’ve checked out that link and that is very handy info, but applying those tweaks would’ve made no difference to my SQL Server benchmarks. I was running sqlstress on the same VM as SQL Server, so network latency and drivers etc. weren’t being tested – instead I was testing CPU and disk IO. And yes, I had the very latest virtio drivers installed and 2008r2 was fully updated.

    I have emailed RH and requested benchmarks or any other info they may have on virtualizing Windows Server under RHEL vs vSphere, but I suspect I’ll have to do it myself unless someone out there beats me to it… feel free to jump right in there and beat me to it, readers! 😉

    I realise OS X would be unsupported by both RH and Apple, but I was just curious if you’d tried this under RHEV yet – obviously not.

    I’ll let you know if RH come back to me with the forbidden benchmarks!

Agree? Disagree? Something to add to the conversation?