Comparing RHEV, vSphere, & Hyper-V – pt2

Last week, we took a look at some of the cost-related comparisons between RHEV, vSphere, and Hyper-V. That’s all fine and dandy, but to make the most of the comparison, and really put it into perspective, we need to show what you get for the money.

If cost were the only factor, we’d all be driving Kias

And while performance is great too, it’s not the only consideration. If that were the case, we’d all be driving Porsches, Ferraris, etc. The point is that there are always a number of factors in any major decision, whether it’s a car or a virtualization platform.

Today’s post starts to show what you get for your virtualization investment. And while the previous post was a clear shot across the bow with regard to cost, this week’s post is potentially less clear. Not because I can’t explain it (I can), but because it’s all about what you need in your environment and what’s important to you (only you know that). I have a clear winner based on what is important to me (feature-wise), but you may have other features that you want.

So, first off, I’ll show you some of the basics in terms of hardware and virtual hardware limits. It’s important to understand that while both vSphere and Hyper-V have different tiers (Enterprise, Enterprise Plus, Standard, Datacenter, etc.), RHEV does not. When you purchase a support subscription for RHEV, you get everything. vSphere and Hyper-V may require you to purchase a more expensive product to get the same functionality.

                             RHEV 3.0                 vSphere 5   Hyper-V
Hypervisor Max CPU           160 physical CPUs        160 cores   64
Hypervisor Max RAM           2TB (64TB theoretical)   2TB         2TB
Guest Max CPU                64                       32          4
Guest Max RAM                512GB                    1TB         64GB
Guest Max Storage Devices    16                       25 60       4
Guest Max vNICs              16                       10^7        12
Max Hosts per Mgmt Portal    500                      1000        unclear
Max Active Guests per Mgmt   10000                    10000       384

Ok, so right off the bat, Hyper-V doesn’t even compare. Period. Keep in mind that in the last post I showed where Hyper-V cost almost as much as vSphere. It’s like paying double for a “Yugo” because it’s got leather. Beyond that, RHEV and vSphere are close in Hypervisor RAM, and maximum Active Guests per management portal. vSphere has it made in the shade regarding maximum number of storage devices and maximum vNICs in guests. But do you really have a use case for a VM needing more than 16 vNICs? (If you do, feel free to post it in the comments section.)

Let’s move on to some of the actual features. And yes, I’m aware that Red Hat, VMware, and Microsoft all post similar comparisons on their sites and spin them in a way that favors their product. That’s fine, but this is my blog, and this is what’s important to me.

                                   RHEV 3.0   vSphere 5                      Hyper-V 2008 R2
Small Footprint Hypervisor         yes        yes                            no
Memory Overcommit                  yes        yes                            no
Page Sharing                       yes        yes                            no
Processor Hardware Memory Assist   yes        yes                            no
VLAN Support                       yes        yes                            Requires Host & Guest Config
Jumbo Frames                       yes        yes                            yes
High Availability for VMs          yes        Requires Advanced or Higher    Requires Windows Clustering
Maintenance Mode                   yes        yes                            no
Live Migration                     yes        Requires Advanced or Higher    Requires Windows Clustering
Shared Resource Pools              yes        yes                            yes
Cluster Resource Policies          yes        Enterprise & Enterprise Plus   no
Thin Provision Guests              yes        yes                            yes
Templates                          yes        yes                            yes
Central Management                 yes        yes                            Requires Multiple Products
AD Integration                     yes        yes                            yes
RBAC Policies for Admins & Users   yes        yes                            yes

If you look at the Hyper-V column, you’ll notice a lot of “no” and “requires…” type entries. I’m not likely to include Hyper-V in many more posts. Or maybe I’ll just leave it in as an example of what not to buy. (To be fair, when the next version of Hyper-V comes out, I will play nicely and write an update.)

Moving on, (almost*) all of the other comparisons put RHEV on par with vSphere in the areas that are important to me. Yes, vSphere has many more components that I have not listed, and many of those do not have an analog in RHEV. But again, (almost*) all of the pieces and features that are important to me (memory sharing, VLAN support, live migration, etc.) are covered.

For less money.

What about the “*”, you say? What about the most important feature? You’ll have to come back for the next post. 😉

Hope this helps,

Captain KVM

27 thoughts on “Comparing RHEV, vSphere, & Hyper-V – pt2”

  1. You missed a number of important features in your comparison. The first is distributed switching (or centralised switch configuration), which is a feature of Enterprise Plus in vSphere and standard in RHEV. The second is live storage migration, again a feature of Enterprise Plus, but completely missing from RHEV (rumoured to make an appearance in 3.1). As a large user of vSphere in my job, but personally having a preference for RHEV, I keep up with the first two but have no interest in the third. Any reason not to include Xen?

    Also, some of the features in vSphere that are available in Enterprise and Enterprise Plus are included by default depending on the licensing agreement you have with VMware (splitting hairs, I know).

    1. Hi Peter,

      Thanks for taking the time to check out the post, and more importantly, respond. You bring up some great points. In terms of which features I chose to compare, it really came down to things that I consider “core”. If it doesn’t have “X”, it isn’t enterprise. Things like Live Migration and memory page sharing. And yes, I will concede that the list of “core” features is largely subjective.

      In any case, the distributed switch is an interesting one. vSphere does in fact make use of a software-based switch with a lot of functionality. No, RHEV does not have one. But it does provide similar functionality via software-based bridging. One can still carve an interface into VLANs and designate which hosts and/or VMs have access to which networks. And yes, there are plans for storage migration to make an appearance very soon in RHEV. Why? Because enterprise customers like yourself said it was important.
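
      For what it’s worth, here’s roughly what the bridging plumbing looks like on a RHEL 6 host. This is just a minimal sketch that generates the ifcfg files for a VLAN-tagged bridge that guest vNICs can plug into; the NIC name, VLAN ID, and bridge name are placeholders:

          #!/usr/bin/env python
          # Minimal sketch: write RHEL 6-style ifcfg files for a VLAN-tagged
          # bridge that guest vNICs can plug into. Names and IDs below are
          # placeholders -- adjust them for your environment.
          NIC = "eth0"       # physical interface (placeholder)
          VLAN_ID = 100      # VLAN tag (placeholder)
          BRIDGE = "br100"   # bridge the guest vNICs attach to (placeholder)

          vlan_iface = "{0}.{1}".format(NIC, VLAN_ID)

          # Tagged sub-interface, enslaved to the bridge
          ifcfg_vlan = "DEVICE={0}\nVLAN=yes\nONBOOT=yes\nBRIDGE={1}\n".format(
              vlan_iface, BRIDGE)

          # The bridge itself (add IP settings only if the host needs an
          # address on this VLAN)
          ifcfg_bridge = ("DEVICE={0}\nTYPE=Bridge\nONBOOT=yes\n"
                          "BOOTPROTO=none\nDELAY=0\n").format(BRIDGE)

          for path, contents in [
              ("/etc/sysconfig/network-scripts/ifcfg-" + vlan_iface, ifcfg_vlan),
              ("/etc/sysconfig/network-scripts/ifcfg-" + BRIDGE, ifcfg_bridge),
          ]:
              with open(path, "w") as cfg:
                  cfg.write(contents)
              print("wrote " + path)

      (Run it as root and restart the network service. The point is just that there’s no magic in the bridging itself – in RHEV proper, RHEV-M normally pushes this kind of logical network config to the hosts for you.)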

      This is a critical point to make – people like you can have a HUGE impact on RHEV, whether you submit bugs or feature requests directly to RHEV or upstream via oVirt. I guarantee you that getting a feature into RHEV is easier than getting a feature into vSphere at this point. A great example is the distributed switch: if there is functionality in the distributed switch that is absent in RHEV, then open a feature request. Great minds think alike, and there are most likely others requesting similar features.

      But you also make one of my other points – everything that RHEV has, it offers in the base price. You’re not required to purchase additional products, licenses, or add-ons. This is a big deal to me.

      And I won’t knock you for splitting hairs. You’ve made some good points.

      thanks for your time.

    2. Peter,

      I was driving into work this morning and realized I left one of your questions unanswered. Why do I not include any comparisons for Xen? It’s a fair question, to be sure. It’s actually for a couple of reasons:

      * From a Linux perspective, it’s a total hack (in my opinion). Any time a new kernel feature is to be taken advantage of, the Xen bits have to be shoe-horned back into it. In order to take full advantage of para-virtualization (via Xen), you have to alter the host kernel and the guest kernel. It’s not efficient or scalable. KVM, in contrast, is just a module that loads into the host kernel.

      * From an architectural perspective, I’m not a fan of DomUs having to go through Dom0 for I/O. Very inefficient. So is having to write and maintain separate Xen-related I/O drivers.

      * From an upstream perspective, it’s limited. I know folks from Oracle and Citrix have said that “all of the key” Xen bits are in the upstream kernel, but that is a stretch at best. There is still a separate modified kernel for the Xen hypervisor. Sure, Dom0 can be an unmodified distro as long as it’s based on the Linux 3.0 kernel, but that doesn’t change the requirements for Dom0.

      * From a solution standpoint, it doesn’t make sense to me. If you’re a Red Hat shop, go with RHEV or KVM. If you’re a Windows shop, go with RHEV or Hyper-V. If you’re already a VMware shop, then maybe look at RHEV. It made a little more sense when Red Hat was behind Xen, but Red Hat placed it in “step-child” status as of RHEL 5.4 and formally disowned it in RHEL 6.

      In a prior life, I worked with Xen and didn’t really like it. Then KVM came along. I’ve dismissed Xen ever since, for better or worse.

      thanks again for taking the time to reply,

      Captain KVM

    3. Working on the 3.1 version of RHEV… No idea how to configure a vDistributed Switch. Is this a feature (like openvswitch) yet to be added?

      1. Hi Kalirajan,

        RHEV 3.1 only supports the use of Linux bridging. Upcoming versions of RHEV will include support for Neutron (as in the OpenStack network provider). This will allow folks like yourself to plug in Open vSwitch or any of the other more feature-rich networking frameworks. It is supported in the newest upstream release (oVirt 3.3), so I don’t expect it to be too much longer before it makes its way downstream to RHEV.

        Hope this helps,

        Captain KVM

  2. As for RHEV 3.0’s “High Availability for VMs”, ‘not really’ would be more correct. What it really does is that if the VM is not running, then the engine starts it up immediately. But this is of course not high availability. Internally the engine calls it ‘autostart’; honestly, that name sounds more accurate.

    Thanks for the comparison anyway, learned a lot from it!

    1. Hi,

      I get what you’re saying, but it’s really semantics. High Availability doesn’t necessarily mean that it runs non-stop. VMware HA is not much different.
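
      If you’re curious what the knob actually is, it’s basically a flag (plus a restart priority) on the VM. Here’s a rough sketch against the RHEV-M REST API using Python’s requests library; the hostname, credentials, VM ID, and even the exact XML element names are from memory rather than pulled from the docs, so verify them against your own RHEV-M’s /api before relying on it:

          #!/usr/bin/env python
          # Rough sketch: mark a RHEV VM as "highly available" via the RHEV-M
          # REST API. URL, credentials, VM ID, and XML element names are
          # assumptions from memory -- check them against your /api docs.
          import requests

          RHEVM = "https://rhevm.example.com"              # placeholder
          AUTH = ("admin@internal", "password")            # placeholder
          VM_ID = "00000000-0000-0000-0000-000000000000"   # placeholder

          body = ("<vm><high_availability>"
                  "<enabled>true</enabled><priority>50</priority>"
                  "</high_availability></vm>")

          resp = requests.put(
              "{0}/api/vms/{1}".format(RHEVM, VM_ID),
              data=body,
              auth=AUTH,
              headers={"Content-Type": "application/xml"},
              verify=False,  # lab only; point verify at your CA cert in production
          )
          print(resp.status_code)

      With that flag set, the engine restarts the guest (on another host if need be) when it goes down – which is exactly the ‘autostart’ behavior described above.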

      Captain KVM

      1. Agreed. HA != FT.

        Legitimate comment that RHEV may not handle an FT equivalent (personally I’m not sure, as I’m just starting to look at RHEV). Either way, it may fall outside the “core” services, or into the “not included except in the highest license models” argument from above.

    1. Hi Brant,

      Thanks for taking the time to stop by and comment. I’m familiar with ‘openvswitch’. I wasn’t trying to insinuate that there wasn’t an open source software switch out there, only that Red Hat doesn’t specifically support any of them (yet).

      We’ll see what happens in the next year..

      Captain KVM

  3. I work mostly on SMB implementations, and taking cost and performance into account, along with basics like backup times, restore times, and performance monitoring, I’m moving from vSphere to KVM (Proxmox). Performance is huge for me: more performance = less hardware = less cost, plus no initial cost without support contracts. Even with support contracts, KVM cost is much lower. KVM rocks.

  4. I liked the blog and the articles.
    Is there any reason for Solaris Zones not to be in the “mesh”?
    KVM was recently ported to OpenIndiana/Illumos/SmartOS – and when you add the zones as a para-virt method, the awesome features of ZFS, DTrace, and the newly ported KVM, you have a more powerful solution.
    http://smartos.org/
    Thank you for the interesting articles.
    All best,
    Stoyan
    PS: when mentioning Solaris, I also had to point out the not-so-bad performance of “headless” VBox guests on UNIX hosts.

    1. Stoyan,

      Thanks for reading and commenting. To be honest, there isn’t enough time in my day to cover all of the virtualization solutions out there. My main focus is KVM as deployed with Red Hat and Fedora, and the series of comparison articles really just covered the other big virtualization names in the data center. It’s not meant as a slight to the other technologies out there.

      Captain KVM

  5. Many important (but non-buzzword, thus often ignored) features are missing from RHEV 3.0, which is why I decided against it for now.

    For example, RHEV VMs could only reside on one type of storage, you couldn’t hot-add much of anything, you couldn’t put VMs on an isolated VLAN, and you couldn’t move VMs from one network to another without shutting them down!
    These were really unfortunate omissions considering KVM itself can of course do all of this. I’d also argue these probably matter far more to most customers than something like live storage migration, which was always in the press as the “one big missing feature”.

    RHEV 3.1 addresses most of my complaints, so I’ll be taking a look at that soon.

  6. All 3 blogs are very useful, and I can see your clear choice is RHEV, same as mine (based on the requirements I have). Red Hat offers both KVM (kernel/host-based, thick host, RHEL 5.6–6.3 hypervisor, no cost for virtualization) and RHEV (bare metal, thin hypervisor, subscription based on number of sockets). I’d like to know which one is more cost effective and better performing between these two if I count it over, say, 3 years. The reason I’m asking is that though KVM may be cost effective, since there is no cost for virtualization, it will require a certain % of hardware always allocated to run, and further may not provide as good performance as RHEV.

    1. Hi Rajesh,

      Thanks for taking the time to post some comments and questions. Here is how I view the difference in use cases:

      RHEV-M – This is great when you have a need for a ready-made management platform, but it still leaves room for integration via the Python SDK, the RESTful API, and the upcoming RHEV plug-in framework. It’s also great if your virt admins are not as savvy on Linux, as the thin hypervisor (RHEV-H) is extremely easy to install, not to mention that kernel tuning and security are pre-optimized for virtualization. Also, if you need VDI, then RHEV is the way to go.
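
      As a taste of that integration, here’s a tiny sketch with the RHEV Python SDK (ovirtsdk) that just lists your VMs. The URL and credentials are placeholders, and the method names are from memory, so double-check them against the SDK docs:

          #!/usr/bin/env python
          # Tiny sketch using the RHEV 3.x Python SDK (ovirtsdk) to list VMs.
          # URL/credentials are placeholders; method names are from memory.
          from ovirtsdk.api import API

          api = API(url="https://rhevm.example.com/api",
                    username="admin@internal",
                    password="password",
                    insecure=True)   # lab only; use ca_file=... in production
          try:
              for vm in api.vms.list():
                  print("{0}  {1}".format(vm.get_id(), vm.get_name()))
          finally:
              api.disconnect()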

      RHEL+KVM – This is great for environments that already have everything they need for management, monitoring, reporting, and lifecycle management as RHEL+KVM can be plugged right in. Engineers and admins can customize the hypervisor for specific needs. And while there isn’t the RESTful API or the Python SDK, if you can talk to RHEL, you can talk to KVM (it’s part of the kernel).
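
      And to illustrate the “if you can talk to RHEL, you can talk to KVM” point, here’s the same sort of inventory pulled straight off a RHEL+KVM host with the stock libvirt Python bindings, no RHEV-M involved (a minimal sketch; listAllDomains() needs a reasonably recent libvirt):

          #!/usr/bin/env python
          # Minimal sketch: talk to KVM on a plain RHEL host via libvirt-python.
          # Needs libvirt >= 0.9.13 for listAllDomains(); on older builds use
          # listDomainsID()/listDefinedDomains() instead.
          import libvirt

          conn = libvirt.openReadOnly("qemu:///system")
          try:
              for dom in conn.listAllDomains(0):
                  state, maxmem, mem, vcpus, cputime = dom.info()
                  print("{0:30} active={1} vcpus={2} mem={3}MB".format(
                      dom.name(), dom.isActive(), vcpus, mem // 1024))
          finally:
              conn.close()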

      As for performance, they’re going to be close – it’s still KVM under the covers in both cases. RHEV just takes it a step further with the thin hypervisor and the management tools. RHEV 3.0 uses RHEL 6.2 as the base, and there have been some improvements included in RHEL 6.3 and 6.4 (currently in beta). RHEV 3.2 will be rebased on RHEL 6.4, and RHEV 4.0 will be rebased on RHEL 7 (late next year).

      Hope this helps,

      Captain KVM

  7. Hey Cap’n KVM,
    I’d love to see an updated post with your opinion of the current / just-released versions of vSphere (5.1), Hyper-V (3.0), and KVM. I would expect the new features and cost differences among them could point more people to one solution vs. another.

    Cheers

    1. Andrea,

      I think that’s a great idea. I may just do that once RHEV 3.1 comes out. (Due out in December.)

      Captain KVM

  8. I’m looking at the table, and you say that the guest max CPU and max memory would be higher in RHEV 3.1 than in RHEV 3.0. I’m looking at the documentation of RHEV 3.1, and Red Hat is telling me that the limits are the same for 3.1 and 3.0. Are you talking about support limits or not?

    https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualization/3.1/html-single/Hypervisor_Deployment_Guide/index.html#References_RHEL_6_RHEVH_Support_Limits_Guests

    1. Hi Jurrien,

      The article was accurate at the time of writing, based on the information that I was given at the time. The limits may still get bumped up in 3.2 or 4.0. In the meantime, I will update/edit the article.

      thanks for stopping by,

      Captain KVM

  9. Thanks for the very informative original post.

    I’ve tried hard to find the upper limit on the maximum number of vNICs supported by KVM.

    Does RHEV 3.1 still have 16 vNICs as the maximum limit?

    I’m working on a possible networking application which might need more than 16 vNICs. Is this a KVM limitation or a max limit put by RHEV?

    Thanks,
    Fred

    1. Hi Fred,

      Thanks for dropping by! RHEV 3.1 now supports up to 30 virtual PCI devices, including vNICs. However, keep in mind that some of these virtual PCI devices will be taken up by the PCI host bridge, ISA bridge, USB bridge, board bridge, graphics card, and block devices. So assuming you only have one of each, that still leaves you with 25 devices. Theoretically, 256 device functions (bus, slot, memory balloon driver, etc.) are available per guest; in practice, however, this is limited to 30. See here for more details.
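
      If you want to see exactly where a guest’s slots are going, the quickest check I know of is to count the PCI addresses in its libvirt domain XML on the host it runs on. A rough sketch, assuming direct libvirt access and a guest name of ‘myguest’ (a placeholder):

          #!/usr/bin/env python
          # Rough sketch: count the PCI addresses a running guest is using by
          # inspecting its libvirt domain XML. "myguest" is a placeholder.
          import libvirt
          import xml.etree.ElementTree as ET

          conn = libvirt.openReadOnly("qemu:///system")
          dom = conn.lookupByName("myguest")
          root = ET.fromstring(dom.XMLDesc(0))

          pci = [a for a in root.iter("address") if a.get("type") == "pci"]
          print("{0} PCI addresses in use (out of the ~30 usable)".format(len(pci)))
          for a in pci:
              print("  bus {0} slot {1}".format(a.get("bus"), a.get("slot")))
          conn.close()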

      Hope this helps,

      Captain KVM

    1. Zogness,

      I’m usually fairly decent with grammar and spelling… Even so, this is an informal blog. As such, most of the articles are written in a “stream of consciousness” style. In other words, in this forum, I’d rather capture the spirit of the topic and worry less about nailing grammatical errors 100%. I have 20 or so whitepapers where I put equal amounts of pressure on technical and grammatical accuracy.

      Captain KVM

  10. Quick note: vSphere supports a max of 10 vNICs, not 10 to the power of 7 vNICs. The ^7 is frequently misunderstood, but it actually just references footnote number 7 in the source document the info was taken from. So RHEV has it beat.

    1. Hi Peter,

      Thanks for the update!! Normally I would go back and update the post, but that post (and version of RHEV) is really outdated. And the newest version of RHV is MUCH better.

      Captain KVM

Agree? Disagree? Something to add to the conversation?