VMware Dependency on Linux

Disclaimer – This is actually a re-post of a colleague’s Google+ post. I did not author this, nor am I claiming credit for this. Andrew Cathrow originally posted this on April 8. I just thought enough of it to repost it here. Additionally, while my preference/bias is clearly slanted towards KVM, VMware does have a solid product. They’re just not up front about their dependencies…

VMware has a lot of rather deceptive marketing around KVM and Linux. They have gone to great pains to hide their complete reliance on open source, and Linux in particular. You may be saying to yourself, “That was in the past; today, with ESXi, there is no more Linux because they removed the service console.”

You’d be wrong. Actually, you’d be really, really wrong.

It’s worth taking the time to download the 400MB tar file that contains the open source code used in ESXi 5 – note this isn’t ESX with a service console, it’s ESXi.

So what will you find in there?

The first secret that so many people don’t know about is vmklinux. This is a subsystem that runs on ESXi and provides a compatibility layer to load Linux device drivers. Take a look at the drivers in the vmkdrivers-gpl/vmkdrivers/src_9/drivers directory of this archive, then compare them against the VMware hardware compatibility list if you still don’t believe it. Yes, that’s right, VMware has a shim layer running in ESXi so they can load Linux drivers in their proprietary kernel without worrying about that pesky old GPL license.
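
For the skeptical, here’s a minimal way to peek inside the archive and count those Linux-derived driver sources yourself. This is only a sketch: the archive filename below is a placeholder, and it assumes the driver sources sit directly inside the tar file rather than in a nested tarball – adjust both to match what VMware actually ships.

```python
import tarfile

# Placeholder filename – use whatever VMware's open source
# tarball for your ESXi release is actually called.
ARCHIVE = "VMware-ESXi-open-source.tar.gz"

# The directory named above, where the Linux-derived driver sources live.
DRIVER_DIR = "vmkdrivers-gpl/vmkdrivers/src_9/drivers"

with tarfile.open(ARCHIVE, "r:*") as tar:
    # Collect every C source file under the driver directory.
    drivers = sorted(n for n in tar.getnames()
                     if DRIVER_DIR in n and n.endswith(".c"))
    print(f"{len(drivers)} Linux-derived driver source files, e.g.:")
    for name in drivers[:10]:
        print(" ", name)
```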

So that’s the driver layer – the hardware interface is derived from (GPL) Linux. Have a look through the rest of the tar file – I guarantee you’ll recognize some other familiar packages: gcc, glibc, gtk, libusb, ntfsprogs, e2fsprogs, sblim, open-iscsi, procps, and many others.

Obviously it’s not wrong to use open source code, especially if you’re complying with the license, but the next time you hear VMware’s marketing team claiming that KVM is an inferior hypervisor because it’s based on Linux, you might want to remind them that ESX wouldn’t exist without it either.

6 thoughts on “VMware Dependency on Linux”

  1. You’re obscuring the point. The claim is not that having Linux breathe on it makes it inferior. The claim — which I wholeheartedly support, and which I think makes the Xen model make more sense as well, whether or not you think the code is as good — is that scheduling resources for individual processes and for entire virtual machines as though they are the same thing is absurd, and that an operating system that treats every workload, from an operating system instance with hundreds of processes to a single command line utility, in the same general-purpose way cannot be as useful as an operating system built solely to run operating systems. It is the whole “Linux kernel as the center of the universe” notion that is the weakness, not the presence of a subset of Linux code used tangentially where it is useful. The GPL drivers were intended a decade ago to be a stopgap until resources were available to do vmkernel drivers from scratch; the need to keep engineers working on the things that could be done better purpose-built, and the fact that Linux drivers and vmklinux offered a good-enough solution, eventually moved that out of the picture.

    1. Hi Roger,

      Thanks for taking the time to post and respond. I apologize for the delay in responding; I’ve been tied up with the day job.

      I’m honestly not trying to obscure anything; the point of the post was that VMware is obscuring the fact that they have dependencies on things that they claim are inferior.

      As for Xen, it’s not an optimal design – virtual CPUs and RAM go one direction, and I/O has to go through a specially created VM. To get the best performance, both the hypervisor kernel and the guest kernels have to be (heavily) modified to accept the Xen bits (“para-virtualization”). There is no upstream kernel that contains all of the necessary components to run Linux as a Xen hypervisor or Xen guest. Non-open guest operating systems (Windows, Unix, etc.) cannot be modified to run optimally (para-virtualized) on Xen; the only option is to run them “fully virtualized”. And Linus Torvalds refuses to incorporate Xen fully into the Linux kernel, because it wants to do unnatural acts to the Linux kernel.

      As for any approach that settles for “good enough”, how is that even acceptable? I can completely accept it as a “stopgap” and “we have to get this to market”, but the roadmap has to include “improvements to XYZ”. That’s not just for VMware, that’s for any software vendor.

      This was the point of KVM. A tiny little company in Israel wanted to build a Windows VDI solution around Xen, and they found that it was not going to work for them (not good enough). In a year, they had written KVM and got it accepted in the upstream Linux kernel. Why? Because it doesn’t reinvent the wheel and it takes advantage of both the Linux kernel and CPU extensions built for virtualization.

      It also means that not only is the hypervisor (in this case, the Linux kernel + a kernel module) a true Type 1 hypervisor, but technically the VMs are bare-metal as well. Why? Because they’re running directly on the bare metal as processes – processes that have seamless access to resources as defined by the hypervisor (there’s a short sketch after this comment thread that demonstrates the point). This also means that when you call a Linux vendor for support (Red Hat, SuSE), they make zero distinction between an application that runs natively or virtually; there is no “please reproduce on bare-metal”, because it already is on bare-metal.

      Captain KVM

      1. “As for Xen, it’s not an optimal design – virtual CPUs and RAM go one direction, and I/O has to go through a specially created VM”… yes, less than ideal, but more so than having everything go through a Linux kernel. And the case can be made that while a scheduling decision for an entire VM is not the same as a scheduling decision for a minor-league utility, an I/O is an I/O is an I/O — meaning that since I/O is initiated by user processes for the most part, the needs of a user process inside a VM and a user process in the control domain are identical, so it’s not really an issue for I/O.

        “There is no upstream kernel that contains all of the necessary components to run Linux as a Xen hypervisor or Xen guest.” The cult of Linus. This really doesn’t matter a bit. By the time my meal hits the table, it doesn’t matter to me if it all came out of the same oven, or if part of it was cooked in the microwave. The purity of the Linux kernel is a silly issue.

        “A tiny little company in Israel wanted to build a Windows VDI solution around Xen, and they found that it was not going to work for them (not good enough).” Let’s be accurate here. Moshe Bar had a hissy-fit about being pushed out of XenSource and established that company in Israel, then had to do something that wasn’t Xen-based.

        “In a year, they had written KVM and got it accepted in the upstream Linux kernel. Why? Because it doesn’t reinvent the wheel and it takes advantage of both the Linux kernel and CPU extensions built for virtualization.” You act like having it accepted into the upstream Linux kernel says much of anything about the quality of the approach. It only says something about the acceptability of the approach to people who believe the pristine purity of the Linux kernel is itself a goal, which it really isn’t to anyone other than Linus and his disciples. As for taking advantage of the Linux kernel, see the previous point. As for taking advantage of the Intel and AMD hardware virtualization assist, Xen does it, but it reserves that approach for where it’s necessary. Paravirtualization is intrinsically a better approach when it can be used — it’s more efficient, and a kernel purpose-built for virtualization can make smarter decisions than one that relies on the CPU to make them for it.

        The “reproduce on bare-metal” issue simply need not happen. I was at VMware when we said that that was our policy, and I think you could count on the fingers of one hand the number of times that directive was given and still have enough fingers left over to bowl a strike. As for Red Hat, the decision to favor KVM followed years of Red Hat’s making it clear to XenSource and Citrix that they believed other open source companies should be development labs and the “mission” of commercialization should be left to them — when XenSource/Citrix refused to step back, the KVM “Plan B” was pulled out of the drawer. as for SuSE, Zen was their favored approach until their staff shrank sufficiently that they had to run with what kernel.org gave them — but that is an operational efficiency, not a technical one.

        Finally, your “running on the bare metal as processes” notion seems the most phenomenal twisting of what “running on” means that I’ve ever seen. A “bare metal” process would be one that doesn’t use an operating system or a hypervisor. Those processes are running within a hypervisor that is a driver of another operating system, and insisting otherwise is clapping for Tinkerbell.

        1. And pardon the typo of “Xen” and the capitalization error, not to mention that I missed the fact that my normal screenname was still filled in on the comment validation form.

        2. Roger,

          We’re simply going to have to agree to disagree. My blog is “CaptainKVM”, so I’m clearly going to have a KVM bias. It’s not perfect, but of all the hypervisors I’ve played with and/or supported operationally, it’s my favorite. You seem to have some deep-seated animosity toward KVM, or at least toward anything non-Xen/non-ESX.

          I don’t see either of us convincing each other to switch position.

          Captain KVM
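
To make the “VMs are just processes” point from the thread above concrete, here’s a minimal sketch. It assumes a Linux host with the kvm module loaded and guests launched through QEMU; the guest process name varies by distro (qemu-kvm, qemu-system-x86_64, etc.), so the qemu prefix match below is an assumption. It checks for the /dev/kvm device node and the vmx/svm CPU flags, then lists running guests as the ordinary Linux processes they are.

```python
import os

# KVM's kernel side is just a device node once the kvm module is
# loaded; hardware virtualization support shows up in /proc/cpuinfo
# as the vmx (Intel VT-x) or svm (AMD-V) flag.
print("/dev/kvm present:", os.path.exists("/dev/kvm"))
with open("/proc/cpuinfo") as f:
    flags = f.read()
print("CPU virt extensions:", "vmx" in flags or "svm" in flags)

# Every running guest is an ordinary userspace process, visible to
# ps, top, kill, and the rest of the standard Linux toolbox.
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            comm = f.read().strip()
    except OSError:
        continue  # process exited while we were scanning
    if comm.startswith("qemu"):  # assumption: distro uses a qemu* name
        print(f"guest running as a plain process: pid={pid} comm={comm}")
```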

Agree? Disagree? Something to add to the conversation?