Why is NetApp involved in oVirt?

In my last post, I described the “how” of NetApp getting involved with oVirt, but I really didn’t get into the “why”, except for leaving off with “integration”. And that is exactly where this post picks up.

So what is integration? Are we talking commercial sales teams or open source board members sitting around a campfire singing “Kumbaya”? Or awkwardly bolting disparate technologies together? No. I’m not talking about duct tape and lipstick. I’m talking about beer and pizza. I’m talking about things that go together naturally: real, impactful integration. Integration between a hypervisor and an enterprise storage array is what takes a datacenter from mere server virtualization to automation and scale on demand.

This is where NetApp’s involvement with oVirt is critical. The type of integration that I’m referring to here involves offloading all storage activities from the hypervisor to the storage array. And yes, I’ve covered offloading before, but I’m pulling it into specific context here.

For example, when you clone a virtual machine using the native cloning tools (on any hypervisor), you have to make sacrifices. If you need the clone fairly quickly, you create a thin clone, but you take performance hits on the VM. If you go with a thick clone, you preserve VM performance, but the clone operation might take 10+ minutes. That’s hardly a formula for success. In either case, the cloning operation takes CPU cycles, memory, and I/O away from the hypervisor’s primary job: running VMs.
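
To make that trade-off concrete, here’s a minimal sketch of what the two native approaches look like with KVM’s own tooling. The image paths are hypothetical, and exact `qemu-img` options vary by version; the point is simply thin overlay vs. full copy.

```python
import subprocess

BASE = "/var/lib/libvirt/images/base.qcow2"  # hypothetical golden image

# "Thin" clone: a qcow2 overlay that points back at the base image.
# Near-instant to create, but reads that miss the overlay fall through
# to the backing file, which costs the VM I/O at runtime.
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2",
     "-o", f"backing_file={BASE},backing_fmt=qcow2",
     "/var/lib/libvirt/images/thin-clone.qcow2"],
    check=True,
)

# "Thick" clone: a full, independent copy of the image. No runtime
# penalty, but the hypervisor burns CPU, memory, and I/O copying every
# block, and a large image can take many minutes.
subprocess.run(
    ["qemu-img", "convert", "-O", "qcow2",
     BASE, "/var/lib/libvirt/images/thick-clone.qcow2"],
    check=True,
)
```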

In contrast, offloading that cloning activity to the storage array means you can spin up several thick clones in seconds. And because we can thin provision the storage, your storage usage still looks good in that little black dress. Now you don’t have to sacrifice cloning speed or storage efficiency. Nor do you have to take resources away from your existing VMs. Better yet, the integration means that the cloning activity occurs either directly from RHEV-M or oVirt Engine, or from an API. The offload integration is just as easy as the native tools, but faster, and it becomes the streamlined workflow required for real automation.
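
Just to illustrate the shape of that workflow (this is not a real NetApp or oVirt API; the endpoint and fields are made up), an offloaded clone boils down to one small request to the array rather than a long copy on the host. The array does the work internally, so no VM data ever flows through the hypervisor.

```python
import requests

# Hypothetical storage-array clone endpoint, purely for illustration.
ARRAY = "https://storage-array.example.com/api"

resp = requests.post(
    f"{ARRAY}/clones",
    json={
        "source_volume": "vm_datastore01",
        "source_file": "webserver01.img",
        "clone_name": "webserver01-clone.img",
        "thin_provisioned": True,  # space-efficient on the array side
    },
    timeout=30,
)
resp.raise_for_status()
print("Clone created:", resp.json())
```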

Now, expand that same concept to cloning an entire datastore in seconds. Work against real data without risking corruption, disruption, or impact to production. Want to see how your virtualized applications run on a different server platform, patch level, or network configuration? Clone the VM datastores as well as the application datastores, and mount the clones in your dev/test environment. Your testing time just got cut from weeks to days. Change management just got less painful. (Sorry, I can’t take all the pain away from change management…)
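
As a rough sketch (the server, export, and mount point below are hypothetical), attaching a cloned NFS datastore to a dev/test host is little more than a mount; no data had to be copied over the wire to get here.

```python
import os
import subprocess

# Hypothetical names: the array has just cloned the production datastore
# export into a new export for dev/test.
NFS_SERVER = "netapp01.example.com"
CLONE_EXPORT = "/vol/vm_datastore01_devclone"
MOUNT_POINT = "/mnt/devtest_datastore"

os.makedirs(MOUNT_POINT, exist_ok=True)
subprocess.run(
    ["mount", "-t", "nfs", f"{NFS_SERVER}:{CLONE_EXPORT}", MOUNT_POINT],
    check=True,
)
```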

Here’s another big reason for NetApp to integrate with KVM-based virtualization: KVM is already part of the Linux kernel. It’s a light switch; just turn it on. If I’m a customer, I want to take advantage of that with real automation between my KVM-based hypervisor and my enterprise storage. If I’m already using Red Hat (or another distro) and NetApp, make it easy for me to put the two together.
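
A quick way to see that “light switch” for yourself, as a trivial sketch with nothing vendor-specific in it:

```python
import os

# KVM ships with the Linux kernel; "turning it on" is mostly a matter of
# confirming the CPU supports it and that the kvm modules exposed /dev/kvm.
with open("/proc/cpuinfo") as f:
    flags = f.read()

hw_virt = ("vmx" in flags) or ("svm" in flags)  # Intel VT-x or AMD-V
print("Hardware virtualization flags present:", hw_virt)
print("/dev/kvm available:", os.path.exists("/dev/kvm"))
```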

But as I mentioned earlier, this offload integration means that you move your data center away from duct tape & lipstick server virtualization and towards beer & pizza automation that scales on demand.

So… how is NetApp actually implementing this integration? Check back for the next post…

Captain KVM

Agree? Disagree? Something to add to the conversation?