Favorite New Features in oVirt 3.3

Hi folks,

Thanks for checking out my first new “real” post in quite some time. I’ve been neck deep in OpenStack since I got back from Red Hat Summit and I’ve barely had time to come up for air. I’d like to help announce the release of oVirt 3.3, which came out on the 16th (I’m only 4 days late in talking about it!!). A number of new features have been added, and I’m going to highlight my 4 favorites below.

Before I dive into my favorite new features, I want to briefly remind folks why oVirt is important. First and foremost, it is the only fully open source end-to-end virtualization platform. It builds a real and viable ecosystem for KVM, and it represents choice in the data center. Additionally, it provides everyone a transparent view into the very active roadmap for Red Hat Enterprise Virtualization.

Ok, let’s move onto my favorite features. In no particular order:

  • Neutron integration – Neutron is the new name for the Quantum network provider; the same one used in OpenStack (hint, hint). I like this because it provides real network capabilities beyond the Linux bridge utility. The Linux bridge utility’s biggest strength has also been its biggest drawback: simplicity. It’s insanely easy to set up, but that’s only because you can’t do much with it. Neutron, on the other hand, lets folks take their favorite network plugins such as Open vSwitch, Cisco Nexus, the venerable Linux bridge (and many others!) and use them with oVirt. Between Neutron and the plugins, you can add in things like QoS, VLANs, L2-L3 tunneling, and many other necessary tools. It also helps in the convergence of a traditional virtualization platform and OpenStack.
  • Migration network – This may sound like a “so what?”, but it’s not. This is actually very important, depending on the size of your virtualization environment and how many VMs you’re supporting. For the sake of argument, let’s say you’re running 200 VMs on 20 nodes of various sizes. If you’ve got your power saving thresholds and/or load balancing thresholds set up, you could actually flood your management network with migration traffic. This feature allows you to separate the migration traffic onto its own network or VLAN.
  • Disk Block Alignment Tool – If you go back to my very first post on this blog, you’ll see that I opened it with an article on the importance of proper file system alignment in virtual environments. We’ve seen as much as a 40% degradation in performance for misaligned filesystems. It’s an easy problem to avoid, but a difficult one to fix after the fact. The good news is that it’s not an issue for RHEL starting with v6 (and I think Fedora 13, but don’t quote me), or for M$ operating systems starting with 2003 and beyond. Anyway, this tool allows you to view and fix misaligned filesystems for those legacy apps that you’re forced to run on RHEL 3, CentOS 4, Fedora 5, or M$ 2k… (I don’t envy you if you match any of those OS’s)
  • Self-Hosted Engine – This might actually be my favorite new feature. If you’ve read any of my Technical Reports on deploying RHEV with NetApp, I first applaud you for staying awake!!! Secondly, you will see a pattern of deploying RHEV-H and thick hypervisors on one side of an imaginary fence, and deploying the RHEV-M (oVirt Engine) and other management apps as VMs on RHEL 6 hypervisors on the other side. This allows for a “control plane” and a “compute plane”. But mostly it provides the ability to migrate RHEV-M (think oVirt Engine) from node to node, for all the reasons that you would want to virtualize an application in the first place. The Self-Hosted Engine capability makes that separate control plane obsolete, and I’m fine with that; it allows for a much cleaner architecture. Instead of the control plane being a physically separate entity, it’s simply in a different RHEV (oVirt) cluster.
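To put the migration-network point in numbers, here’s a rough back-of-envelope sketch. The VM size, link speed, and dirty-page fudge factor below are my own illustrative assumptions, not figures from oVirt:

```python
# Rough estimate: how long does one live migration occupy the wire?
# Assumptions (mine, not oVirt's): a 16 GiB VM, and a 1.2x factor for
# re-copying memory pages dirtied while the transfer is in flight.
def migration_seconds(ram_gib, link_gbps, dirty_factor=1.2):
    """Estimate seconds a single live migration saturates the link."""
    bits = ram_gib * 8 * 1024**3 * dirty_factor
    return bits / (link_gbps * 10**9)

print(round(migration_seconds(16, 1)))   # one 16 GiB VM over 1 GbE -> ~165 s
print(round(migration_seconds(16, 10)))  # same VM over dedicated 10 GbE -> ~16 s
```

Nearly three minutes per VM on a shared 1 GbE management network: multiply that by a load-balancing event that moves a dozen VMs at once, and you can see why a dedicated migration network (or VLAN) matters.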
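The alignment problem behind the Disk Block Alignment Tool can be illustrated in a few lines of Python. The 4 KiB block size and the sector numbers are illustrative assumptions on my part; the actual tool’s logic may differ:

```python
# Illustration of the misalignment problem the tool detects: a partition is
# "aligned" when its starting byte offset lands on a boundary of the
# underlying storage block size (4 KiB here; an assumption, not a spec).
def is_aligned(start_sector, sector_size=512, block_size=4096):
    """Return True if the partition start falls on a block boundary."""
    return (start_sector * sector_size) % block_size == 0

# Classic legacy default: partitions started at sector 63 (misaligned),
# so every guest I/O could straddle two physical blocks. Modern installers
# start at sector 2048, which is cleanly aligned.
print(is_aligned(63))    # False - the RHEL 3/CentOS 4-era default
print(is_aligned(2048))  # True  - the modern default
```

When every logical block straddles two physical ones, the array does twice the work per I/O, which is where that ~40% degradation comes from.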

For more information on all of the new features, including deep dives, I highly recommend a visit to http://www.ovirt.org/OVirt_3.3_release_notes.

hope this helps,

Captain KVM


7 thoughts on “Favorite New Features in oVirt 3.3”

  1. IIRC the Self-Hosted Engine did not make it into 3.3, but it is available in the repo with the latest ‘n greatest alpha/beta packages (don’t recall what it’s called). Hopefully we’ll see it in 3.4, together with native GlusterFS support on EL6.

  2. Is it possible to hot clone/backup a running VM, so it can be offloaded to backup storage? To my knowledge, cloning is only possible offline and snapshots are only incremental.

    Cheers!

    1. Hi Gerwin,

      From RHEV/oVirt you can in fact take a snapshot of a running VM. And you are correct in that cloning is an offline activity. Everything else requires enterprise storage. Before we go there, also understand that most of the time we don’t really care about the VM itself – we care about the application data, whether it’s a database or something else. The one exception that I can think of is if your VM is a dev workstation or similar. Even so, hopefully the important data is on an NFS export or other central storage.

      Even though I work for NetApp, let’s take a look at enterprise storage from a generic standpoint by talking about features that are somewhat common across the major players. Your data is not “backed up” until it is delivered offsite; after all, if your VMs and datastores are all local, a catastrophe will destroy your data and snapshots. Having your data backed up offsite mitigates that risk. In the “old days” we would create tape backups and send them to a secure facility. Restoring from backup meant scheduling a pickup from the secure facility, waiting, waiting some more, then hoping that it was a valid backup. Now, we use enterprise storage to do a site-to-site transfer of data, most likely mirroring of storage volumes. This can typically be used to restore an entire volume or individual files – either way it is very quick, typically measured in minutes.

      For example, let’s say we have a database running on a VM, and the VM mounts all of the db files from an NFS export served from a storage controller. When it comes time to back up the database, you would put the db in ‘hot backup mode’, take a volume snapshot from the storage controller, then resume the database. Right after that, the storage volume (including any/all snapshots) is mirrored offsite. If something happens to the VM itself, you can likely stand up a new VM from a template faster than you can restore the VM from backup.
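      That hot-backup sequence can be sketched in a few lines of Python. The ordering of steps is the workflow described above; the function bodies are placeholders (a real script would issue commands to the database and call the storage controller’s API), and the log strings are purely illustrative:

```python
# Sketch of the hot-backup sequence: quiesce the db, snapshot on the
# controller, resume. Bodies are stubs standing in for real calls.
def begin_hot_backup(log):
    log.append("db: hot backup on")    # e.g. put the db in hot backup mode

def snapshot_volume(log, volume):
    log.append(f"snapshot: {volume}")  # near-instant on the controller

def end_hot_backup(log):
    log.append("db: hot backup off")   # resume normal writes

def run_backup(volume="db_vol"):
    log = []
    begin_hot_backup(log)
    try:
        snapshot_volume(log, volume)   # keep the hot-backup window short
    finally:
        end_hot_backup(log)            # always resume, even if snapshot fails
    return log

print(run_backup())
```

      The try/finally is the important design point: the database must come out of hot backup mode even if the snapshot call fails, and the window where it is quiesced stays as short as the controller-side snapshot itself.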

      RHEV/oVirt does not have any ‘site to site’ capabilities at this time, and as you pointed out, the RHEV/oVirt (hypervisor-level) snapshots are incremental. In general, enterprise storage (storage-level) snapshots capture a point-in-time copy of the volume (NAS or SAN), sometimes incrementally as well. That makes for a much more efficient use of resources and people. Doing full snapshots/clones from the hypervisor is very time consuming, as it is a serial process, while an enterprise storage controller that snapshots the entire volume captures all of the VMs and data at once.

      If this didn’t answer your question, or if you have additional questions, please feel free to ask!

      Hope this helps,

      Captain KVM

      1. Hi Cap!

        Thanks for the lengthy answer. Our data domain is on a SAN storage system (no NetApp). Most of our servers get backed up at the file level too (if the customer has the SLA for that, though).

        I was just thinking about getting systems back online fast by offloading a “clone” to an offsite location in case of a BIG emergency 🙂

        But o/sVirt development goes at lightning speed, so maybe in the future there will be a feature which can accomplish this.

        Thanks again!

        – Gerwin
