RHEL 6, KVM, & NetApp Storage – updated!!!

Hi folks,

I’m happy to report that my updated technical reports around RHEL 6, KVM, and NetApp storage have been published. Whether you are looking for the best practices, how to deploy the best practices, or simply need something to get you to sleep (‘cuz the Ambien isn’t cutting it), these are for you!

TR-3848 RHEL 6, KVM, and NetApp Storage: Best Practices

TR-4034 RHEL 6, KVM, and NetApp Storage: Deployment Guide

Thanks again for following!


2 thoughts on “RHEL 6, KVM, & NetApp Storage – updated!!!”

  1. These guides are very useful, but it’s not clear what the recommended file/block layering is for thin provisioning of KVM guests when using iSCSI.

    Should we create one LUN per KVM guest (partitioned as appropriate) and put those LUNs in the same FlexVol to take advantage of deduplication? I believe live migration would still work in this scenario. We have:

    Raid Group -> Aggregate -> FlexVol -> FlexLun -> Partitions/LVM PG/VG/LV -> ext4 (guest fs).

    Or should we create a single (big) LUN with a large LVM Volume Group on top of it to hold the guest filesystems? Each guest would be a Logical Volume in the Volume Group:

    Raid Group -> Aggregate -> FlexVol -> FlexLun -> LVM Volume Group (KVM host) -> LVM Logical Volume(s) (KVM host) -> ext4 (guest fs)

    1. Hi kvmer,

      Thanks for taking the time to comment. While there is no technical or support limit to prevent you from having a 1:1 LUN-to-VM ratio, it would likely be tedious to maintain. There are two primary ways of dealing with block storage – one of which you already mentioned.

      Managing the LUN as a volume group makes sense for several reasons. Again, you mentioned the first one – each VM is hosted as a logical volume. Next, if you need to grow the LUN (or add a LUN) on the NetApp side, you can easily extend the volume group. You can also extend an individual logical volume to grow a VM's disk. This is actually how RHEV handles block storage.
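      To make that concrete, here's a rough sketch of the LVM commands involved on the KVM host. The multipath device names, volume group name, and sizes below are all hypothetical – substitute your own:

      ```
      # One NetApp LUN becomes an LVM physical volume on the KVM host
      # (device names are hypothetical):
      pvcreate /dev/mapper/netapp_lun0
      vgcreate vg_guests /dev/mapper/netapp_lun0

      # Each VM gets its own logical volume as its virtual disk:
      lvcreate -L 20G -n lv_vm01 vg_guests

      # Growing later: add a second LUN to the volume group...
      vgextend vg_guests /dev/mapper/netapp_lun1
      # ...or grow an individual VM's disk:
      lvextend -L +10G /dev/vg_guests/lv_vm01
      ```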

      The other way that you could use block storage is to put a file system on it (I strongly suggest layering LVM underneath here as well). The benefit here is that the VMs are simple files, just as they would be on an NFS volume. You may have concerns about multiple hypervisors writing to the same mounted file system. However, it's not as if they are writing to the same file(s) simultaneously. The only concern would be putting in a safeguard to prevent multiple hypervisors from trying to run the same VM simultaneously – that will in fact corrupt the VM.
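      A minimal sketch of that second layout, again with hypothetical device names, mount point, and sizes:

      ```
      # Single big LUN -> LVM -> ext4, with VMs as plain disk-image files
      # (device names and paths are hypothetical):
      pvcreate /dev/mapper/netapp_lun0
      vgcreate vg_images /dev/mapper/netapp_lun0
      lvcreate -l 100%FREE -n lv_images vg_images
      mkfs.ext4 /dev/vg_images/lv_images
      mount /dev/vg_images/lv_images /var/lib/libvirt/images

      # Each VM disk is now just a file, as it would be on NFS:
      qemu-img create -f qcow2 /var/lib/libvirt/images/vm01.qcow2 20G
      ```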

      Either one of these suggestions can still take advantage of dedupe, SnapShots, thin provisioning, etc.

      Hope this answers your question,


Agree? Disagree? Something to add to the conversation?