Deploying RHEV 3.6 pt3 (Storage)

Hi folks,

This is a very quick follow-up to my last post, as I want to keep you moving along in your endeavors. We’ve deployed RHEV-M and added a RHEL hypervisor, but before we can provision any VMs, we need to attach storage. In this case, we’re going to attach NFS.

Let’s get started. For anyone who has followed this blog for any length of time, you know I have a clear bias toward Ethernet storage. Fibre Channel is fine if that’s what you have. It’s solid and it’s dependable. But it also requires specialized equipment and specialized knowledge that don’t typically translate well into automation, orchestration, or even the long-term storage road map. (See my opinion on Ethernet storage vs. FC storage.)

In this post, I’ll focus on NFS only because it’s what I have set up in my lab. The three big things that you’ll want to keep in mind for NFS storage are IPtables, NFS export ownership, and host lookup. All are critical if you want to keep your frustration level down. If all three are set properly, RHEV will mount the NFS storage without issue.

In order for IPtables and NFS to get along, we need to pin down the NFS ports; otherwise they tend to jump around. It’s as simple as configuring the “/etc/sysconfig/nfs” file, then IPtables. The NFS server itself needs the nfs and rpcbind ports open, while the helper daemons (mountd, statd, lockd, and rquotad) each need their own fixed port. Once you configure the file, you’ll want to restart the NFS service. In production, I highly recommend that an enterprise NFS server be used for performance and backups, but a lab or test environment can handle a Linux-based NFS server just fine. Here are the port and IPtables assignments (a sample configuration follows the list):

  1. Allow TCP and UDP port 2049 for NFS.
  2. Allow TCP and UDP port 111 (rpcbind/sunrpc).
  3. Allow the TCP and UDP port specified with MOUNTD_PORT="892".
  4. Allow the TCP and UDP port specified with STATD_PORT="662".
  5. Allow the TCP port specified with LOCKD_TCPPORT="32803".
  6. Allow the UDP port specified with LOCKD_UDPPORT="32769".
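
To make that concrete, here’s a minimal sketch for a RHEL-based NFS server using the example port values above (rquotad can be pinned the same way if you need it; adjust the ports to suit your environment):

```bash
# Pin the helper daemons by adding/uncommenting these lines in /etc/sysconfig/nfs:
#   MOUNTD_PORT=892
#   STATD_PORT=662
#   LOCKD_TCPPORT=32803
#   LOCKD_UDPPORT=32769

# Then open the ports in IPtables on the NFS server, save, and restart NFS
for p in 2049 111 892 662 32803; do
    iptables -I INPUT -p tcp --dport "$p" -j ACCEPT
done
for p in 2049 111 892 662 32769; do
    iptables -I INPUT -p udp --dport "$p" -j ACCEPT
done
service iptables save
service nfs restart
```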

As for the directory ownership, the export needs to be owned by vdsm:kvm (UID and GID 36). A simple `chown 36:36 /export/directory` on your NFS server before any NFS mount attempts will do the trick.
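
For example, a lab export might look something like this; the path, network, and export options are just placeholders for your environment, not requirements:

```bash
# /etc/exports -- hypothetical data domain export
#   /export/rhev_data  192.168.1.0/24(rw,sync,no_root_squash)

# Set ownership and permissions before RHEV ever tries to mount it, then re-export
chown 36:36 /export/rhev_data
chmod 0755 /export/rhev_data
exportfs -ra
```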

Host lookup is best handled by DNS in production, but a simple hosts file configuration will do the trick as well in a small lab environment.
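
If you go the hosts file route, RHEV-M, each hypervisor, and the NFS server should all be able to resolve one another; the names and addresses below are purely hypothetical:

```bash
# /etc/hosts -- same entries on RHEV-M, the hypervisors, and the NFS server
192.168.1.10   rhevm.lab.example.com    rhevm
192.168.1.11   rhevh01.lab.example.com  rhevh01
192.168.1.20   nfs01.lab.example.com    nfs01
```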

Let’s move into the demo itself. Again, we have to have at least one hypervisor to mount the storage; RHEV-M is just the broker. I intentionally fail to set the NFS export ownership just to show what the error looks like. From there, the NFS export is mounted and activated. Once the data domain is activated, we can activate the ISO domain and I show the process of uploading an ISO image.

That’s the second demo for the week. Next up, we start pumping out VMs!

hope this helps,

Captain KVM

4 thoughts on “Deploying RHEV 3.6 pt3 (Storage)”

  1. Hi,
    While RHEV works well with NFS, and NFS is easy to set up, in some cases RHEV can deliver dramatically better performance on iSCSI.
    We have a lab setup where iSCSI is configured alongside NFS. On that setup, running I/O-intensive tasks such as template cloning on NFS can take as much as twice the time it takes on iSCSI.

  2. Great blog, Captain.
    I have a question regarding NetApp and RHEV-M. I have been trying to import an iSCSI storage domain from our production site to our DR site.
    The iSCSI storage domain I am importing is from a replicated volume mirrored to our DR NetApp appliance.
    I have scoured the internet for clues on how to successfully import the said volume, which has some VMs on it.
    I was wondering if this scenario is familiar to you and whether you’ve dealt with it before.
    Cheers

    1. Hi Christopher,

      Sincere apologies for the delayed response… the final days before and during Red Hat Summit take a lot. To your question, you are seriously jogging my memory from my NetApp days! 🙂 There are definitely a few things to consider. You need to know that in RHEV 3.x, you’re not going to (easily) just import a datastore of VMs (don’t panic!). I have an old NetApp Technical Report (TR) that defines how to do DR with NetApp and RHEV 3.x. In short, I recommend you have a separate datastore for your RHEV-M (and related services); that way you can fail over your RHEV-M, bring it up, then bring up your production VMs. NFS is easy – you simply break the SnapMirror relationship, then make your “read-only” volume “read-write”. SAN requires an extra step that is VERY important – each LUN has a unique ID, and clones don’t get the original ID, even for DR. So in the case of RHEV 3.x you need to include these steps in your DR procedures:
      1. Log the IDs for the LUNs at site A (primary) and site B (DR) – see the quick sketch below this list.
      2. Before you make the DR LUN active, give it the LUN ID of the primary LUN it is standing in for.
      3. Bring up storage at DR.
      4. Bring up RHEV-M at DR.
      5. Bring up the production clusters at DR.
      6. Fail-back procedures are in reverse order.
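
      For step 1, one quick way to capture those IDs is from a hypervisor at each site and cross-check them against the LUN serial numbers reported on the NetApp side; the device name below is just a placeholder:

      ```bash
      # Record the WWIDs the multipath layer sees for the RHEV LUNs at each site
      multipath -ll

      # Or query a single device directly (RHEL 7 path shown; /dev/sdb is a placeholder)
      /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
      ```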

      You can do a Google search for “NetApp TR RHEV” and you’ll find some older documents that write it all out… the versions of NetApp ONTAP and RHEV will be out of date, but those procedures were solid. You may also want to check with NetApp to see if you still need to do that. Also know that RHEV 3.x is retired.

      Hope this helps,

      Captain KVM
