Deploying RHEV 3.6 pt3 (Storage)

Hi folks,

This is a very quick follow-up to my last post, as I want to keep you moving along in your endeavors. We’ve deployed RHEV-M, we’ve added a RHEL hypervisor, but before we can provision any VMs, we need to attach storage. In this case, we’re going to attach NFS.

Let’s get started. For anyone who has followed this blog for any length of time, you know I have a clear bias for Ethernet storage. Fibre Channel is fine if that’s what you have. It’s solid and it’s dependable. But it also requires specialized equipment and specialized knowledge that don’t typically translate well into automation, orchestration, or even the long-term storage roadmap. (See my opinion on Ethernet storage vs. FC storage.)

In this post, I’ll focus on NFS only because it’s what I have set up in my lab. The three big things that you’ll want to keep in mind for NFS storage are iptables, NFS export ownership, and host lookup. All three are critical if you want to keep your frustration level down; if all three are set properly, RHEV will mount the NFS storage without issue.

In order for iptables and NFS to get along, we need to pin down the NFS ports; otherwise they tend to jump around between restarts. It’s as simple as configuring the “/etc/sysconfig/nfs” file, then iptables. The NFS server itself needs the nfs and rpcbind ports open, while the client-side operations need mountd, statd, lockd, and rquotad. Once you configure the file, you’ll want to restart the NFS services. In production, I highly recommend that an enterprise NFS server be used for performance and backups, but a lab or test environment can handle a Linux-based NFS server just fine. Here are the port and iptables assignments, with a sample configuration after the list:

  1. Allow TCP and UDP port 2049 for NFS.
  2. Allow TCP and UDP port 111 (rpcbind/sunrpc).
  3. Allow the TCP and UDP port specified with MOUNTD_PORT="892".
  4. Allow the TCP and UDP port specified with STATD_PORT="662".
  5. Allow the TCP port specified with LOCKD_TCPPORT="32803".
  6. Allow the UDP port specified with LOCKD_UDPPORT="32769".
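
Here’s a minimal sketch of what that looks like on a RHEL 6-based lab NFS server. The rquotad port (875) is the common default rather than something from the list above, and the rule placement is an assumption you should adapt to your own firewall layout:

```
# /etc/sysconfig/nfs -- pin the ports that otherwise float
MOUNTD_PORT="892"
STATD_PORT="662"
LOCKD_TCPPORT="32803"
LOCKD_UDPPORT="32769"
RQUOTAD_PORT="875"     # assumed default; only matters if you use quotas
```

And the corresponding firewall rules and service restarts:

```
# Open the NFS-related ports; -I inserts ahead of any final REJECT rule
for p in 2049 111 892 662 875; do
  iptables -I INPUT -p tcp --dport $p -j ACCEPT
  iptables -I INPUT -p udp --dport $p -j ACCEPT
done
iptables -I INPUT -p tcp --dport 32803 -j ACCEPT   # lockd TCP
iptables -I INPUT -p udp --dport 32769 -j ACCEPT   # lockd UDP
service iptables save

# Restart so the pinned ports take effect (RHEL 6 style; use systemctl on RHEL 7)
service rpcbind restart
service nfs restart
```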

As for the directory ownership, the export needs to be owned by UID and GID 36 (the “vdsm” user and “kvm” group on the hypervisor). A simple `chown 36:36 /export/directory` on your NFS server before any NFS mount attempts will do the trick.
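
If you’re building the export from scratch, the whole thing looks something like this; the path and subnet are placeholders for your own values:

```
# Create the export and hand it to UID/GID 36 (vdsm:kvm on the hypervisor)
mkdir -p /export/rhev-data
chown 36:36 /export/rhev-data

# /etc/exports -- export read/write to the lab subnet, e.g.:
#   /export/rhev-data   192.168.1.0/24(rw)

exportfs -r    # re-read /etc/exports without restarting NFS
```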

Host lookup is best handled by DNS in production, but a simple hosts file configuration will do the trick as well in a small lab environment.
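
For the lab, that just means every box can resolve every other box. Something like this in `/etc/hosts` on RHEV-M, the hypervisor, and the NFS server; the names and addresses here are made up for illustration:

```
# /etc/hosts -- same entries on every host in the lab
192.168.1.10   rhevm.lab.example.com   rhevm
192.168.1.11   kvm1.lab.example.com    kvm1
192.168.1.20   nfs1.lab.example.com    nfs1
```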

Let’s move into the demo itself. Again, we need at least one hypervisor to mount the storage; RHEV-M is just the broker. I intentionally fail to set the NFS export ownership just to show what the error looks like. From there, the NFS export is mounted and activated. Once the data domain is activated, we can attach and activate the ISO domain, and I show the process of uploading an ISO image.
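
If you want to follow along with the ISO upload, RHEV-M ships with an uploader tool for exactly this; a quick sketch, where the domain name and ISO path are examples you’ll swap for your own:

```
# List the available ISO domains, then push an image into one
rhevm-iso-uploader list
rhevm-iso-uploader upload -i ISO_DOMAIN /tmp/rhel-server-7.2-x86_64-dvd.iso
```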

That’s the second demo for the week. Next up, we start pumping out VMs!

hope this helps,

Captain KVM

2 thoughts on “Deploying RHEV 3.6 pt3 (Storage)”

  1. Hi,
    While RHEV works well with NFS and is easy to set up, in some cases it can deliver dramatically better performance on iSCSI.
    We have a lab setup where iSCSI is configured alongside NFS. On that setup, running I/O-intensive tasks such as template cloning on NFS can take as much as twice the time it takes on iSCSI.
