Using NetApp Cinder Drivers with OpenStack

Hi folks,

Earlier in the week I talked about how NetApp and OpenStack weren’t such an odd coupling of technologies. I wanted to take things just a little further in this post and show you how to use NetApp storage with OpenStack. Specifically, we’re going to use NetApp storage for Cinder and Glance. And while Glance won’t require anything special, Cinder will require a little driver configuration.

Let’s get to it.

I’m going to make a huge assumption that there is NetApp storage already configured – be it iSCSI or NFS, and “7-mode” or “Clustered ONTAP”. Also, be sure to enable deduplication and thin provision the NetApp volumes. Both will help your storage maintain its girlish figure. If you have a bunch of VMs built from the same image, the only differences between them are things like hostname, IP info, and maybe some log files (ok, a little more than that, but you get the idea). All of the binaries are the same. Deduplication will fold all of the duplicate storage blocks back into the available storage, so 20 VMs built from the same image will use up MUCH less storage.
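
If you need to flip those switches yourself, the commands look something like this – the volume and Vserver names below are just placeholders, so check with your storage admin:

# 7-mode: thin provision the volume, then enable and run deduplication
filer> vol options cinder_vol guarantee none
filer> sis on /vol/cinder_vol
filer> sis start -s /vol/cinder_vol

# Clustered ONTAP: same idea, different syntax
cluster::> volume modify -vserver vs1 -volume cinder_vol -space-guarantee none
cluster::> volume efficiency on -vserver vs1 -volume cinder_vol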

I should also mention that I used RHEL-OSP 3.0 (Grizzly), but this should work with anything “Folsom” and beyond. It’s also worth noting that the four drivers described in this article are all fully supported by Red Hat in their RHEL-OSP offering; the certifications were finalized this week. The drivers allow Cinder to use block or file storage from NetApp and take full advantage of offloading storage activities like cloning and snapshotting to the storage controller.

To configure Cinder to use the NetApp drivers, copy one of the four blocks below to the bottom of /etc/cinder/cinder.conf, taking note of the name in the brackets; you’ll use that exact name in a step below. Also, there’s a big difference between “7-mode” and “Clustered ONTAP” (aka cmode, aka cDOT), so if you’re unsure, ask your storage admin.

[7modeDirectiSCSI]
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
netapp_server_hostname=<management IP addr>
netapp_server_port=<80 or 443>
netapp_login=<mgmt. account, typically 'root'>
netapp_password=<mgmt. account password>

[7modeDirectNFS]
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
netapp_server_hostname=<management IP addr>
netapp_server_port=<80 or 443>
netapp_login=<mgmt. account, typically 'root'>
netapp_password=<mgmt. account password>
nfs_shares_config=<file containing NFS export, such as /etc/cinder/shares.conf>

[cmodeDirectNFS]
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
netapp_server_hostname=<IP or hostname of cDOT admin access>
netapp_server_port=<80 or 443>
netapp_login=<login for admin account, typically 'admin'>
netapp_password=<password for admin acct>
nfs_shares_config=<file containing NFS export, such as /etc/cinder/shares.conf>

[cmodeDirectiSCSI]
volume_driver=cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
netapp_server_hostname=<IP or hostname of cDOT admin access>
netapp_server_port=<80 or 443>
netapp_login=<login for admin account, typically 'admin'>
netapp_password=<password for admin acct>

Again, no need to copy all of them, just the one you’re going to use. Then simply edit the variables to match your environment.
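
For example, a filled-in [cmodeDirectNFS] stanza might look something like this – the IP, port, and credentials below are made up, so obviously use your own:

[cmodeDirectNFS]
volume_driver=cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
netapp_server_hostname=172.20.45.10
netapp_server_port=80
netapp_login=admin
netapp_password=secret
nfs_shares_config=/etc/cinder/shares.conf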

If you’re using one of the NFS drivers, you also need to create the /etc/cinder/shares.conf file, containing a single line with the <nfs_ip>:<export_path>, such as:

# cat /etc/cinder/shares.conf
172.20.45.50:/CinderNFS

Finally, at the top of the /etc/cinder/cinder.conf file, right under [DEFAULT], add the line “enabled_backends=<driver_name>”. It should match the name in brackets from the driver block you copied:

######################
# cinder.conf sample #
######################

[DEFAULT]
enabled_backends=cmodeDirectNFS

Then restart the volume service:

# service openstack-cinder-volume restart

If you’re using either of the NFS drivers, you can run the “mount” command, and the NFS export should be mounted.
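
For example, with the cmodeDirectNFS driver you should see something like the line below. Cinder mounts the share under its state path (/var/lib/cinder/mnt by default) in a directory named after a hash of the share; the hash and addresses here are just examples:

# mount | grep cinder
172.20.45.50:/CinderNFS on /var/lib/cinder/mnt/6e3be44131c8824e48a4a80e5d4e5706 type nfs (rw,addr=172.20.45.50)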

And regardless of which storage protocol you are using, you should be able to run some basic Cinder commands to verify that you’re up and running:

# cinder create --display-name demoStore 2
# cinder list
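
The first command creates a 2GB volume named “demoStore”. After a few seconds it should show a status of “available” in the output of the second, roughly like this (the exact columns vary a bit between releases, and your ID will differ):

+--------------------------------------+-----------+--------------+------+-------------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 16aaf37a-85e5-4a52-9b43-78a0d0d36a04 | available |  demoStore   |  2   |     None    |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+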

Now, let’s move on to Glance. In RHEL-OSP, there is no special configuration beyond mounting the NFS storage. There may or may not be extra requirements in other OpenStack distros.

Assuming that your NFS export is already configured, simply mount the export to the images directory:

# mount <NetApp_IP>:<NFS_export> /var/lib/glance/images
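
If you want that mount to persist across reboots, add it to /etc/fstab as well, something along these lines (the IP and export name here are just placeholders):

172.20.45.50:/GlanceNFS  /var/lib/glance/images  nfs  defaults  0 0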

And like the NetApp storage for Cinder, be sure to thin provision the NFS export and enable deduplication on the volume.

As far as NetApp licenses go, you’ll need Base and FlexClone in all configurations, plus NFS and/or iSCSI depending on which storage protocol (or both) you’re using. And if you’re using 7-mode vFilers, you’ll need the MultiStore(R) license.
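
If you’re not sure what’s already licensed, you can check from the storage controller itself with the stock ONTAP commands for listing licenses:

filer> license                    (7-mode)
cluster::> system license show    (Clustered ONTAP)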

So that’s it, really. If you want to read more about the NetApp drivers, check both of these resources:

  • http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/netapp-volume-driver.html
  • https://communities.netapp.com/docs/DOC-24892

hope this helps,

Captain KVM

P.S. – keep your eyes peeled for a new OpenStack project called “Manila”. The Cinder project was originally meant for block storage, i.e., “cinder blocks”. This is where NetApp developed its drivers for both block and file, but we’re in the process of breaking the file services drivers out into their own project. And what does a manila folder do? It holds files….
