OpenStack Installer pt8 – Full HA

Hi Folks,

We’re finally here. The Big Enchilada. The Full Monty. The Full HA. It’s actually not as scary as it seems. But it should be… what I mean is that we should take the time machine back a year so that you could do all of the HA stuff manually. That way you could feel the pain of your own typos as you set up 100 different services under cluster control. The trepidation of starting things up after 3 weeks of following directions to the “T”, wondering “is this going to work?” or “am I going to drop to the floor in the fetal position as my co-workers mock me?”

That way, you could see what a luxury 90 minutes for a 4 node cluster is.. 🙂

Ok, so if you’ve followed the series up to this point, you know that we’ve now upgraded to RHEL-OSP 6 and the newer RHEL-OSP Installer. One of the biggest differences and upgrades that you will see is that the previous version of the Installer did not deploy Ceilometer or Heat when HA was chosen; it only deployed those services in non-HA. In the new version, as described in today’s post, everything that is fully supported by Red Hat is deployed, and everything that is deployed is done so in an HA fashion.

BTW, write in the comments if you’ve upgraded and noticed some other differences.. like some things that have been taken out…

As usual, I’ve included a recorded demo/walk-through that I think you’ll find helpful. Still, a couple of pointers might be in order. Just as with anything else new and/or complicated, stick with the defaults and easy stuff first. Then make incremental changes. Build on your knowledge. That way, you’ll know what changed. “It worked when I chose VXLAN but not GRE”… not that there should be an issue there… My point is that OpenStack is complicated enough as it is; don’t go full gusto and throw in all the bells and whistles all at once. Besides, you can’t learn it all at once. Get comfortable with installing it, then go after your natural interest (networking, storage, app deployment, etc..)

Anyway, back to the deployment. In the non-HA deployment, I used local storage. In the HA deployment, I went with NFS. It’s fairly straightforward, and in the enterprise, if you’re using NetApp, it’s awesome… (The NetApp Cinder drivers for block and file are great.) I just set up exports on my Installer host:

# cat /etc/exports
/cinder 192.168.200.0/24(rw,no_root_squash)
/glance 192.168.200.0/24(rw,no_root_squash)

# systemctl enable nfs-server.service
# systemctl enable rpcbind.service
# systemctl restart rpcbind.service
# systemctl restart nfs-server.service
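
If you want a quick sanity check that both shares are actually being exported before moving on, something like this on the Installer host will do it (exportfs and showmount both ship with nfs-utils, so they’re already there):

# exportfs -v
# showmount -e localhost

You should see /cinder and /glance listed, each limited to 192.168.200.0/24.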

Oh, and go ahead and set up users for Cinder and Glance on the NFS server too… but be sure that they match the UIDs and GIDs that will be on the OpenStack nodes. Have the cinder group own the Cinder export and the glance group own the Glance export..

# id cinder
uid=165(cinder) gid=165(cinder) groups=165(cinder)
# id glance
uid=161(glance) gid=161(glance) groups=161(glance)
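
If those users don’t already exist on your NFS server, something along these lines will create them with matching IDs and hand the exports over to the right owners. (The -M/-s options are just my habit for service accounts, and 165/161 are simply what my OpenStack nodes ended up with, so double-check yours first.)

# groupadd -g 165 cinder
# useradd -u 165 -g 165 -M -s /sbin/nologin cinder
# groupadd -g 161 glance
# useradd -u 161 -g 161 -M -s /sbin/nologin glance
# chown cinder:cinder /cinder
# chown glance:glance /glance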

From there, just build it like you built the non-HA deployments. Have your subnets ready – a tunnel subnet and a public subnet. Be sure that all of your interfaces are recognized BEFORE you hit “deploy”. And remember, because your Controller nodes are your Neutron nodes too in this scenario, they need 3 interfaces each. Your Compute node(s) need 2 – one for PXE/Mgmt, one for tunnel.
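
Spelled out, the NIC-to-network layout looks like this (which physical NIC you hang on which network is up to you, as long as you assign them consistently in the Installer):

Controller/Neutron nodes:  NIC 1 -> PXE/Mgmt,  NIC 2 -> tunnel,  NIC 3 -> public
Compute nodes:             NIC 1 -> PXE/Mgmt,  NIC 2 -> tunnel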

The last thing you need to do before kicking it off is to make sure that all of your interfaces are configured properly. We’ve done this in the previous posts, and the video has a brief reminder, though only for one of the hosts..

Here we go (Remember, it’s better in full screen, just give it a second to focus):

Hope this helps,

Captain KVM

4 thoughts on “OpenStack Installer pt8 – Full HA”

    1. Actually, NFS is very much still a thing. When I left NetApp last June, file services were growing to the detriment of block. In the realm of cloud, FC doesn’t have much place, as it’s too hard to automate and/or virtualize, whereas iSCSI and NFS are happy to live within the existing TCP/IP stack. FC requires specialized switches, cabling, and training outside of the actual storage/network/cloud administration. NFS and iSCSI… you get the gist.

      Also, NFS provides easy access for multiple hosts without the need for a clustered FS.. so there’s that as well. Then there are the arguments about FC being faster and more secure, but that’s a fallacy. If you apply the same rules/policies to your Ethernet storage network as you do to your FC storage network, it’s just as secure. As for speed, 10GbE is pretty darn fast, with 40GbE available, and even faster on the horizon. FC can’t go beyond 16Gb without a standards re-write that won’t likely be backwards compatible.

      NFS is alive and well, good sir!

      Captain KVM

  1. Do you have more information on the configuration of the three interfaces on the network nodes? I saw that you had the three networks defined, but would like to get more information on how to set them up, and how to add the three interfaces to the physical servers.

    I saw that you were using enp0s2u1 / enp0s2u2. Are these virtual NICs on the card?

    Thanks!

    1. Hi there,

      Thanks for stopping by. I apologize for the delay in the response, as I’ve been traveling. In my case, the 3 interfaces are all physical. The reason the 2nd and 3rd interface names look so funky (enp0s2u1 and 2) is because they happen to be USB-to-Ethernet dongles. That’s just how they got named. If you have an onboard NIC and 2 PCI cards, you’ll have NIC names that are a little more recognizable.

      Hope this helps,

      Captain KVM
