OpenStack Installer (for RHEL-OSP) pt1

Hi folks,

I’m starting a new multi-part series on deploying the OpenStack Installer from Red Hat. Hopefully, this makes some folks very happy.. Undoubtedly, it will leave others confused, and still others wondering what the big deal is.

Here’s the short answer as to why I’m headed in this particular direction: It’s a strategic, albeit temporary, direction for Red Hat.

(If you want to skip towards the end for how my particular lab is configured, go for it, but you’ll miss the back story..)

Backstory

Allow me to illuminate.. In the beginning, we just had “PackStack”, which was created for a simple, “all-in-one” deployment or basic POC. But as OpenStack (and RHEL-OSP) evolved, PackStack did not scale with it. Somewhere along the line, some folks in Red Hat Consulting fell in love with Ansible as an installer, but it’s not officially blessed by Red Hat support, regardless of how well it works (or doesn’t). In 2014, Red Hat acquired a phenomenal company called eNovance, who brought with them a killer install tool called SpinalStack; however, it is fairly involved just getting it set up… but it will do whatever you want it to. (I hear it makes the best smoothies while you wait.) (just kidding.)

But then, none of these tools are Red Hat tools, and more importantly, none of these tools are even “OpenStack” tools. There aren’t any native OpenStack tools to install OpenStack right now. Ubuntu has DevStack and is working on JuJu. Mirantis has Fuel. Piston has… whatever Piston has. I just told you what Red Hat has, and I’m about to tell you about 2 more…

But first, let me take you through a brief history lesson. Remember compiling your own Linux kernel because you HAD to? Not because you wanted to, but because it didn’t come compiled? Most of you would say, “no”. And when did you last have to pick and choose your packages carefully based on dependencies and such? If you were brought up in the Red Hat world, most of you have been spoiled by “Anaconda”, which is a slick, if imperfect, installer for Red Hat Enterprise Linux, CentOS, and Fedora, among others. The point is this: there is fun “geeking out” in hacking your own kernel, and then there is drudgery in “having” to figure out what works with what manually.

In other words, not many people want to be an expert in deploying Linux or WebSphere or Sybase manually. They want to be an expert in running their app and/or business in an uneventful manner that allows them to live a decent life for a decent wage. And that means automating things.

Same thing with OpenStack – yes we want to learn all we can about the “new shiny”, but at the end of the day, we don’t want to be experts in deploying OpenStack manually, especially if the end goal of OpenStack is to automate things.. that’s just wrong.

Enter “RHEL-OSP Installer” (or just OpenStack Installer) and “OpenStack on OpenStack” (or OOO or TripleO). TripleO is meant to be a native installer for OpenStack. Lots of folks are involved in getting it up and running, but it’s not likely going to be out for 6 months to a year – it’s still in OpenStack Incubation. HP and Red Hat are the big backers for it, but don’t let that scare you.. it really is supposed to help regardless of the distro/flavor of OpenStack.

That’s why we’re working on the RHEL-OSP Installer, or OpenStack Installer. It’s the stop gap. We can’t expect folks to wait another 6 months plus. We can’t say, “RHEL-OSP is better, once you get past the crappy install experience.” It’s just not realistic. And I’ve had too many of my own customers tell me exactly that – “yeah, it’s great, but it’s not a good install experience. Once it’s up, it’s up, but damn…”

The Captain’s Lab

So what do I have, why do I have it, and how is it set up to support some common RHEL-OSP deployments? I thought you’d never ask…

My lab is built around some fairly new, but relatively inexpensive, stuff. Some of it is physical, some of it virtual. I’ll share lessons learned the entire way.

  • 1 x Tower with 8GB RAM, Intel i7 multi-core CPU, 1TB hard drive, 2 x 1Gb NICs – OK, so originally, this was supposed to be the majority of the lab right here… Virtualize everything, dammit!! But it just didn’t work out that way. Nested virtualization presented its own challenges, especially around networking (there’s a quick sketch after this list for checking whether nested virtualization is even enabled). Not that I can’t solve it, but my primary purpose for the lab is to learn OpenStack Networking as my customers will see it in their deployments… Most of them won’t be nesting their deployments… The other reason is to share the knowledge with you kind folks.
  • 4 x Intel NUC systems with 8GB RAM, Intel i3 multi-core CPU, 60GB mSATA drive, 1 x onboard 1Gb NIC, various USB-to-1Gb NIC dongles – these are killer. Fairly inexpensive at about $350 each depending on what components you get.. Amazon and Newegg have roughly the same prices and both have free shipping for ‘prime’ customers. The onboard NICs are PXE bootable, which is required; the USB-to-NIC dongles are about $20 apiece, but worth it as you can expand to 2 or 3 NICs depending on the role of that particular NUC. The best part is, they don’t take up much space or power, and they don’t throw off much noise or heat.
  • Switches – I originally purchased a 24 port managed switch to get VLANs and lots of other features, but honestly I think it’s overkill. Especially on price. I’ve arranged to have it returned in favor of 3 smaller 8 port switches. Think about it.. in this particular home lab, all you really want to do is keep your different DHCP servers from competing with one another, and only 1 needs to connect to the outside world. In that particular case I have a wired/wireless router that NATs to the outside world. So, I’m getting money and space back, and shedding complexity.
  • KV&M – Notice I put the “&” in there?? 🙂 I happen to have an old monitor left over and the USB keyboard and mouse that came with the tower.. so I have to switch between systems manually periodically… (like at boot time to hit f12 to PXE boot..)
  • What would I buy for the lab if a Newegg gift cert dropped in my lap? A 6-8 port USB KVM switch so I wouldn’t have to switch between the systems manually…
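
By the way, if you want to check whether your Intel box even has nested virtualization enabled before going down that road, here’s a minimal Python sketch. It just reads the kvm_intel module parameter; the path assumes an Intel host with the module loaded (AMD hosts use kvm_amd instead), and it’s just my quick sanity check, not anything the installer requires.

```python
# Minimal sketch: check whether nested virtualization is enabled on an
# Intel KVM host. Assumes the kvm_intel module is loaded; AMD hosts use
# /sys/module/kvm_amd/parameters/nested instead.
import os

NESTED_PARAM = "/sys/module/kvm_intel/parameters/nested"

def nested_virt_enabled():
    if not os.path.exists(NESTED_PARAM):
        return False  # module not loaded (or not an Intel host)
    with open(NESTED_PARAM) as f:
        value = f.read().strip()
    return value in ("Y", "1")  # older kernels report Y/N, newer ones 1/0

if __name__ == "__main__":
    print("Nested virtualization enabled: %s" % nested_virt_enabled())
```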

Ok, so what about the Network Layout?

That’s fairly straightforward. We’re generally working with 3 different networks here:

  • External network – Internet, Floating IPs, outside world, etc. Only the public-facing instances and the public interface of the OpenStack Installer have access here. In my lab it’s a 10.0.1.0/24 subnet, and it gets NAT’d at the cable router. At a real customer site, it would likely be a “live DMZ”.
  • PXE, Management, “default” network – All nodes have access here. In my lab it’s a 192.168.200.0/24 subnet. At a real customer site, it would likely be a private, locked down, non-routable network.
  • Tunnel network – Only Instances and the Neutron server (or Cloud Controller, if the Neutron server is collapsed into the Cloud Controller in a “2 node” configuration) have access here. This is private tunneled traffic, typically handled by GRE or VXLAN. In my lab, it’s a 192.168.100.0/24 subnet (corrected from an earlier 192.168.300.0/24 typo; thanks, Bryce!!). At a real customer site, it would likely be a private, locked down network. It may be routable if it needs to go cross-site. (Optionally, it could also be shared with the management network..)
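
To make the layout a bit more concrete, here’s a minimal Python sketch of those three subnets. The labels are just mine for this post (nothing in the installer requires them), and as a bonus, the ipaddress module rejects an invalid subnet like 192.168.300.0/24 outright.

```python
# Minimal sketch of the three lab networks described above. The names are
# just labels for this post; nothing in the installer requires them.
import ipaddress

LAB_NETWORKS = {
    "external":   "10.0.1.0/24",       # floating IPs, NAT'd at the cable router
    "management": "192.168.200.0/24",  # PXE / provisioning / "default" network
    "tunnel":     "192.168.100.0/24",  # GRE/VXLAN tenant traffic
}

for name, cidr in LAB_NETWORKS.items():
    # ip_network() raises ValueError on a bogus subnet (e.g. 192.168.300.0/24)
    net = ipaddress.ip_network(cidr)
    print("%-12s %-18s %d usable hosts" % (name, cidr, net.num_addresses - 2))
```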

[Slide2: lab network layout diagram]

UPDATE 12/19/2014 – I’ll have to redraw the diagram above. Essentially, even virtualizing the RHEL-OSP controller and Repo presented some network issues that didn’t rear their ugly heads until the very end of the OpenStack deployment.. very disappointing.. While I generally like to troubleshoot something and see it through, I simply don’t have the time right now to do that. As I stated before, my focus here is two-fold – mimic deployments that my customers are likely to see, and share knowledge with my readers.

The next post will hopefully include actual recorded footage of me deploying the installer, as well as me walking through how the tower (KVM server) was set up to support the VMs, and why I chose to virtualize the OpenStack Installer and the RHEL7 Yum Repo.

Captain KVM

7 thoughts on “OpenStack Installer (for RHEL-OSP) pt1”

  1. 192.168.300.0/24 is not an IPv4 address. I assume you mean 192.168.200.0/24, and the “Tunnel network traffic” is carried over the physical PXE/management/default network. Networking has been (and still is) the hardest thing for me to wrap my mind around, so thanks for your post and efforts to explain!

    1. Bryce,

      You are absolutely correct and get a gold star for catching my typo. “x.x.300.x” is not valid for anything. For my PXE traffic, it should be 192.168.200.x, and for the tunnel traffic it should be 192.168.100.x. That way, you can create a new subnet (and VXLAN tunnel) for each new customer that comes your way, whether you are a hosting provider and the customers are Pepsi and Coke, or you are running a private cloud and the customers are Marketing and Sales. You are not alone in feeling challenged on the networking piece!! Most of my customers are challenged as well.
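
      Just to sketch what I mean (a rough illustration only, not a copy/paste recipe): with python-neutronclient, one network and subnet per customer looks roughly like the snippet below. The customer names, CIDRs, and credentials are made up, and I’m assuming VXLAN is already configured as the tenant network type, so each new network gets its own tunnel segment automatically.

      ```python
      # Rough illustration only: one tenant network + subnet per "customer".
      # Names, CIDRs, and credentials are made up; assumes VXLAN is the
      # configured tenant network type, so each network gets its own segment.
      from neutronclient.v2_0 import client

      neutron = client.Client(username="admin",
                              password="changeme",
                              tenant_name="admin",
                              auth_url="http://192.168.200.10:5000/v2.0")

      CUSTOMERS = {
          "pepsi": "192.168.110.0/24",
          "coke":  "192.168.120.0/24",
      }

      for name, cidr in CUSTOMERS.items():
          net = neutron.create_network({"network": {"name": "%s-net" % name}})
          neutron.create_subnet({"subnet": {
              "network_id": net["network"]["id"],
              "cidr": cidr,
              "ip_version": 4,
              "name": "%s-subnet" % name,
          }})
      ```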

      I’ll go back to the article and make the changes and give you credit.

      Captain KVM

    2. Bryce,

      2nd reply – As promised, I’ve corrected the article and given you credit. Thanks for pointing out the typo. As you work through the posts and the upcoming videos, you’ll see the 192.168.100.0 subnet used for the tunnel.

      thanks again,

      Captain KVM

  2. Sorry to bother you again, but could you identify what is the “upstream” of the RHEL-OSP Installer used in your blog posts? Staypuft? openstack-foreman-installer? How would one follow along using CentOS, foreman/katello, and friends?

    1. Bryce,

      You are absolutely not bothering me. I ~love~ getting comments and questions. Honest. Yes, the upstream is the openstack-foreman-installer. “Staypuft” is the codename for the RHEL-OSP-Installer. By the way, once you get it up and running, you’ll see that you could easily set up install trees for CentOS. As I mention in the early posts, the RHEL-OSP-Installer is really a “stop gap” tool. Meaning that the “packstack” tool was great for setting up a small environment for folks to kick the tires, but couldn’t do anything with “HA deployments”, upgrades, maintenance, etc. The RHEL-OSP-Installer added a very nice front end and workflow approach to things, and satisfied the demand for “HA deployments”, but it still doesn’t help with upgrades or maintenance.

      The so called “converged installer” that is due out in the June/July timeframe is supposed to handle all of that, and then some. We refer to it as “converged” because we actually have several tools in use right now, and want to take the best of each and put them together. Kind of a “one ring to rule them all”… 🙂 With any luck, I’ll start posting about that soon. But I’d like to get some other posts in soon as well.

      Captain KVM

  3. How’s that converged installer coming? 🙂

    In your posts you mention that the RHEL-OSP installer got an upgrade to RHEL7. Do you know if there’s an upstream build of el7 staypuft somewhere? The instructions on https://www.rdoproject.org/Deploying_RDO_using_Foreman appear to be a bit dated (Havana!), but thumbing through the el7 repos for more recent RDO releases indicates that support for el6 is gone and the el7 repo is missing the openstack-foreman-installer package.

    Thanks in advance!

    1. Hi Bryce,

      The updated name is RDO Manager and the downstream name will be RHEL-OSP Director. If you do a “google” search for RDO Manager, it should be the first result. It’s still under heavy development… RHEL-OSP 7/RHEL-OSP Director should drop at the end of July, with a follow on from RDO/RDO Manager soon after.

      Captain KVM

Agree? Disagree? Something to add to the conversation?