I’m starting a new multi-part series on deploying the OpenStack Installer from Red Hat. Hopefully, this makes some folks very happy. Undoubtedly, it confuses others, and still others will wonder what the big deal is.
Here’s the short answer as to why I’m headed this way: it’s a strategic, albeit temporary, direction for Red Hat.
(If you want to skip ahead to how my particular lab is configured, go for it, but you’ll miss the back story.)
Allow me to illuminate. In the beginning, we just had “PackStack”, which was created for a simple, “all-in-one” deployment or basic POC. But as OpenStack (and RHEL-OSP) evolved, PackStack did not scale with it. Somewhere along the line, some folks in Red Hat Consulting fell in love with Ansible as an installer, but it’s not officially blessed by Red Hat support, regardless of how well or poorly it works. In 2014, Red Hat acquired a phenomenal company called eNovance, who brought with them a killer install tool called SpinalStack; however, it is fairly involved just getting it set up… but it will do whatever you want it to. (I hear it makes the best smoothies while you wait.) (Just kidding.)
But then, none of these tools are Red Hat tools, and more importantly, none of them are even “OpenStack” tools. There aren’t any native OpenStack tools to install OpenStack right now. Ubuntu has DevStack and is working on Juju. Mirantis has Fuel. Piston has… whatever Piston has. I just told you what Red Hat has, and I’m about to tell you about two more…
But first, let me take you on a brief history lesson. Remember compiling your own Linux kernel because you HAD to? Not because you wanted to, but because it didn’t come compiled? Most of you would say “no”. And when did you last have to pick and choose your packages carefully based on dependencies and such? If you were brought up in the Red Hat world, most of you have been spoiled by “Anaconda”, which is a slick, if imperfect, installer for Red Hat Enterprise Linux, CentOS, and Fedora, among others. The point is this: there is fun “geeking out” in hacking your own kernel, and then there is drudgery in “having” to figure out what works with what manually.
In other words, not many people want to be an expert in deploying Linux or WebSphere or Sybase manually. They want to be an expert in running their app and/or business in an uneventful manner that allows them to live a decent life for a decent wage. And that means automating things.
Same thing with OpenStack – yes, we want to learn all we can about the “new shiny”, but at the end of the day, we don’t want to be experts in deploying OpenStack manually, especially since the end goal of OpenStack is to automate things… that would just be wrong.
Enter the “RHEL-OSP Installer” (or just OpenStack Installer) and “OpenStack on OpenStack” (OoO, or TripleO). TripleO is meant to be a native installer for OpenStack. Lots of folks are involved in getting it up and running, but it’s not likely to be out for six months to a year – it’s still in OpenStack incubation. HP and Red Hat are the big backers, but don’t let that scare you… it really is supposed to help regardless of the distro/flavor of OpenStack.
That’s why we’re working on the RHEL-OSP Installer, or OpenStack Installer. It’s the stop gap. We can’t expect folks to wait another six months plus. We can’t say, “RHEL-OSP is better, once you get past the crappy install experience.” It’s just not realistic. And I’ve had too many of my own customers tell me exactly that: “Yeah, it’s great, but it’s not a good install experience. Once it’s up, it’s up, but damn…”
The Captain’s Lab
So what do I have, why do I have it, and how is it set up to support some common RHEL-OSP deployments? I thought you’d never ask…
My lab is built around some fairly new, but relatively inexpensive, gear. Some of it is physical, some of it virtual. I’ll share lessons learned along the way.
- 1 x tower with 8GB RAM, Intel i7 multi-core CPU, 1TB hard drive, 2 x 1Gb NICs – OK, so originally, this was supposed to be the majority of the lab right here… Virtualize everything, dammit!! But it just didn’t work out that way. Nested virtualization presented its own challenges, especially around networking. Not that I can’t solve it, but my primary purpose for the lab is to learn OpenStack networking as my customers will see it in their deployments… most of them won’t be nesting their deployments. The other reason is to share the knowledge with you kind folks.
- 4 x Intel NUC systems with 8GB RAM, Intel i3 multi-core CPU, 60GB mSATA drive, 1 x onboard 1Gb NIC, and various USB-to-1Gb-NIC dongles – these are killer. Fairly inexpensive at about $350 each, depending on what components you get. Amazon and Newegg have roughly the same prices, and both have free shipping for ‘prime’ customers. The onboard NICs are PXE bootable, which is required; the USB-to-NIC dongles are about $20 apiece, but worth it, as you can expand to 2 or 3 NICs depending on the role of that particular NUC. The best part is, they don’t take up much space or power, and they don’t throw off much noise or heat.
- Switches – I originally purchased a 24-port managed switch to get VLANs and lots of other features, but honestly, I think it’s overkill, especially on price. I’ve arranged to return it in favor of 3 smaller 8-port switches. Think about it: in this particular home lab, all you really want to do is keep your different DHCP servers from competing with one another, and only one network needs to connect to the outside world. In that particular case, I have a wired/wireless router that NATs to the outside world. So, I’m getting money, space, and complexity back.
- KV&M – Notice I put the “&” in there?? 🙂 I happen to have an old monitor left over, plus the USB keyboard and mouse that came with the tower… so I have to switch between systems manually periodically (like at boot time to hit F12 to PXE boot).
- What would I buy for the lab if a Newegg gift cert dropped in my lap? A 6-8 port USB KVM switch so I wouldn’t have to switch between the systems manually…
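As an aside, if you do want to take a crack at nesting on a tower like this, the first sanity check (on an Intel box) is whether the kvm_intel module has nested virtualization turned on. A minimal sketch, assuming a stock RHEL/CentOS/Fedora kernel:

```shell
# Check whether nested virtualization is enabled for Intel KVM.
# 'Y' (or '1' on older kernels) means guests can run KVM guests themselves.
if [ -f /sys/module/kvm_intel/parameters/nested ]; then
    cat /sys/module/kvm_intel/parameters/nested
else
    echo "kvm_intel module not loaded"
fi

# To enable it persistently (takes effect after reloading the module):
#   echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm_intel.conf
#   sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
```

Even with nesting enabled, the networking inside the nested layers is where the real pain lives – which is exactly why the NUCs earn their keep.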
Ok, so what about the Network Layout?
That’s fairly straightforward. We’re generally working with 3 different networks here:
- External network – Internet, floating IPs, outside world, etc. Only the public-facing instances and the public interface of the OpenStack Installer have access here. In my lab, it’s a 10.0.1.0/24 subnet, and it gets NAT’d at the cable router. At a real customer site, it would likely be a live DMZ.
- PXE, management, “default” network – all nodes have access here. In my lab, it’s a 192.168.200.0/24 subnet. At a real customer site, it would likely be a private, locked-down, non-routable network.
- Tunnel network – Only instances and the Neutron server (or the Cloud Controller, if the Neutron server is collapsed into it in a “2 node” configuration) have access here. This is private tunneled traffic, typically handled by GRE or VXLAN. In my lab, it’s a 192.168.100.0/24 subnet (originally written as 192.168.300.0/24 – thanks, Bryce!!). At a real customer site, it would likely be a private, locked-down network. It may be routable if it needs to go cross-site. (Optionally, it could also be shared with the management network.)
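To make the layout concrete, here’s a minimal sketch of RHEL 7-style ifcfg files for one of the NUCs that sits on both the management and tunnel networks. The interface names (eno1 for the onboard NIC, enp0s20u1 for a USB dongle) and the .11 host addresses are assumptions for illustration – yours will differ:

```shell
# Where to write the config files. Point this at
# /etc/sysconfig/network-scripts on a real node; it defaults to a
# local directory here so the sketch is safe to dry-run anywhere.
NETSCRIPTS=${NETSCRIPTS:-./network-scripts}
mkdir -p "$NETSCRIPTS"

# Onboard (PXE-bootable) NIC: the PXE/management network, 192.168.200.0/24.
cat > "$NETSCRIPTS/ifcfg-eno1" <<'EOF'
DEVICE=eno1
BOOTPROTO=static
IPADDR=192.168.200.11
PREFIX=24
ONBOOT=yes
EOF

# USB NIC dongle: the tunnel network, 192.168.100.0/24, for GRE/VXLAN traffic.
cat > "$NETSCRIPTS/ifcfg-enp0s20u1" <<'EOF'
DEVICE=enp0s20u1
BOOTPROTO=static
IPADDR=192.168.100.11
PREFIX=24
ONBOOT=yes
EOF
```

With the files dropped into the real network-scripts directory on a node, a `systemctl restart network` would bring both interfaces up with their static addresses. Note there’s deliberately no GATEWAY on either file – only the external network routes out.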
UPDATE 12/19/2014 – I’ll have to redraw the diagram above. Essentially, even virtualizing the RHEL-OSP controller and repo presented some network issues that didn’t rear their ugly heads until the very end of the OpenStack deployment… very disappointing. While I generally like to troubleshoot something and see it through, I simply don’t have the time right now. As I stated before, my focus here is two-fold: mimic deployments my customers are likely to see, and share knowledge with my readers.
The next post will hopefully include actual recorded footage of me deploying the installer, as well as a walkthrough of how the tower (KVM server) was set up to support the VMs, and why I chose to virtualize the OpenStack Installer and the RHEL7 yum repo.