It’s been several weeks (eons in internet time) since I posted anything. Holidays, shutdowns, health issues, and my day job have kept me away from the blogosphere. And Twittersphere as well. (Is that really a word??)
One of the other things that has kept me busy of late is the rebuilding of my KVM and RHEV lab in RTP. The abridged version is that my servers were attached to another group’s storage. Additionally, that storage was the “old school” version of Data ONTAP (now referred to as “7-mode”). My servers have been relocated to a different area of the lab that belongs to my group. More importantly, the storage is very much “new school” and is referred to as “Clustered ONTAP”. I’ve had a couple of posts on it in the last few months.. I highly recommend you check them out.
On to RHEV 3.1!! This was the real catalyst for moving my lab: hosting RHEV 3.1 on Clustered ONTAP. Some of my next few posts will cover different things related to many of the new features in RHEV 3.1 and how they work (or maybe don’t..) with Clustered ONTAP. So now that I’ve finished putting my lab back together again, I want to start sharing my thoughts.
- Installation – This was ~very~ straightforward. Register to RHN, subscribe to the relevant software channels, then use `yum` to install.
- Deployment – Again, this was ~easy~. `rhevm-setup` runs you through the initial setup.
- Configuration – Most of this was straightforward. The one piece that tripped me up was the new procedure for configuring the network interfaces on the hypervisor nodes.* Mostly, this was because I was going through the out-of-band connection to the server. Once I had it figured out, it was easy.**
- Deploying RHEV-H – This hasn’t changed much at all since the last release. Very easy.
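For reference, the install-and-deploy steps above boil down to just a few commands on the RHEV-M host. Treat this as a sketch: the channel label below is an assumption from memory, so check your own RHN entitlements for the exact name.

```shell
# Register the RHEL 6.3 host with Red Hat Network (interactive prompts)
rhn_register

# Subscribe to the RHEV-M 3.1 software channel
# (channel label is an assumption -- verify with `rhn-channel --available-channels`)
rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3.1

# Install RHEV-M and its dependencies
yum install -y rhevm

# Run the initial setup wizard (ports, database, admin password, etc.)
rhevm-setup
```

After `rhevm-setup` completes, the admin portal is reachable over HTTPS on the host and you can start adding hypervisors and storage domains.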
I only got this up and running this week and have ~just~ started deploying VMs, so this is really all I have to report so far.. And then I’m travelling for the next 2 weeks, so I won’t have the chance to start playing with features until the last week of January.. However, my initial impressions are really good. The interface remains clean and uncluttered. As stated above, the install, deployment, and configuration are straightforward and easy.
*BTW, the ability to change the MTU size for jumbo frames when you initially set up a logical network was one of the bugzillas that I submitted. Before, the only way to adjust MTU size was to go to each hypervisor and configure it manually..
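For anyone still on an older release, that manual per-hypervisor workaround looked roughly like this. The interface names here are illustrative (the `rhevm` management bridge is the usual default, but your logical network’s bridge and NIC will likely differ), so adjust to your own setup.

```shell
# On each hypervisor: set jumbo frames on the logical network's bridge
# and on its underlying NIC (names are illustrative -- adjust as needed)
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-rhevm
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-eth0

# Restart networking so the new MTU takes effect
service network restart

# Verify end-to-end: 8972-byte payload + headers = 9000 bytes, no fragmentation
ping -M do -s 8972 <storage-IP>
```

If the ping fails with a “message too long” error, something in the path (NIC, switch port, or storage interface) isn’t set for jumbo frames yet.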
**Yes, I could have looked at the instructions, but I like to see how intuitive things are.. If I’m forced to look something up, then it’s likely not intuitive.
As far as how I deployed it, I stuck to my guns. I typically deploy 2 RHEL+KVM hosts (RHEL 6.3 in this case) that host virtualized infrastructure apps such as RHEV-M, Kickstart, and other tools. These “infrastructure” thick hypervisors are not managed by RHEV. My thin hypervisors support my (simulated) production VMs.
Virtualizing my infrastructure applications on a separate pair of RHEL+KVM hosts does 2 things:
- Provides High Availability for RHEV-M as well as the ability to non-disruptively update or provide maintenance of the hypervisor and physical hardware
- Provides KVM hosts to do my other non-RHEV related testing and blogging.
This all also plays into my next official project for NetApp – developing and writing the best practices for RHEV 3.1 and Clustered ONTAP. Then that will be used for the next project, that I can’t announce yet…. 😉
Ok, so that’s all for now. I realize it’s not much, but it’s a start.
Hope this helps,