Ethernet Storage vs Fibre Channel Storage

If you’ve followed me at all via my blog, trade shows, or industry whitepapers, then you know I work for the storage company with the big blue “N” for a logo. You probably also know how much I enjoy working at the big blue “N”. That being said, I’m about to step away from the party line. And I’ll give you a hint – this article is heavily slanted towards Ethernet.

People ask us all the time: which protocol is best for virtualization? The official answer is “The one(s) that you feel comfortable with, Mr. Customer.” It’s really not a cop-out, as we’re genuinely comfortable with all of them. The truth is that the protocol is not the solution; it’s only the conduit to the goodness contained within the storage with the big blue “N”. Here’s where I break from the party line.

I don’t like Fibre Channel.

Continue reading “Ethernet Storage vs Fibre Channel Storage”

Mixing Thick and Thin Hypervisors in a RHEV 3 Environment

If you’ve checked out both RHEV 3 and KVM on RHEL 6, you’ve no doubt found pros and cons for going one way or the other with your hypervisors. RHEV-H offers a simple appliance approach that is already tuned and configured. KVM on RHEL 6 allows for customization while still offering the benefits of the high-speed KVM hypervisor. Which one should you use?

Simple. Use both. Continue reading “Mixing Thick and Thin Hypervisors in a RHEV 3 Environment”

RHEV 3.0 is Live!!

As of today, Red Hat has released RHEV 3.0, which adds some HUGE improvements compared to RHEV 2.x. Here are some of the highlights:

  • RHEV-M installs from command-line on RHEL 6 (no Windows host dependency)
  • RHEV-H rebased from RHEL 5 to RHEL 6 (gaining several years of improvements)
  • RESTful API (a quick sketch follows this list)
  • A new upstream project to pull from – oVirt.org (like Fedora for RHEL)
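Since the RESTful API is the headline for me, here’s a minimal sketch of talking to it, assuming a stock RHEV-M 3.0 install serving XML over HTTPS with basic auth. The manager address and credentials below are placeholders:

```python
# Minimal sketch: list the hypervisor hosts that RHEV-M knows about
# via the new RESTful API. Hostname/credentials are placeholders.
import requests
import xml.etree.ElementTree as ET

RHEVM = "https://rhevm.example.com:8443"   # assumed RHEV-M address
AUTH = ("admin@internal", "password")      # assumed admin credentials

resp = requests.get(RHEVM + "/api/hosts", auth=AUTH, verify=False)
resp.raise_for_status()

# The API returns XML: <hosts><host id="..."><name>...</name>...</host></hosts>
for host in ET.fromstring(resp.content).findall("host"):
    print("%s  %s" % (host.findtext("name"), host.findtext("address")))
```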

I’ve been using the RHEV 3 beta in my lab for several months and haven’t had so much as a hiccup. I can’t wait to switch to the G.A. release (new toys!!!).

Congratulations to the RHEV team for this release. For more information on RHEV 3.0, go to:

http://www.redhat.com/promo/rhev3/?intcmp=70160000000U4lOAAS

thanks,

c.k.

Supporting Multiple RHEV 3.0 Environments Simultaneously

So you’re planning out a single environment around the soon-to-be-released 3.0 version of RHEV. You know you’re going to eventually have multiple environments by the end of the year, and each environment will need varying levels of separation. You know you can set up iptables and SELinux, but that’s at the host and VM level. VLANs provide additional separation, but again, that’s at the network level.

Can we do something at the storage level that supports and complements the separation found at the compute and network layers, and that would support multiple RHEV environments? And can we do it without making high availability, scaling out, scaling up, and load balancing more difficult?

What if I told you that you could virtualize a NetApp controller, and thereby answer “yes” to both questions above? Continue reading “Supporting Multiple RHEV 3.0 Environments Simultaneously”
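For a taste of what “virtualizing a controller” means before you click through, here’s a hypothetical sketch using MultiStore on a 7-mode controller; the filer name, vFiler name, IP, and volume path are all invented:

```python
# Hypothetical sketch: carve a virtual NetApp controller (a MultiStore
# "vFiler") out of a physical one, so each RHEV environment gets its own
# storage personality with its own IP and root volume.
import subprocess

FILER = "filer1.example.com"  # assumed physical NetApp controller

subprocess.check_call(
    ["ssh", "root@" + FILER,
     "vfiler", "create", "rhev_env1", "-i", "10.0.1.50", "/vol/rhev_env1_root"])
```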

Fedora 16 Release Video

Hi all,

If you haven’t had the chance to investigate Fedora 16 (released a month ago), I highly encourage you to view the video below. Several of the Fedora engineers talk about the new features in the latest GNOME release, online account management, new QA tools, and the community influence that makes Fedora great. Continue reading “Fedora 16 Release Video”

Offload VM Cloning from KVM to the Back-end Storage

A few weeks ago, I addressed the concept of using the back-end storage to clone RHEL boot LUNs as opposed to repeated Kickstarts. The reasoning is simple enough – cloning on the back end is significantly faster than creating from scratch. Work smarter, not harder, right?

So what happens when we apply the same logic to individual virtual machines? In a word?

“Magic”. Continue reading “Offload VM Cloning from KVM to the Back-end Storage”
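As a teaser before you click through: here’s a hypothetical sketch of what the offload can look like with file-level FlexClone on a 7-mode controller. The filer name and image paths are invented:

```python
# Hypothetical sketch: instead of copying a VM image through the
# hypervisor, ask Data ONTAP (7-mode) for a file-level FlexClone.
import subprocess

FILER = "filer1.example.com"          # assumed NetApp controller
GOLDEN = "/vol/vm_store/golden.img"   # assumed master VM image on NFS
CLONE = "/vol/vm_store/web01.img"     # the new VM's disk image

# "clone start <src> <dst>" writes only metadata on the filer, so the
# "copy" returns in seconds regardless of the image size.
subprocess.check_call(["ssh", "root@" + FILER, "clone", "start", GOLDEN, CLONE])
```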

Don’t kickstart – Clone!!!!

So you have the perfect “Golden Image” built by way of a kickstart file.  You have a dedicated network for your installs and your boot disk is a NetApp LUN.  You’ve set every conceivable tunable to ensure that every subsequent build via PXE install is under 10 minutes, including post configurations in “%post”.

Guess what.  I can beat it by 9.5 minutes.  Consistently. Continue reading “Don’t kickstart – Clone!!!!”
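To make the claim concrete, here’s a hypothetical sketch of the clone-instead-of-kickstart flow on a 7-mode controller; the volume, LUN paths, and igroup below are invented:

```python
# Hypothetical sketch: snapshot the golden boot LUN once, then hand out
# near-instant writable clones instead of running a full PXE/kickstart.
import subprocess

FILER = "filer1.example.com"  # assumed NetApp controller

def on_filer(*cmd):
    """Run a Data ONTAP (7-mode) command on the filer over ssh."""
    subprocess.check_call(["ssh", "root@" + FILER] + list(cmd))

# One-time: snapshot the volume that holds the golden boot LUN
on_filer("snap", "create", "bootvol", "golden_snap")

# Per new host: clone the golden LUN out of the snapshot, then map the
# clone to the new host's igroup so it can boot from it
on_filer("lun", "clone", "create", "/vol/bootvol/host02_boot",
         "-b", "/vol/bootvol/golden_boot", "golden_snap")
on_filer("lun", "map", "/vol/bootvol/host02_boot", "host02_igroup")
```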

At the oVirt Workshop in Santa Clara, CA….

Captain KVM is spending part of this week in Santa Clara, CA for the oVirt Workshop, along with about 80 other attendees eager to dive into the various projects that oVirt represents.

There are representatives from SUSE, IBM, NetApp, Cisco, Canonical, Red Hat, Intel, and others, as well as some non-corporate folks. This week is dedicated to getting board members together, introducing new members of the oVirt community, and getting specific people involved in specific projects.

For more information on the workshop and oVirt in general, visit the site at http://ovirt.org

ck

Booting RHEL 6 from NetApp iSCSI

Today we’re going to take a look at booting RHEL 6.1 x86_64 from a NetApp iSCSI LUN, but first we’ll cover why we would want to do this in the first place. Sure, it’s cool from a techie standpoint, but doing this in the data center requires a compelling argument. Continue reading “Booting RHEL 6 from NetApp iSCSI”
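If you want to poke at the plumbing ahead of the full walkthrough, here’s a minimal sanity-check sketch, assuming open-iscsi is installed on the host; the portal address is a placeholder for the filer’s iSCSI data interface:

```python
# Hypothetical sketch: confirm the NetApp iSCSI target is discoverable
# before configuring the host to boot from one of its LUNs.
import subprocess

PORTAL = "192.168.1.50"  # assumed iSCSI data interface on the filer

# Ask the portal which targets it advertises (open-iscsi's iscsiadm);
# if the boot LUN's target IQN shows up, the network side is ready.
print(subprocess.check_output(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))
```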