Ethernet Storage vs Fibre Channel Storage

If you’ve followed me at all via my blog, trade shows, or industry whitepapers, then you know I work for the storage company with the big blue “N” for a logo. You probably also know how much I enjoy working at the big blue “N”. That being said, I’m about to step away from the party line. And I’ll give you a hint – this article is heavily slanted towards Ethernet.

People ask us all the time, which protocol is best for virtualization? The official answer is “The one(s) that you feel comfortable with, Mr. Customer.” It’s really not a cop out, as we’re actually comfortable with all of them. The truth is that the protocol is not the solution, it’s only the conduit to the goodness that is contained within the storage with the big blue “N”. Here’s where I break from the party.

I don’t like Fibre Channel.

There, I said it! Whew, there’s a load off of my chest!! There are actually a few reasons why I don’t care for Fibre Channel. For starters, there’s the requirement for specialized equipment that isn’t good for much more than storage-only applications. Yes, I know you can do other things with it, but do you really use FCP for anything but SAN? As we consolidate and virtualize, FC-specific equipment seems to fall short of my requirements.

Maybe it’s that you have to train for & maintain yet another technology. Don’t get me wrong, I LOVE to learn new things. But let’s face it, we’re in the business of simplifying infrastructure, manageability, and supportability, and that means streamlining technologies where it makes sense. Why support 5 operating systems when 3 will get you there? Storage protocols and their equipment are no different.

I worked at a major ISP years ago where I had to support 4 flavors of Unix (and their respective hardware), 1 flavor of Linux, and Windows. Not to mention all of the different versions and releases and the different x86 server vendors. Someone high enough finally had the wisdom to cut it down to 2 Unix vendors, 2 x86 server vendors, Linux, and Windows. Draw your own comparisons.

But my disdain for FC also comes from the road map for Fibre Channel, especially when you compare it to the road map for Ethernet. If you take a quick peek at http://www.fibrechannel.org/roadmaps, you will see that 32Gb FC is “here”. At least the standard is, but I haven’t heard much about it. I certainly haven’t had customers ask about it. If you take a gander at the industry’s leading HBA vendors, they’re not exactly leading the charge for 32Gb either. We’re looking at beyond 2014 for anything realistic beyond what’s currently available.

FC is firmly rooted at 16Gb for the foreseeable future, and support for 32Gb (and beyond) is still a ways off.

What about the Ethernet road map? According to the IEEE at http://ieee802.org, we’re much further along. We’ve already got 10GbE in many data centers, 40GbE is available, and companies like AT&T and Verizon are already testing 100GbE – http://www.zdnet.co.uk/news/networking/2011/03/04/verizon-starts-standards-based-100gb-ethernet-rollout-40092031/. Juniper is already shipping 100GbE routers. There are others too.

So even if we only look at this from a road map and speed standpoint, Ethernet has a much brighter future. But what about stability? You mean “The Myth” that Ethernet storage networks are less stable than Fibre Channel storage networks?

Malarkey. Baloney. Not to mention a crock.

Why are FC networks so stable? They’re stable because they get planned, designed, deployed, managed, and maintained in an orderly and methodical fashion. Guess what? If you hold your Ethernet storage networks to the same criteria as your Fibre Channel networks, the stability will be equal. Honest. Besides, many of those myths were born out of running on 1GbE, non-enterprise NFS, early releases of iSCSI, or some combination of them. We’re FAR beyond that now. If you haven’t tried 10GbE iSCSI or 10GbE NFS from an enterprise controller, you haven’t performed your due diligence.
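To put that “same criteria” idea in concrete terms, here’s a minimal sketch of giving storage traffic its own VLAN and jumbo frames on a converged 10GbE uplink, using standard Linux iproute2 commands. The interface name, VLAN ID, addressing, and MTU are hypothetical placeholders – treat it as an illustration of the discipline, not a prescription for your environment:

    # Carve out a dedicated VLAN for storage on the 10GbE uplink
    # (interface name "eth2" and VLAN ID 200 are hypothetical)
    ip link add link eth2 name eth2.200 type vlan id 200
    ip addr add 192.168.200.21/24 dev eth2.200
    ip link set eth2.200 up

    # Enable jumbo frames end-to-end on the storage path,
    # assuming the switches are configured to match
    ip link set eth2 mtu 9000
    ip link set eth2.200 mtu 9000

It’s the same planning and isolation you’d apply to an FC fabric, just expressed with the Ethernet tooling you already have.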

Let’s go back to the thought that the protocol is not the solution. As I mentioned earlier, it’s only the conduit to the goodness that is in the storage solution. I’m a huge believer in the products that my employer sells, but maybe you like a different brand. I’m totally cool with that. Honest. If there are features that you use that save you storage space, ease your management, and solve problems for you (i.e. goodness), then you understand the concept that the storage protocol is only a conduit. It’s only meant to be the means by which your servers attach to the goodness. If your choice of protocol isn’t driven by the solution your storage provides, then you’re doing it wrong.

So let’s tie this thing up. We’re in the virtualization business, and that means consolidation, simplification, and still providing a high level of support for our customers. You can’t run your entire network (data and storage) on Fibre Channel. However, you can do storage (SAN and NAS) and data over Ethernet and still get to your enterprise storage goodness, all while simplifying your infrastructure. And you’re not taking any options away from the folks you support.
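And for anyone who wants to see how little ceremony “SAN and NAS over Ethernet” actually involves, here’s a minimal sketch of attaching a Linux host to block and file storage over the same 10GbE network, using the stock open-iscsi and NFS client tools. The portal IP, IQN, export path, and mount point are hypothetical placeholders, not anything tied to a specific array:

    # Discover the iSCSI targets presented by the array (hypothetical portal IP)
    iscsiadm -m discovery -t sendtargets -p 192.168.200.10

    # Log in to the discovered target (hypothetical IQN); the LUN then shows up
    # as an ordinary block device, ready for a filesystem or a hypervisor
    iscsiadm -m node -T iqn.2011-03.com.example:vmstore01 -p 192.168.200.10 --login

    # For NAS, mount an NFS export over the same wire (hypothetical export path)
    mkdir -p /mnt/vm_datastore
    mount -t nfs -o vers=3 192.168.200.10:/vol/vm_datastore /mnt/vm_datastore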

In other words, just say “no” to FC.

8 thoughts on “Ethernet Storage vs Fibre Channel Storage”

  1. In HPC (and I am quite familiar with a 400-node installation for seismic data analysis at Stanford University) and some ISPs, Infiniband is a popular choice. Price-point-wise, it’s also less expensive than 10Gbps Ethernet per port. Do you mind sharing your take?

    1. Hi Chin,

      IB may be less expensive per port, but what about overall? Doesn’t IB require separate routing and such? Also, my understanding is that IB tops out at 50GbE. We’re starting to see 40GbE in the data center, and even 100GbE on the horizon. But ultimately, I really like the concept of “converged networking” and “converged infrastructure”. IB doesn’t have widespread support, which makes it difficult to include it in a converged environment.

      Captain KVM

      1. Thanks for sharing your take.

        “Doesn’t IB require separate routing and such?” – you use an IB-specific switch, just like you use an Ethernet-specific switch. As far as routing is concerned, an IB-based infrastructure is no different from an Ethernet-based one. The biggest practical drawback IMHO so far is cable length restrictions.

        “Also, my understanding is that IB tops out at 50GbE. We’re starting to see 40GbE in the data center, and even 100GbE on the horizon.”

        Please see http://www.infinibandta.org/content/pages.php?pg=technology_overview – for IB, 300Gb/s is on the horizon.

        The founder of Gluster once commented in his blog that he liked the low latency (vs. what 10Gbps Ethernet is capable of) offered by IB-based infrastructure. You know the Red Hat stacks well, and Gluster is part of RH now, so I am sure that you are aware of this.

        “But ultimately, I really like the concept of “converged networking” and “converged infrastructure”.” – I concur 🙂

        In any case, I see IB’s use in HPC staying strong for the foreseeable future. Thanks.

  2. Our HPC compute clusters use IB as the node interconnect because the latency is so much lower than 10GbE. Latency is king in our HPC application. How does the latency compare between IB, 40GbE, 100GbE, and the fibre channel options?

    1. Hi Alan,

      I understand your question and your use case. I don’t have the immediate answer, but I will find it for you if I can!

      Captain KVM
