Is FCoE a viable option for SMB/Commercial?

[Image: Host Bus Adapter (Fibre Channel), via Wikipedia]

Since I work in the SMB/Commercial space as a TC, I am routinely exposed to mixed fabric environments.  With the advent of iSCSI, we’ve seen a steady shift toward it as a reduced-cost block storage fabric.  Legacy (2Gb/s) fibre still has a presence in specific markets, but uptake of 4Gb/s fibre has been slowing down.  With FCoE announced as the next logical evolution of converged fabrics, and with 8Gb/s FC and 10G iSCSI working their way to availability, does FCoE make sense for SMB/Commercial markets?

Personally, I do see FCoE (and DCE/DCF to a larger extent) penetrating and “working” in the Commercial/SMB markets…but there are some big milestones that have to be hit before that penetration can really happen.

  • Cost: the CNAs from Emulex and QLogic are currently 5x the price of a solid 2-port FC HBA.  That’s way too pricey for SMB and pushing it for Commercial.  Gen2 adapters (using converged ASICs) will show up in Q1/Q2 2009 and should drop the price point by about 20-30%, which would make it more tenable.  Similarly, Cisco’s fully loaded pricing on the Nexus 5020 is $105,000.00 (give or take a few dollars).  Considering that you’ll STILL need MDS-91xx series SAN switches and Catalyst IP switches in the environment, you’re looking at an investment somewhere north of $150,000.00 just to get it in the door (see the rough roll-up sketched after this list).
  • Availability: the flip side of Cost.  If the gear isn’t out there, you’ll pay more and you’ll have a harder time with support and acquisition.  People who are going to roll over to FCoE need to know that it’s commercially viable and that they can move their entire NOC or DC over to that platform.  Conversely, without early adopters, you won’t drive down cost or drive up availability.
  • Support: Similar to the Cost point above, it’s good that the technology is here today, but who REALLY has FCoE solutions?  The Nexus, for now, is simply another appliance: it’s neither a core switch nor an edge switch, and it still relies on fabric-specific devices to get stuff done.  From the CNA perspective, we’re still on Gen1, and those cards are physically HUGE (no low-profile option yet), run really hot, and are really pricey.
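As promised, here’s a minimal roll-up sketch of what it takes to get FCoE “in the door.” Only the Nexus 5020 price and the Gen2 discount range come from my bullets above; the MDS-91xx and Catalyst line items are hypothetical placeholders, there just to show how quickly the total clears $150,000.

```python
# Rough, illustrative roll-up of the cost to get FCoE "in the door".
# Only the Nexus 5020 figure and the Gen2 discount range come from the post;
# every other number here is a hypothetical placeholder.

nexus_5020 = 105_000       # fully loaded list price quoted above
mds_91xx_pair = 30_000     # hypothetical: the FC SAN switches you still need
catalyst_pair = 20_000     # hypothetical: the Catalyst IP switches you still need

door_cost = nexus_5020 + mds_91xx_pair + catalyst_pair
print(f"Ballpark just to get FCoE in the door: ${door_cost:,}")  # north of $150,000

# Gen1 CNAs run about 5x the price of a solid 2-port FC HBA; a 20-30%
# Gen2 price drop still leaves them at roughly 3.5-4x.
for drop in (0.20, 0.30):
    print(f"Gen2 CNA at {drop:.0%} off Gen1: ~{5 * (1 - drop):.1f}x an FC HBA")
```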

My Personal Take:

To dig even further on this… iSCSI doesn’t work as well as people think it does.  I have to warn most of my customers about latency issues (again, far higher PER LINK than FC) and bandwidth issues (80% of usable line speed at best; I typically rate it at 60-80%).  You also need MORE ports just to approach FC performance from a bandwidth perspective (see the sketch below).  10G will assuage some of this by reducing latency and increasing bandwidth, but even then it’s not all rosy.  Further, most SAN guys don’t want to deal with networking topologies.  I can create a decent fabric for FC-attached storage and hosts within 30 minutes (within ESX) and have it function extremely well.  I don’t have to grab IP addresses or IQNs or deal with messy initiators (more of an Open Systems problem currently).  Drag, drop, commit.  It’s that simple.
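To make the “MORE ports” claim concrete, here’s a quick back-of-the-envelope sketch. It uses the 60-80% efficiency range I quoted above for iSCSI and assumes the usual ~400 MB/s of usable throughput on a 4Gb/s FC link (after 8b/10b encoding); treat it as an illustration, not a benchmark.

```python
# How many GigE iSCSI ports does it take to match one 4Gb/s FC link?
import math

FC_4G_USABLE_MBPS = 400.0        # MB/s usable on 4Gb/s FC after 8b/10b encoding
GIGE_LINE_MBPS = 1000.0 / 8.0    # MB/s raw line rate of a 1GbE port (125 MB/s)

for efficiency in (0.60, 0.70, 0.80):
    usable = GIGE_LINE_MBPS * efficiency          # effective MB/s per iSCSI port
    ports = math.ceil(FC_4G_USABLE_MBPS / usable) # ports needed to match 4Gb/s FC
    print(f"iSCSI at {efficiency:.0%}: ~{usable:.0f} MB/s per port, "
          f"{ports} GigE ports to match one 4Gb/s FC link")
```

Even at the optimistic end of that range you’re bonding four GigE ports (plus the initiator and IP plumbing that comes with them) to match a single FC link.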

Check out the following Third I/O report (commissioned by Emulex) for some validation of what I just said: Third I/O Report: 10G iSCSI vs. 8Gb/s Fibre

Anyhow, those are my thoughts for this Tuesday.  Comments?

cheers,

Dave

PS> As a side note, Stuart Miniman has a quick blurb up @ his Tumblr site on FCoE.  Go here to check it out!
