Is FCoE a viable option for SMB/Commercial?

October 14, 2008
[Image: Host Bus Adapter (Fibre Channel), via Wikipedia]

Since I work in the SMB/Commercial space as a TC, I am routinely exposed to mixed-fabric environments.  With the advent of iSCSI, we’ve seen a proportional shift toward iSCSI as a reduced-cost block storage fabric.  Legacy (2Gb/s) fibre still has a presence in specific markets, but uptake of 4Gb/s fibre has been slowing.  With FCoE announced as the next logical evolution of converged fabrics, and with 8Gb/s FC and 10G iSCSI working their way to availability, does FCoE make sense for SMB/Commercial markets?



Future Storage Systems: Part 3b – I/O Expansion Node

October 10, 2008

In Part 3a, we discussed the possibility of a purpose-driven Compute Node based on the Torrenza initiative for the Future Storage system.  That design used Hypertransport as the “glue” between the base storage compute node and any expansion node (of computation or I/O flavours) that could be added.  The advantages of that topology were simple:  hot-add support for additional processing power, additional I/O bandwidth within the system, and additional computing power for the array OS (which we’ll cover in a later article).  In this overview, we’ll take a look at another variation on an expansion node: an I/O expansion node that adds front-end ports and/or functionality to the base system.  We will be referencing the diagram below. (Apologies in advance for the image shearing off in the lower right-hand corner.)

[Diagram: Hypertransport I/O Expansion Topology]
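To make the topology a bit more concrete, here is a minimal Python sketch of the idea. The node names, the port mix, and the ~10.4 GB/s-per-direction figure for a 16-bit HyperTransport 3.0 link are my own illustrative assumptions, not anything taken from the diagram; the point is simply that an HT-attached expansion node shows up as extra front-end ports whose aggregate line rate can be sanity-checked against the HT link feeding them.

```python
# Toy model of an HT-attached I/O expansion node. All figures are assumptions:
# a 16-bit HyperTransport 3.0 link is commonly quoted at ~10.4 GB/s per
# direction; port speeds below are nominal line rates.

from dataclasses import dataclass, field

HT_LINK_GBPS_PER_DIR = 10.4 * 8  # ~83 Gb/s per direction (assumed)

@dataclass
class FrontEndPort:
    protocol: str   # e.g. "FC", "iSCSI", "FCoE"
    gbps: float     # nominal line rate

@dataclass
class Node:
    name: str
    ports: list = field(default_factory=list)

    def front_end_gbps(self) -> float:
        return sum(p.gbps for p in self.ports)

base = Node("base-storage-node", [FrontEndPort("FC", 4.0)] * 4)
io_exp = Node("io-expansion-node",
              [FrontEndPort("FC", 8.0)] * 4 + [FrontEndPort("iSCSI", 10.0)] * 2)

total = base.front_end_gbps() + io_exp.front_end_gbps()
print(f"Front-end bandwidth with expansion node attached: {total:.0f} Gb/s")

# Sanity check: the single HT link between the two nodes should not be the
# bottleneck for the ports the expansion node contributes.
if io_exp.front_end_gbps() > HT_LINK_GBPS_PER_DIR:
    print("Warning: expansion node ports could oversubscribe the HT link")
```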



Why wouldn’t the following work? (Future Storage System: Part 1)

October 7, 2008

So, I’ve been toying around with this in my mind for some time.  Essentially, I’ve tried to understand the basic “Storage Processor” limitation of current storage systems and propose an admittedly simplistic design to get around some of the difficulties.  The biggest hurdle, in my mind, is to have cache coherency, low-latency memory access to other nodes in a “cluster,” and a communications “bus” between nodes that is extensible (or that at least grows bandwidth as more devices are added to the signal chain).  With that problem in mind, take a look at the image below.

[Diagram: A case for Hypertransport connected nodes...]
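As a rough illustration of the “grow bandwidth with more devices” point, here is a small Python sketch. The link figure is my own assumption (the commonly quoted ~20.8 GB/s bidirectional for a 16-bit HyperTransport 3.0 link), and the chain/ring shapes are just two obvious ways to string nodes together, not a claim about the design above.

```python
# Rough aggregate-bandwidth math for HT-connected storage processor nodes.
# Assumption: each inter-node hop is a 16-bit HT 3.0 link at ~20.8 GB/s
# bidirectional. Figures are illustrative only.

HT_LINK_GBYTES_BIDIR = 20.8

def chain_bandwidth(nodes: int) -> float:
    """Total link bandwidth of a daisy chain: N nodes -> N-1 links."""
    return max(nodes - 1, 0) * HT_LINK_GBYTES_BIDIR

def ring_bandwidth(nodes: int) -> float:
    """Total link bandwidth of a ring: N nodes -> N links (N >= 3)."""
    return nodes * HT_LINK_GBYTES_BIDIR if nodes >= 3 else chain_bandwidth(nodes)

for n in (2, 4, 8):
    print(f"{n} nodes: chain {chain_bandwidth(n):6.1f} GB/s, "
          f"ring {ring_bandwidth(n):6.1f} GB/s aggregate")
```

Note that while aggregate link bandwidth scales with node count either way, the bisection bandwidth of a simple chain does not, which is why a ring (or a richer HT topology) is the more interesting case for a multi-node storage cluster.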



Off to Orlando…

September 30, 2008

Well, I’ll be in Orlando, FL for the next few days soaking up some good ol’ EMC goodness (that’d be product, not sun 😉 ).  I should have some really interesting things to report on this month, including more information on the CMIS “platform” (announced last month by EMC, IBM, and Microsoft) as well as some newer technology that’s coming in the following months.  I’ll also have a review of the excellent QLogic 5802V 8Gb/s fibre switch online in a bit.

Stay tuned!  Things are going to be interesting.

cheers,

Dave


Piggy-back concepts of “Greening” the datacenter

March 21, 2008

I’ve had a LOT of fun, lately, reading Mark Lewis’ blog (found here) as he delves into the green data center concepts. To rehash some of what has already been talked about to “green” your data center:

a.) Tier your storage. Higher-speed spindles, by nature, consume more power (compare the specs for the Seagate Barracuda ES.2 Enterprise SATA drive to those of the Seagate Cheetah 15K.5 FC/SAS drives). By moving your data from higher-speed spindles to lower-speed spindles based on usage/access patterns within a larger system policy framework, you can keep power consumption low overall; a back-of-envelope sketch of the power math follows the quote below. Better yet, archive it off to a Centera and remove the need for tiering within the array to begin with. 😉
b.) Virtualize, Virtualize, Virtualize. Sure, it’s the “trendy” thing to do these days but, with the ability to collapse 30:1 (physical to virtual) in some cases, simply investing in VMware (of course) will cut down on your power footprint and requirements. From the host side, using devices like Tyan’s EXCELLENT Transport GT28 (B2935) with AMD’s quad-core Opteron processors allows rack-dense ESX clusters to be created that can scale to (get ready for it): 160 physical sockets/640 cores per 40U rack and 320 Gigabit Ethernet ports (the rack math is worked through in the sketch after the quote below). I should also mention that within these 1Us, you can install low-profile 2-port QLogic QLE2462 4Gb/s fibre cards to allow multi-protocol attached storage to be used. *hint, hint* I think this would be a GREAT platform for the next EMC Celerra. 😉
c.) Use different storage media. By “different storage media,” I am referring to the availability of SLC/MLC flash drives and the pervasive use of 2.5″ fibre/SAS drives within the data center. I’ve already waxed eloquent before on the merits of using 2.5″ drives (lower power consumption, fewer moving parts, typically faster access times than comparable 3.5″ drives, etc.) and I’m anxiously waiting to see if EMC will adopt these drives for their arrays. With 2.5″ drives coming close in platter densities (500GB 2.5″ SATA drives are already available in the market), I think there is less of a reason to continue to use 3.5″ drives for nearline storage. Flash, on the other hand, while available in smaller capacities, takes the speed and power equation to a whole different level. I’ll let the Storage Anarchist explain the details (a toy sketch of the write remapping he describes also follows the quote):

“As you’ve probably read by now, the STEC ZeusIOPS drives themselves are in fact optimized for random AND sequential I/O patterns, unlike the lower cost flash drives aimed at the laptop market. They use a generously sized SDRAM cache to improve sequential read performance and to delay and coalesce writes. They implement a massively parallel internal infrastructure that simultaneously reads (or writes) a small amount of data from a large number of Flash chips concurrently to overcome the inherent Flash latencies. Every write is remapped to a different bank of Flash as part of the wear leveling, and they employ a few other tricks that I’ve been told I can’t disclose to maximize write performance. They employ multi-bit EDC (Error Detection) and ECC (Error Correction) and bad-block remapping into reserved capacity of the drives. And yes, they have sufficient internal backup power to destage pending writes (and the mapping tables) to persistent storage in the event of a total power failure.”
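Here is the back-of-envelope sketch promised in points (a) and (b) above, as a bit of Python. The drive wattages are rough, illustrative assumptions (check the actual Seagate datasheets), and the per-chassis counts are simply inferred from the rack totals quoted in point (b), not from any GT28 spec sheet.

```python
# Back-of-envelope math for points (a) and (b) above. Wattages are rough,
# illustrative figures; per-chassis counts are inferred from the rack totals.

# (a) Tiering: shift cold data from 15K FC/SAS spindles to 7.2K SATA spindles.
WATTS_15K_OPERATING = 17.0    # assumed, Cheetah 15K.5-class drive
WATTS_SATA_OPERATING = 11.0   # assumed, Barracuda ES.2-class drive

def tiering_savings_watts(spindles_moved: int) -> float:
    """Approximate watts saved by serving the moved data from the SATA tier."""
    return spindles_moved * (WATTS_15K_OPERATING - WATTS_SATA_OPERATING)

print(f"Re-homing 100 spindles' worth of cold data saves roughly "
      f"{tiering_savings_watts(100):.0f} W")

# (b) Rack-dense ESX consolidation on GT28-class 1U chassis.
RACK_UNITS = 40
NODES_PER_1U = 2          # inferred: 4 sockets per 1U at 2 sockets per node
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 4      # quad-core Opteron
GBE_PORTS_PER_1U = 8      # inferred from "320 GbE ports per 40U rack"
CONSOLIDATION = 30        # 30:1 physical-to-virtual, the post's best case

nodes = RACK_UNITS * NODES_PER_1U
sockets = nodes * SOCKETS_PER_NODE
cores = sockets * CORES_PER_SOCKET
gbe_ports = RACK_UNITS * GBE_PORTS_PER_1U
print(f"Per 40U rack: {sockets} sockets, {cores} cores, {gbe_ports} GbE ports")
print(f"At {CONSOLIDATION}:1, one rack of {nodes} ESX hosts could absorb "
      f"~{nodes * CONSOLIDATION} legacy physical servers")
```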
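The bank-rotating write remapping described in the quote is also easy to picture with a toy model. This is purely illustrative Python with made-up structure, not STEC’s actual firmware logic: every logical write lands on the next flash bank in rotation, and a mapping table records where each logical block currently lives.

```python
# Toy illustration of bank-rotating write remapping (wear leveling) as
# described in the quote above. Purely illustrative -- not STEC's firmware.

class ToyFlashDrive:
    def __init__(self, banks: int = 8):
        self.banks = banks
        self.next_bank = 0
        self.mapping = {}                  # logical block -> current bank
        self.writes_per_bank = [0] * banks

    def write(self, logical_block: int) -> int:
        """Remap every write to the next bank in rotation."""
        bank = self.next_bank
        self.next_bank = (self.next_bank + 1) % self.banks
        self.mapping[logical_block] = bank
        self.writes_per_bank[bank] += 1
        return bank

drive = ToyFlashDrive(banks=4)
# Hammer the same logical block: the writes still spread across all banks.
for _ in range(12):
    drive.write(logical_block=7)
print("Writes per bank:", drive.writes_per_bank)      # -> [3, 3, 3, 3]
print("Block 7 currently lives on bank", drive.mapping[7])
```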

In any case, these are some quick notes from me this AM. I’m definitely looking forward to delving into the Tyan GT28/AMD quad-core stuff in the next few days.

Happy Friday!
