Updates (sort of)

April 13, 2008

I’m trying to keep tabs on the influx of search queries and technologies out there in the storage world. To that end, I’m going to do a couple of things:

a.) In the not-so-distant future, you’re going to see me doing a lot of video presentations on this blog. I’ve got a long commute to work (> 45 minutes on most days) and I’ve got a perfect “mount” for a video camera on my dashboard. I think I’ll call the series “Storage Drive-bys” (get it?) and I’ll try to keep it to subjects that you search on (e.g. Symmetrix, EMC core, Clariion, Centera, Celerra, EDL, etc.) This could be a LOT of fun, so we’ll see what happens. In fairness to you, I’ll post a disclaimer at the beginning of each video, but I’ll be honest about what I think regarding each technology. Deal?

b.) I’m also going to keep up with a weekly “respond to your search” posting that will attempt to answer the searches (based on the stats logging I see through WordPress) that I deem most “interesting.” Stuff like “weird science Dave Graham” and “scribd” will probably NOT make the cut. 😉

With this in mind, I’m off to study for my certification exams…



Piggy-back concepts of “Greening” the datacenter

March 21, 2008

I’ve had a LOT of fun lately reading Mark Lewis’ blog (found here) as he delves into green data center concepts. To rehash some of what has already been discussed, here are a few ways to “green” your data center:

a.) Tier your storage. Higher-speed spindles, by nature, consume more power. (Compare the specs for the Seagate Barracuda ES.2 Enterprise SATA drive to those of the Seagate Cheetah 15K.5 FC/SAS drives.) By moving your data from higher-speed spindles to lower-speed spindles based on usage/access patterns within a larger system policy framework, you can keep overall power consumption low. Better yet, archive it off to a Centera and remove the need for tiering within the array to begin with. 😉
b.) Virtualize, Virtualize, Virtualize. Sure, it’s the “trendy” thing to do these days, but with the ability to collapse 30:1 (physical to virtual) in some cases, simply investing in VMware (of course) will cut down on your power footprint and requirements. From the host side, using devices like Tyan’s EXCELLENT Transport GT28 (B2935) with AMD’s quad-core Opteron processors allows rack-dense ESX clusters to be created that can scale to (get ready for it): 160 physical sockets/640 cores per 40U rack and 320 Gigabit Ethernet ports. I also forgot to mention that within these 1Us, you can install low-profile 2-port QLogic QLE2462 4Gb/s Fibre Channel cards to allow for multi-protocol attached storage to be used. *hint, hint* I think this would be a GREAT platform for the next EMC Celerra. 😉
c.) Use different storage media. By “different storage media,” I am referring to the availability of SLC/MLC flash drives and the pervasive use of 2.5″ fibre/SAS drives within the data center. I’ve already waxed eloquent before on the merits of using 2.5″ drives (lower power consumption, fewer moving parts, typically faster access times than comparable 3.5″ drives, etc.) and I’m anxiously waiting to see if EMC will adopt these drives for their arrays. With 2.5″ drives closing the gap in platter density (500GB 2.5″ SATA drives are already available in the market), there is less and less reason to continue using 3.5″ drives for nearline storage. Flash, on the other hand, while available in smaller capacities, takes the speed and power equation to a whole different level. I’ll let the Storage Anarchist explain the details:
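The rack-density math in (b.) is easy to sanity-check. Here’s a quick sketch, assuming the per-chassis layout implied by the totals quoted above (two dual-socket nodes per 1U GT28 chassis, quad-core Opterons, and four GbE ports per node; those per-chassis figures are my inferences from the totals, not spec-sheet numbers):

```python
# Back-of-the-envelope check of the 40U rack-density claim above.
# Per-chassis assumptions are inferred from the quoted totals, not
# taken from a Tyan spec sheet.
RACK_UNITS = 40          # 1U chassis per 40U rack
NODES_PER_CHASSIS = 2    # assumed: two nodes per 1U GT28
SOCKETS_PER_NODE = 2     # assumed: dual-socket nodes
CORES_PER_SOCKET = 4     # quad-core Opteron
GBE_PORTS_PER_NODE = 4   # inferred from the 320-port total

nodes = RACK_UNITS * NODES_PER_CHASSIS
sockets = nodes * SOCKETS_PER_NODE
cores = sockets * CORES_PER_SOCKET
gbe_ports = nodes * GBE_PORTS_PER_NODE

print(f"{sockets} sockets, {cores} cores, {gbe_ports} GbE ports")
# -> 160 sockets, 640 cores, 320 GbE ports
```

At the 30:1 consolidation ratio mentioned above, those 80 nodes could in principle host a couple of thousand virtual machines, which is where the power savings really come from.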

“As you’ve probably read by now, the STEC ZeusIOPS drives themselves are in fact optimized for random AND sequential I/O patterns, unlike the lower cost flash drives aimed at the laptop market. They use a generously sized SDRAM cache to improve sequential read performance and to delay and coalesce writes. They implement a massively parallel internal infrastructure that simultaneously reads (or writes) a small amount of data from a large number of Flash chips concurrently to overcome the inherent Flash latencies. Every write is remapped to a different bank of Flash as part of the wear leveling, and they employ a few other tricks that I’ve been told I can’t disclose to maximize write performance. They employ multi-bit EDC (Error Detection) and ECC (Error Correction) and bad-block remapping into reserved capacity of the drives. And yes, they have sufficient internal backup power to destage pending writes (and the mapping tables) to persistent storage in the event of a total power failure.”
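The write-remapping behaviour described in that quote can be illustrated with a toy model: remap every write to the least-worn unused physical block and track where each logical block lives. To be clear, this is my own simplification of the general wear-leveling idea, not STEC’s actual firmware algorithm, and the class here is entirely hypothetical:

```python
# Toy illustration of wear-leveled write remapping, loosely modeled on
# the behaviour described above. A deliberate simplification, not the
# ZeusIOPS firmware's actual algorithm.

class ToyFlash:
    def __init__(self, n_blocks):
        self.erase_counts = [0] * n_blocks   # wear per physical block
        self.mapping = {}                    # logical block -> physical block

    def write(self, logical):
        # Remap every write to the least-worn physical block that isn't
        # currently holding live data (the essence of wear leveling).
        live = set(self.mapping.values())
        candidates = [p for p in range(len(self.erase_counts)) if p not in live]
        target = min(candidates, key=lambda p: self.erase_counts[p])
        self.erase_counts[target] += 1
        self.mapping[logical] = target
        return target

flash = ToyFlash(8)
for _ in range(100):
    flash.write(0)   # hammer a single logical block
# Wear spreads across all physical blocks instead of burning out one:
print(max(flash.erase_counts) - min(flash.erase_counts))  # -> 1
```

Hammering one logical address 100 times leaves every physical block within one erase of its neighbours, which is exactly why wear leveling extends flash lifetime under hot-spot workloads.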

In any case, these are some quick notes from me this AM. I’m definitely looking forward to delving into the Tyan GT28/AMD quad-core stuff in the next few days.

Happy Friday!


Scheduling blog entries…

August 17, 2007

One of my favourite blogs is written by the Oracle Storage Guy. His subject knowledge, post clarity, and humour lend themselves to an easy understanding of whatever subject he is covering in a given post.  To that end, I’m going to lay out a consistent (I hope) weekly blog list of what I’d like to cover (at a high level), with deep-dives where necessary.  Note: this list isn’t exhaustive (as if it could be) and, like anyone else, I do have a life outside of EMC and technology (I’m a guitarist 🙂 ). So, without further ado, here’s my subject list:

  1. CAS.  Content Addressed Storage overview. Hopefully I’ll be able to tackle the “Why CAS?” question I hear a lot.
  2. Centera.  Why Centera represents a “best fit” in the CAS market (as I see it).
  3. Infiniband. Is there a resurgence of Infiniband?
  4. Hypertransport.  Focus on the 3.x revision of this system bus/interconnect topology.
  5. Hypertransport, part 2.  How Hypertransport is shifting interconnect technology beyond the system bus.
  6. Hypertransport, part 3.  Why Infiniband failed at becoming a system bus contender, and forward-looking issues with CSI (Common System Interface; Intel’s bus).
  7. OpenFabrics Alliance.
  8. RDMA, IPoIB, etc.  Hey, I’m open to suggestions here. What do YOU want to know about?

I think this is about as forward-thinking as I get this go-around. Stay tuned. You won’t want to miss it.




Powered by ScribeFire.

Joining the Fray…

August 17, 2007

Just thought I’d introduce myself as the latest blog persona to float into the Inter-ether. The name is Dave Graham and I’m an avid storage fan.  Coming from a background of business analytics, social psychology (more on that later), and IT consultancy, I’m glad to finally be “at home” with an employer who challenges my concepts of “good” storage and pushes me to become the best I can be.  Whew!  That was an exceptionally long-winded intro, but it captures who I am.  Moving forward, I hope to be able to indulge you in two of my distinct passions: high-speed, low-latency interconnects for storage (Hypertransport/LDT, Infiniband, PCIe, et al.) and *drumroll please* CAS (especially as it relates to EMC Centera). I welcome all feedback as I pursue these passions and, hopefully, along the way, we can have serious dialogue.



