Updates (sort of)

April 13, 2008

I’m trying to keep tabs on the various searches that land people here and the technologies behind them in the storage world. To that end, I’m going to do a couple of things:

a.) In the not-so-distant future, you’re going to see me doing a lot of video presentations on this blog. I’ve got a long commute to work (> 45 minutes on most days) and a perfect “mount” for a video camera on my dashboard. I think I’ll call the series “Storage Drive-bys” (get it?) and I’ll try to keep it to subjects that you search on (e.g. Symmetrix, EMC core, Clariion, Centera, Celerra, EDL, etc.). This could be a LOT of fun, so we’ll see what happens. In fairness to you, I’ll post a disclaimer at the beginning of each video, but I’ll be honest about what I think regarding each technology. Deal?

b.) I’m also going to keep up with a weekly “respond to your search” posting that will attempt to answer the searches (based on the stats logging I see through WordPress) that I deem most “interesting.” Stuff like “weird science Dave Graham” and “scribd” will probably NOT make the cut. 😉

With this in mind, I’m off to study for my certification exams…

cheers,

Dave


Centera vs. Symmetrix? (and the SAS vs. Fibre challenge)

April 1, 2008
Obviously, I do actively read and/or manage my blog. To that end, one of the nifty little features of WordPress (and undoubtedly other blogging platforms) is the ability to “see” which search terms people use to land on your blog posts. One of the most fascinating searches had to do with the phrase “Centera vs. Symmetrix.” There are other good search metrics I’ve seen, but I thought I’d delve into this one for a second.
Centera vs. Symmetrix
As you’ve undoubtedly read before, I did a quick drive-by of the Nextra and, in it, promoted the concept that Nextra could become a significant competitor to EMC’s Centera. While this may be slighting the Nextra and Centera somewhat, it does point to the fundamentals of near-line archive being a significant battleground in the coming years. So, to flip this on its head a little, let’s look at the Centera vs. the Symmetrix as holistic entities dedicated to storing YOUR information.

The Symmetrix is a purpose-built, multi-tiered storage system with infinite expandability (well, finite, really, but hyperbole works well, right? ;) ), broad connectivity, and AT LEAST 3 tiers of discrete information storage (Tier 0 [SSDs], Tier 1 [fibre], Tier 2-5 [SATA]). The Symmetrix will connect to anything from mainframes to lowly Windows 2003 servers. It has completely redundant pathways to your data and features a high-speed internal bus interconnecting the blades.

The Centera is a system based on the RAIN (Redundant Array of Independent Nodes) principle. By itself, a Centera node is realistically nothing more than a purpose-built 1U server with specialized policy-based software sitting on top of a very stable Linux OS. (The Centera guys will more than likely want to harm me for distilling it down that far.) However, moving the Centera “nodes” from standalone to clusters (aka 4-node “base” units) really changes things and highlights the power of the OS and hardware. Connectivity is limited to IP only (GigE, please!) and the nodes communicate with each other over IP (a dedicated private LAN) as well. It’s not quite as flexible on front-end connectivity and definitely not the champion of speed by any stretch of the imagination (thanks to SATA drives), but it is very serviceable when using the API to communicate directly. Remember, the Centera is geared toward archive, not Tier 0-3 application sets (though it appears to function quite well at the Tier 2-5 levels depending on the application).

Hopefully, you’re seeing a pattern here that will answer this particular tag search. If not, here’s the last distillation for you:

Symmetrix: multi-protocol, multi-tier, high-speed storage system
Centera: single-protocol, single-tier, archive storage system

Capiche? ;)
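One more way to see the difference: the Symmetrix speaks block protocols to hosts, while the Centera is addressed through an API by content. Here’s a minimal Python sketch of that content-addressed pattern; the class and method names are my own toy illustration, not the actual Centera SDK.

```python
import hashlib


class ToyContentStore:
    """Illustrative content-addressed store: objects are written once and
    retrieved by a fingerprint of their contents, not by a LUN/LBA."""

    def __init__(self):
        self._objects = {}

    def write(self, data: bytes) -> str:
        # The "address" is a digest of the content itself (hypothetical scheme).
        address = hashlib.sha256(data).hexdigest()
        self._objects.setdefault(address, data)  # duplicate writes collapse to one copy
        return address

    def read(self, address: str) -> bytes:
        return self._objects[address]


store = ToyContentStore()
clip_id = store.write(b"scanned invoice, retained for 7 years")
assert store.read(clip_id) == b"scanned invoice, retained for 7 years"
print(clip_id)
```

The application hangs on to the returned address and never cares which node or spindle the object landed on, which is exactly why the archive use case tolerates IP-only connectivity and SATA speeds.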

SAS vs. Fibre Challenge

Again, as I’ve pontificated before, I challenge anyone to point out SAS’s shortcomings in reliability and performance versus fibre drives. I see the market turning to SAS as the replacement for fibre drives and, well, we’ll see where that goes. To that end, I’ve got an interesting challenge for you readers:

The Challenge:
a.) I need someone with a CX3-10 and someone with an AX4-5 base array, with fibre drives and SAS drives respectively.
b.) I need the fibre and SAS drives in a RAID5 4+1 config with a single LUN bound across it (no spindle contention).
c.) I need you to run either the latest version of IOMeter or OpenSourceMark (the FileCopy Utility) against that LUN and report back the information.
d.) I’ll compile a table of the results and, if I receive valid results from multiple people, I’ll send the first responders an EMC t-shirt for your time.

Sound like a deal? GREAT! (I’d do it myself, but I have no budget for these things…)
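For anyone who’d rather script a sanity check before (or alongside) the IOMeter run, here’s a rough Python sketch of the kind of sequential-read pass I’d want reported. The path and block size are placeholders of mine, and the OS page cache will flatter the numbers, so treat it as a smoke test rather than the official result.

```python
import os
import time

TEST_FILE = "/mnt/r5lun/testfile.bin"   # placeholder: a large file on the 4+1 LUN
BLOCK_SIZE = 64 * 1024                  # 64 KiB sequential reads (assumption)


def sequential_read_mb_s(path: str, block_size: int) -> float:
    """Time a single sequential pass over the file and return MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed


if __name__ == "__main__":
    print(f"{os.path.basename(TEST_FILE)}: {sequential_read_mb_s(TEST_FILE, BLOCK_SIZE):.1f} MB/s")
```

For the numbers I’ll actually tabulate, please use IOMeter or OpenSourceMark with direct I/O and report the queue depth and block size alongside the throughput.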

Checking out now…

Dave



Piggy-back concepts of “Greening” the datacenter

March 21, 2008

I’ve had a LOT of fun lately reading Mark Lewis’ blog (found here) as he delves into green data center concepts. To rehash some of what has already been said about “greening” your data center:

a.) Tier your storage. Higher-speed spindles, by nature, consume more power (compare the specs for the Seagate Barracuda ES.2 enterprise SATA drive to those of the Seagate Cheetah 15K.5 FC/SAS drives). By moving your data from higher-speed spindles to lower-speed spindles based on usage/access patterns within a larger system policy framework, you can keep overall power consumption down; a minimal sketch of what such a policy boils down to follows the quote below. Better yet, archive it off to a Centera and remove the need for tiering within the array to begin with. 😉
b.) Virtualize, Virtualize, Virtualize. Sure, it’s the “trendy” thing to do these days, but with the ability to collapse 30:1 (physical to virtual) in some cases, simply investing in VMWare (of course) will cut down on your power footprint and requirements. On the host side, using devices like Tyan’s EXCELLENT Transport GT28 (B2935) with AMD’s quad-core Opteron processors allows rack-dense ESX clusters to be created that can scale to (get ready for it) 160 physical sockets/640 cores per 40U rack and 320 Gigabit Ethernet ports. I also forgot to mention that within these 1Us, you can install low-profile 2-port QLogic QLE2462 4Gb/s fibre cards to allow multi-protocol attached storage to be used. *hint, hint* I think this would be a GREAT platform for the next EMC Celerra. 😉
c.) Use different storage media. By “different storage media,” I am referring to the availability of SLC/MLC flash drives and the pervasive use of 2.5″ fibre/SAS drives within the data center. I’ve already waxed eloquent on the merits of using 2.5″ drives (lower power consumption, fewer moving parts, typically faster access times than comparable 3.5″ drives, etc.) and I’m anxiously waiting to see if EMC will adopt these drives for their arrays. With 2.5″ drives coming close in platter density (500GB 2.5″ SATA drives are already available on the market), I think there is less reason to continue using 3.5″ drives for nearline storage. Flash, on the other hand, while available in smaller capacities, takes the speed and power equation to a whole different level. I’ll let the Storage Anarchist explain the details:

“As you’ve probably read by now, the STEC ZeusIOPS drives themselves are in fact optimized for random AND sequential I/O patterns, unlike the lower cost flash drives aimed at the laptop market. They use a generously sized SDRAM cache to improve sequential read performance and to delay and coalesce writes. They implement a massively parallel internal infrastructure that simultaneously reads (or writes) a small amount of data from a large number of Flash chips concurrently to overcome the inherent Flash latencies. Every write is remapped to a different bank of Flash as part of the wear leveling, and they employ a few other tricks that I’ve been told I can’t disclose to maximize write performance. They employ multi-bit EDC (Error Detection) and ECC (Error Correction) and bad-block remapping into reserved capacity of the drives. And yes, they have sufficient internal backup power to destage pending writes (and the mapping tables) to persistent storage in the event of a total power failure.”
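Circling back to point (a) above, here’s a minimal, hypothetical sketch of what an age-based demotion policy boils down to. The mount points and the 90-day threshold are placeholders of my own, and a real ILM/tiering engine obviously works on access statistics inside the array rather than walking a filesystem, but the principle is the same: cold data migrates to slower, lower-power spindles (or off to a Centera).

```python
import os
import shutil
import time

FAST_TIER = "/mnt/tier1_fc"       # placeholder: 15K FC/SAS spindles
ARCHIVE_TIER = "/mnt/tier2_sata"  # placeholder: SATA (or Centera-fronted) capacity
MAX_IDLE_DAYS = 90                # assumption: demote anything untouched for ~3 months


def demote_cold_files(src: str, dst: str, max_idle_days: int) -> None:
    """Move files whose last access time exceeds the idle threshold to the slower tier."""
    cutoff = time.time() - max_idle_days * 86400
    for root, _dirs, files in os.walk(src):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_atime < cutoff:
                target = os.path.join(dst, os.path.relpath(path, src))
                os.makedirs(os.path.dirname(target), exist_ok=True)
                shutil.move(path, target)


if __name__ == "__main__":
    demote_cold_files(FAST_TIER, ARCHIVE_TIER, MAX_IDLE_DAYS)
```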

In any case, these are some quick notes from me this AM. I’m definitely looking forward to delving into the Tyan GT28/AMD quad-core stuff in the next few days.

Happy Friday!



It Ain’t Easy Being Green…

August 22, 2007

Reading through my feeds today, I came across this entry that Clark over at StorageSwitched! published back on August 3rd.  In it, he discusses various methodologies vendors are using to “market” to (my words, not his) or pander to the “green consciousness” of IT departments worldwide. In his thinking, adding an additional tier of storage might assuage some of the guilt that administrators are pressured into feeling by various Fortune 500 marketing wonks.

I, for one, happen to agree with this methodology.  Just as we have the “Energy Star” label for devices that are energy efficient (albeit along a predetermined continuum), it might be worthwhile to have storage devices tagged by overall power consumption per I/O, etc.
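As a back-of-the-envelope illustration of what such a per-I/O power label might report, here’s a tiny sketch; the wattage and IOPS figures below are rough assumptions of mine for a 7,200 rpm SATA spindle and a 15K fibre spindle, not measured specs.

```python
# Rough, assumed figures for illustration only (blended active power, small random I/O).
drives = {
    "7200rpm SATA": {"watts": 11.0, "iops": 80},
    "15K FC":       {"watts": 17.0, "iops": 180},
}

for name, d in drives.items():
    # Note: on these assumed figures the faster spindle actually wins per I/O;
    # capacity-optimized drives tend to win on watts per GB instead.
    print(f"{name}: {d['watts'] / d['iops'] * 1000:.0f} mW per IOPS")
```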

There are things that might skew the results, though.  Exhibit A would be the virtualized server appliance.  Say we’re running virtualization software (a la VMWare) on a server with two quad-core Xeon or Barcelona processors hosting approximately eight virtual apps.  You’ve collapsed eight separate single-processor servers into a fraction of their combined power footprint.  However, this new virtual box will obviously consume a prodigious amount of power on its own, and might not fit within the envelope of an “energy efficient” device.  Granted, various system-level components can make use of power-saving features (AMD’s PowerNow! or Intel’s SpeedStep, for example), but if you’re pulling down 500 W on a single server at idle (and 750 W under load) compared to more efficient dual-core or single-core designs, what advantage, outside of data center consolidation, are you gaining?
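To put Exhibit A into rough numbers, here’s a quick back-of-the-envelope comparison; the per-server wattage is an assumption of mine, and the 750 W figure is just the loaded number from the example above.

```python
# Assumed figures for illustration only; substitute your own measurements.
standalone_servers = 8
watts_per_standalone = 250        # assumption: a modest single-socket 1U box under load
virtual_host_watts_loaded = 750   # the loaded figure from the example above

separate_total = standalone_servers * watts_per_standalone
per_app = virtual_host_watts_loaded / standalone_servers
savings = 1 - virtual_host_watts_loaded / separate_total

print(f"{standalone_servers} separate servers: {separate_total} W total")
print(f"1 virtualization host: {virtual_host_watts_loaded} W total ({per_app:.0f} W per consolidated app)")
print(f"Consolidation saves roughly {savings:.0%} of the load power")
```

Plug in your own idle/load measurements and the picture sharpens (or muddies) pretty quickly.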

Exhibit B would be the consolidated data center.  As I mentioned yesterday, Sun created a very interesting proof-of-concept “Data center in a Box” that makes use of water and high-efficiency cooling mechanisms to lower the power draw and environmental impact of a given data center.  Also, through the use of highly parallel, multi-core/multi-threaded processors within the server (and the OS/software to match), you gain more efficiency per processing cycle, which can (and routinely does) translate into more performance per watt. This has been one of AMD’s historic arguments, though in recent months it has fallen slightly on its face as AMD has been eclipsed by Intel in IPC and nominal power draw.  I find it fascinating to enter data into AMD’s Platform Power Estimator and see the monetary returns, etc. (yes, it’s AMD-biased).  Back to the data center, though.  On a larger scale, it does make sense to collapse 10 servers into one to increase operational efficiency, but if the surrounding architecture isn’t managed along with it, what difference have you really made?  This article at SearchStorage points to a data center that is changing its infrastructure to promote green computing.

My two skew points enumerated above (the virtualized server appliance and the consolidated data center) are simple points of comparison and, honestly, I’ve probably made a muddle of things in my explanation.  What I’m curious about is how YOU would go about changing YOUR data centers and/or products to encourage environmental responsibility.  For an EMC example, we recently introduced our Centera 750GB LV product.  Because it uses Intel Sossaman processors, its power footprint is significantly lower than previous generations of the product.  That’s green storage in action.  Also, you can see The Storage Anarchist’s take on the DMX4 and its power footprint.

The gauntlet has been gently tossed; any takers?

Cheers,

Dave




Scheduling blog entries…

August 17, 2007

One of my favourite blogs is written by the Oracle Storage Guy. His subject knowledge, clarity, and humour make the subject matter of any given post easy to understand.  To that end, I’m going to lay out a consistent (I hope) weekly list of what I’d like to cover (at a high level), with deep-dives where necessary.  Note: this list isn’t exhaustive (as if it could be) and, like anyone else, I do have a life outside of EMC and technology (I’m a guitarist 🙂 ). So, without further ado, here’s my subject list:

  1. CAS.  Content Addressed Storage overview. Hopefully I’ll be able to tackle the “Why CAS?” question I hear a lot.
  2. Centera.  Why Centera represents a “best fit” in the CAS market (as I see it).
  3. Infiniband. Is there a resurgence of Infiniband?
  4. Hypertransport. Focus on version 3.x revision of this system bus/interconnect topology.
  5. Hypertransport, part 2.  How Hypertransport is shifting interconnect technology beyond the system bus.
  6. Hypertransport, part 3.  Why Infiniband failed at becoming a system bus contender, and forward-looking issues with CSI (Common System Interface, Intel’s bus).
  7. OpenFabric Consortium
  8. RDMA, IPoIB, etc.  Hey, I’m open to suggestions here. What do YOU want to know about?

I think this is about as forward-thinking as I get this go-around. Stay tuned. You won’t want to miss it.

Cheers,

Dave




Joining the Fray…

August 17, 2007

Just thought I’d introduce myself as the latest blog persona to float into the Inter-ether. The name is Dave Graham and I’m an avid storage fan.  Coming from a background in business analytics, social psychology (more on that later), and IT consultancy, I’m glad to finally be “at home” with an employer who challenges my concepts of “good” storage and pushes me to become the best I can be.  Whew!  That was an exceptionally long-winded intro, but it captures who I am.  Moving forward, I hope to indulge two of my distinct passions here:  high-speed, low-latency interconnects for storage (Hypertransport/LDT, Infiniband, PCIe, et al.) and *drumroll please* CAS (especially as it relates to EMC Centera).  I welcome all feedback as I pursue these passions and, hopefully, along the way we can have some serious dialogue.

Cheers,

Dave

