Future Storage Systems: A pause in workflow

October 17, 2008

Since I started this article series, I’ve had the awesome opportunity to have my ideas (well, some of the early articles, at least) reviewed by people who deal with the actual infrastructure of storage systems day in and day out. The benefit of such peer review is that you get to learn at the symbolic “feet” of the masters and discover the flaws, omissions, and understated features that need to be understood and incorporated. This post is dedicated to some of those discussions and, where applicable, my understanding of how the FSS either incorporates them or misses the boat.

Read the rest of this entry »


Future Storage Systems: Part 3a – Node Expansion Overview

October 9, 2008

In the previous two articles on the Future Storage System (FSS), I took a general look at a basic storage system architecture (Part 1) and then went a bit deeper into some of the more interesting parts of that system from a platform standpoint (Part 2). In this article, I want to dive deeper into how I envision nodes serving as building blocks for additional capabilities and processing directives. I will be referencing the image below throughout this article.

Hypertransport Node Expansion (detailed)

Read the rest of this entry »


Future Storage Systems: Part 2 – Detailed Node View

October 8, 2008

So, in my article yesterday, I gave a global view of a very simple storage system for the future. Since I LOVE this type of conjecture and theoretics (is that a word?), I decided to take this a step further and flesh out some of the other intricacies of the design.  Check out the image below and then click through to read the rest.

Fleshing out the Hypertransport Storage System

Read the rest of this entry »


Something Cool! (AMD + Tyan content)

July 30, 2008

Official logo of the 2008 Summer Olympic Games (image via Wikipedia)

One of the things I do in my spare time is develop rendering systems for a client in another country. The cool thing is that I get to use cutting-edge AMD processors and platforms from Tyan and nVidia to do it.

I just found out that one of my rendering systems was used to process a Coca-Cola commercial for the 2008 Summer Olympics!!! There is a little special trick to this system, however, that warrants a closer look.

Read the rest of this entry »


Piggy-back concepts of “Greening” the datacenter

March 21, 2008

I’ve had a LOT of fun lately reading Mark Lewis’ blog as he delves into green data center concepts. To rehash some of what has already been discussed about “greening” your data center:

a.) Tier your storage. Higher-speed spindles, by nature, consume more power (compare the specs of the Seagate Barracuda ES.2 enterprise SATA drive to those of the Seagate Cheetah 15K.5 FC/SAS drives). By moving data from higher-speed spindles to lower-speed spindles based on usage/access patterns within a larger system policy framework, you can keep overall power consumption low (a toy sketch of such a policy appears after this list). Better yet, archive it off to a Centera and remove the need for tiering within the array to begin with. 😉
b.) Virtualize, Virtualize, Virtualize. Sure, it’s the “trendy” thing to do these days, but with the ability to consolidate 30:1 (physical to virtual) in some cases, simply investing in VMware (of course) will cut down on your power footprint and requirements. On the host side, devices like Tyan’s EXCELLENT Transport GT28 (B2935) with AMD’s quad-core Opteron processors let you build rack-dense ESX clusters that scale to (get ready for it): 160 physical sockets/640 cores per 40U rack and 320 Gigabit Ethernet ports (the arithmetic is spelled out after this list). I should also mention that within these 1Us you can install low-profile 2-port QLogic QLE2462 4Gb/s Fibre Channel cards to allow multi-protocol attached storage. *hint, hint* I think this would be a GREAT platform for the next EMC Celerra. 😉
c.) Use different storage media. By “different storage media,” I am referring to the availability of SLC/MLC flash drives and the pervasive use of 2.5″ fibre/SAS drives within the data center. I’ve already waxed eloquent on the merits of 2.5″ drives (lower power consumption, fewer moving parts, typically faster access times than comparable 3.5″ drives, etc.), and I’m anxiously waiting to see whether EMC will adopt them for their arrays. With 2.5″ drives closing in on 3.5″ platter densities (500GB 2.5″ SATA drives are already on the market), there is less and less reason to keep using 3.5″ drives for nearline storage. Flash, on the other hand, while available only in smaller capacities, takes the speed and power equation to a whole different level. I’ll let the Storage Anarchist explain the details:

“As you’ve probably read by now, the STEC ZeusIOPS drives themselves are in fact optimized for random AND sequential I/O patterns, unlike the lower cost flash drives aimed at the laptop market. They use a generously sized SDRAM cache to improve sequential read performance and to delay and coalesce writes. They implement a massively parallel internal infrastructure that simultaneously reads (or writes) a small amount of data from a large number of Flash chips concurrently to overcome the inherent Flash latencies. Every write is remapped to a different bank of Flash as part of the wear leveling, and they employ a few other tricks that I’ve been told I can’t disclose to maximize write performance. They employ multi-bit EDC (Error Detection) and ECC (Error Correction) and bad-block remapping into reserved capacity of the drives. And yes, they have sufficient internal backup power to destage pending writes (and the mapping tables) to persistent storage in the event of a total power failure.”
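
To make that wear-leveling point concrete, here’s a toy sketch (in Python) of round-robin remapping across flash banks. To be clear, this is my own illustration: the bank count, mapping table, and write counting are all invented for the example, and a real drive like the ZeusIOPS is doing something far more sophisticated under the hood.

# Toy sketch of round-robin wear leveling across flash banks.
# NUM_BANKS, BLOCKS_PER_BANK, and the mapping scheme are invented
# for illustration -- this is not how the ZeusIOPS actually works.

NUM_BANKS = 16
BLOCKS_PER_BANK = 1024

class WearLeveler:
    def __init__(self):
        self.next_bank = 0    # round-robin cursor over the banks
        self.mapping = {}     # logical block -> (bank, slot)
        self.free = {b: list(range(BLOCKS_PER_BANK)) for b in range(NUM_BANKS)}
        self.writes_per_bank = [0] * NUM_BANKS

    def write(self, logical_block):
        # Every write lands on a fresh slot on the next bank in rotation;
        # the old physical location (if any) simply becomes garbage.
        bank = self.next_bank
        self.next_bank = (self.next_bank + 1) % NUM_BANKS
        slot = self.free[bank].pop()
        self.mapping[logical_block] = (bank, slot)
        self.writes_per_bank[bank] += 1

wl = WearLeveler()
for i in range(160):
    wl.write(i % 8)           # hammer 8 'hot' logical blocks over and over
print(wl.writes_per_bank)     # -> [10, 10, ..., 10]: wear is spread evenly

Because every rewrite lands on a different bank, even pathologically hot blocks spread their wear across the whole drive.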
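
And going back up the list, here’s the kind of age-based demotion policy I was hand-waving at in (a). The tier names and thresholds are made up for illustration; a real policy framework would weigh far more than last-access time.

# A toy tiering policy along the lines of (a): demote data that has not
# been touched recently. Tier names and age thresholds are hypothetical.

import time

TIERS = [
    ("15k-fc",   0),            # hot data stays on fast spindles
    ("sata",     7 * 86400),    # untouched for a week -> nearline SATA
    ("archive", 90 * 86400),    # untouched for 90 days -> Centera-style archive
]

def place(last_access, now=None):
    """Return the tier a block belongs on, given its last-access time."""
    now = now if now is not None else time.time()
    age = now - last_access
    tier = TIERS[0][0]
    for name, min_age in TIERS:   # thresholds are sorted ascending
        if age >= min_age:
            tier = name
    return tier

now = time.time()
print(place(now - 3600, now))         # an hour ago       -> '15k-fc'
print(place(now - 10 * 86400, now))   # ten days ago      -> 'sata'
print(place(now - 200 * 86400, now))  # two hundred days  -> 'archive'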
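
Finally, the rack-density arithmetic behind the numbers in (b). I’m assuming the GT28’s two 2-socket nodes per 1U chassis; the per-node GbE count is simply derived from the 320-port figure rather than read off a spec sheet.

# Back-of-the-envelope rack math for (b).
# Assumes two 2-socket nodes per GT28 1U chassis; the per-node GbE
# count (4) is implied by the 320-port claim, not taken from a spec sheet.

rack_units       = 40
nodes_per_1u     = 2
sockets_per_node = 2
cores_per_socket = 4    # quad-core Opteron
gbe_per_node     = 4    # 320 ports / 40 RU / 2 nodes

sockets = rack_units * nodes_per_1u * sockets_per_node   # 160
cores   = sockets * cores_per_socket                     # 640
gbe     = rack_units * nodes_per_1u * gbe_per_node       # 320
print(sockets, cores, gbe)

# At 30:1 consolidation, those 80 physical nodes could stand in for
# roughly 2,400 physical machines.
print(rack_units * nodes_per_1u * 30)                    # 2400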

In any case, those are some quick notes from me this AM. I’m definitely looking forward to delving into the Tyan GT28/AMD quad-core stuff over the next few days.

Happy Friday!
