Future Storage Systems: A pause in workflow

October 17, 2008

Since I started this article series, I’ve had the awesome opportunity to have my ideas (well, some of the early articles at least) reviewed by person(s) who deal with the actual infrastructure of storage systems day in, day out.  The benefit of such peer review is that you get to learn at the symbolic “feet” of the masters and discover flaws, omissions, and understated features that need to be understood and incorporated.  This post is dedicated to some of those discussions and, where applicable, my understanding of how the FSS either incorporates them or misses the boat.



Future Storage Systems: Part 4: Operating System – Conceptual Overview

October 13, 2008

In the previous Future Storage System articles, we’ve covered the basic hardware foundation for what I envision to be a powerful, future-oriented storage solution for the commercial midrange.  However, as you’re probably aware, hardware is meaningless without software to provide the operational capabilities needed to manage information.  In this article, I will focus on a general design for an extensible software layer (an OS) that will provide future-oriented capability expansion as well as robust analytics and integration with business continuity principles.  As always, please reference the diagram below.

Future Storage System - Operating System - Conceptual



Why wouldn’t the following work? (Future Storage System: Part 1)

October 7, 2008

So, I’ve been toying around with this in my mind for some time.  Essentially, I’ve tried to understand the basic “Storage Processor” limitation of current storage systems and propose an admittedly simplistic design to get around some of the difficulties.  The biggest hurdle, in my mind, is to have cache coherency, low-latency memory access to other nodes in a “cluster,” and a communications “bus” between nodes that is extensible (or at least will grow bandwidth with more devices on the signal chain).  With that problem in mind, take a look at the image below.

A case for Hypertransport connected nodes...
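As a rough, hypothetical illustration of the bandwidth argument above, here’s a small Python sketch comparing a single shared storage-processor bus (whose fixed bandwidth gets divided among nodes) against point-to-point, HyperTransport-style links (where every new node brings its own links, so aggregate fabric bandwidth grows). The link speeds and link counts are assumed placeholders, not specs for any real interconnect.

```python
# Hypothetical illustration only: link speeds and link counts are assumptions.

LINK_GB_S = 8.0        # assumed bandwidth of one point-to-point link, GB/s
SHARED_BUS_GB_S = 8.0  # assumed total bandwidth of a single shared bus, GB/s

def shared_bus_per_node(nodes: int) -> float:
    """Each node's share of a fixed shared bus shrinks as nodes are added."""
    return SHARED_BUS_GB_S / nodes

def point_to_point_aggregate(nodes: int, links_per_node: int = 2) -> float:
    """Aggregate fabric bandwidth grows when every node contributes its own links."""
    return nodes * links_per_node * LINK_GB_S

if __name__ == "__main__":
    for n in (2, 4, 8):
        print(f"{n} nodes: shared bus {shared_bus_per_node(n):.1f} GB/s per node, "
              f"point-to-point fabric {point_to_point_aggregate(n):.1f} GB/s aggregate")
```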



Search Term Tuesday – May 26th Edition

May 27, 2008

This is the second edition of the same post. Evidently, WordPress doesn’t like it when I fat-finger in Firefox 3.0 Beta 5. Grrrrr…..

So, what is “Search Term Tuesday” (or any other day of the week, even)? The principle is this: grab some of the focused searches out there that land on this site (i.e., Flickerdown) and attempt to respond to them with more data. Deal? Let’s begin, then.



It’s not about the money…

May 5, 2008

…or is it?

Sometimes, I think we spend too much time worrying about how much solution X is going to cost without looking at the big picture. I’ll be quite honest with you: a $25,000.00 array IS expensive to an SMB, regardless of whether a Commercial or Enterprise account rep thinks otherwise. “What’s the value that the $25,000 array is going to bring you? Is it truly worth that cost?” and so on have to be the questions swirling through the mind of any competent IT Director or Admin. After all, the last thing you need in your Data Center is deadweight.
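To make that question concrete, here’s a back-of-the-envelope sketch of the kind of math I’d want any IT Director to run. Every figure besides the $25,000 sticker price is a hypothetical placeholder, not a quote for any real array.

```python
# Back-of-the-envelope value check; all inputs except the sticker price are assumptions.

ARRAY_PRICE = 25_000.00      # the sticker price discussed above
ANNUAL_SUPPORT = 3_000.00    # assumed yearly maintenance contract
SERVICE_YEARS = 5            # assumed depreciation / service life
USABLE_TB = 4.0              # assumed usable capacity after RAID and hot spares

total_cost = ARRAY_PRICE + ANNUAL_SUPPORT * SERVICE_YEARS
cost_per_gb = total_cost / (USABLE_TB * 1024)
cost_per_tb_month = total_cost / (USABLE_TB * SERVICE_YEARS * 12)

print(f"Total {SERVICE_YEARS}-year cost:        ${total_cost:,.2f}")
print(f"Cost per usable GB (lifetime): ${cost_per_gb:.2f}")
print(f"Cost per usable TB per month:  ${cost_per_tb_month:,.2f}")
```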



More on the SAS vs. Fibre debate

March 25, 2008

Connectivity Reliability

At some point, I had typed up a bit about the physical interfaces present on both SAS and Fibre drives. It appears that I ran roughshod over that particular point which, upon thinking about it, is a very important dimension of drive reliability.

As noted previously, SAS drives use an amended SATA data+power connectivity scheme. Instead of the notch between the data and power connections present on SATA drives, SAS drives simply “bridge” that gap with an extra helping of plastic. This not only turns the somewhat flimsy SATA connectors into a more robust solution, it also requires that the host connector support that bridging. An interesting note here: the SAS host connector supports SATA drives, but SATA host connectors will not support SAS. This is somewhat softened by various host implementations (e.g. using a SAS connector on a backplane with discrete SATA data connectivity from the backplane to the mainboard), but generally, this is the rule. SAS drives feature a male connectivity block which mates with a female SAS connectivity block on the host system. Pretty basic stuff.
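For clarity’s sake, here’s a tiny Python sketch of that compatibility rule: a SAS host connector (with the bridged receptacle) accepts both SAS and SATA drives, while a plain SATA host connector physically rejects a SAS drive. The function is mine, purely for illustration.

```python
def drive_fits_host(drive: str, host: str) -> bool:
    """Illustrative check of the SAS/SATA connector compatibility rule."""
    if host == "SAS":
        return drive in ("SAS", "SATA")  # the bridged SAS receptacle keys both
    if host == "SATA":
        return drive == "SATA"           # the SATA notch blocks the SAS bridge
    raise ValueError(f"unknown host connector: {host}")

if __name__ == "__main__":
    for drive, host in [("SATA", "SAS"), ("SAS", "SATA"), ("SAS", "SAS")]:
        print(f"{drive} drive in {host} host connector: {drive_fits_host(drive, host)}")
```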

Fibre drives, on the other hand, use an SCA (Single Connector Attachment) interface that is again male on the drive side and female on the host side. It is definitely simpler in design and implementation (and is featured in all current EMC arrays) and honestly, when push comes to shove, something I inherently trust more. The same idea is present with SCA-80 Ultra320 SCSI drives as well. The fitment here is definitely more secure, with less design stress placed on the physical connector (and thus the PCB itself) than with the SAS solution.

There are always caveats with distinct designs, however, and I’d like to highlight some below.
a.) The SAS data+power connector is inherently MORE secure than the standard SATA interface. Truth be told, I’ve broken SATA data connectors. It’s really not hard since the data connection is a discrete “tab” from the power interface (which I’ve broken as well). The addition of the plastic “bridge” between data and power connections on SAS drives promotes a stronger bond between the connector (whether that be SFF or backplane based) and the drive itself. It also keeps folks from mistakenly connecting SAS drives to SATA ports. 😉
b.) The SAS interface is still prone to breakage as compared to SCA-40/80 connections. There’s a reason why we do a conversion within our drive caddies from SATA to Fibre (outside of the obvious protocol translation and sniffer obligations): it’s more secure. The mating mechanism within the SCA interface places no single point of stress on the connector, as there is a nesting process that takes place. Not so with the SAS interface: you have a significant protrusion into the caddy area that, if improperly aligned, can cause damage. If you misalign the SCA interface, you simply can’t make the connection, and there are no protrusion difficulties.

Note: The good news in all of this (at least from my perspective @ EMC) is that we’re not going to allow you to screw this connectivity up. 😉 We mount the drives in our carriers, put them in the array and, well, we’ve got you covered. 😉

In any case, this is really for further clarification from yesterday’s post. Hopefully that will give a little more food for thought.



SAS vs. Fibre, Seagate’s SSD dilemma, and Sun’s “Freakin’ Laser Beams”

March 24, 2008

SAS vs. Fibre

One thing I hear about constantly (within the hallowed halls of EMC and elsewhere) is the general “inferiority” of SAS drives vs. Fibre. This usually comes complete with a somewhat stale argument that because SAS is a natural extension of SATA, it is therefore a “consumer” drive and not “good enough” for the Commercial or Enterprise disk space.

Really?

What most people fail to realize is the following:
a.) The platters, drive motors, heads, etc. are the same. If people actually spent the time looking into these products (vs. cutting at them with a wide swath of generalized foolishness), they’d see that the same mechanical “bits” make up both the “enterprise”-class Fibre and SAS drives. Looking at the Seagate Cheetah 15K.5 drive line, we see that it’s offered in SCA-40 (Fibre), SAS, SCA-80 (U320), and 68-pin interfaces. The spec sheet shows that outside of differing transfer rates (and a lower power draw at load/idle for SAS than for Fibre), the SCSI and SAS drives are the same.
b.) The primary differentiators are the PCBs, ASICs, physical connectors to the “host” system, and transfer rates. Flipping the drives over, you’ll obviously note the differences in PCBs, onboard ASICs, and physical connectors. That’s a wash, as it has little to nothing to do with reliability. So, what you’re left with is the transfer-rate conundrum. Honestly, given how particularly bad customers are at actually filling a 4 gigabit per second pipe with data (esp. on the commercial side of the house), a 1 gigabit per second difference (roughly 100 MB/s; see the quick calculation below) is minimal. Oh, for the record, our STEC SSDs will only have a 2 Gb/s connection to the Symmetrix, last I heard. 😉
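For those keeping score, here’s the quick calculation behind that “roughly 100 MB/s” figure. Both 4 Gbit/s Fibre Channel and 3 Gbit/s SAS use 8b/10b encoding, so each gigabit per second of line rate works out to about 100 MB/s of payload; the little helper below just does that arithmetic.

```python
def payload_mb_s(line_rate_gbit: float, encoding_efficiency: float = 8 / 10) -> float:
    """Approximate payload bandwidth (MB/s) for a line rate (Gbit/s) with 8b/10b encoding."""
    return line_rate_gbit * 1000 * encoding_efficiency / 8  # Gbit -> Mbit, then bits -> bytes

fc_4g = payload_mb_s(4.0)   # ~400 MB/s
sas_3g = payload_mb_s(3.0)  # ~300 MB/s
print(f"4 Gbit/s Fibre: ~{fc_4g:.0f} MB/s payload")
print(f"3 Gbit/s SAS:   ~{sas_3g:.0f} MB/s payload")
print(f"Difference:     ~{fc_4g - sas_3g:.0f} MB/s")
```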

I think those two points just about cover it. 😉 MTBF, etc. are exactly the same, btw, so don’t expect any differences in hardware longevity.

Seagate and SSDs: WE SUE YOU!

Engadget is one of my favourite reads during the day and consequently, I need to blog about articles located there more often. That being said, I almost fell out of my seat this morning when I read one of the latest postings: “Seagate warns it might sue SSD makers for patent infringement.” Yippee. In my opinion, this is more of the same “sue happy” nitwitery (is that a word?) that happens every single time Apple decides to release a new “product.” Some Rip van Winkle patent hound comes out of years of slumber and states “I patented the EXACT same technology using specious language and vague intimations of what I thought could work,” much to the chagrin of everyone around. Now, in the case of Seagate and Western Digital, I believe that they’re just looking to diversify their holdings in the emerging SSD market. Remember, on price per gigabyte/terabyte, spinning disk is still the king and will be for quite some time. However, in terms of power draw and raw IOPS, you can’t beat SSDs. In any case, file this whole article under the “we want in (and the money wouldn’t be a bad thing either)” category. 😉

EDIT: (4/10/08 @ 1103pm EST) New entry added above on the Seagate vs. STEC lawsuit

Sun: We’re going optical (with LASER BEAMS!!!!)

Next on the hit list is the re-emergence of optical interconnects between processors, as noted by Sun (and its recent DARPA grant). Great news for Sun, really, but IBM has already been doing this for some time. Optical interconnects ARE the wave of the future for processor interconnects, especially where quantum computing (and its massive data loads) is concerned. Definitely something to pay attention to. Who knows? Maybe EMC will use optical transmission in its Symmetrix line between the blades. 😉 A boy can hope.

That’s all for now.

Peace,

Dave
