Future Storage Systems: Part 4: Operating System – Conceptual Overview

October 13, 2008

In the previous Future Storage System articles, we’ve covered the basic hardware foundation for what I envision to be a powerful, future-oriented storage solution for the commercial midrange. However, as you’re probably aware, hardware is meaningless without software to provide the operational capabilities needed to manage information. In this article, I will focus on a general design for an extensible software layer (an OS) that provides future-oriented capability expansion as well as robust analytics and integration with business continuity principles. As always, please reference the diagram below.

Future Storage System - Operating System - Conceptual



Off to Orlando…

September 30, 2008

Well, I’ll be in Orlando, FL for the next few days soaking up some good ol’ EMC goodness (that’d be product, not sun 😉 ). I should have some really interesting things to report on this month, including more information on the CMIS “platform” (announced last month by EMC, IBM, and Microsoft) as well as some newer technology that’s coming in the following months. I’ll also have a review of the excellent QLogic 5802V 8Gb/s Fibre Channel switch available online in a bit.

Stay tuned!  Things are going to be interesting.

cheers,

Dave


Search Term Tuesday – June 3rd Edition

June 3, 2008

Note: I’m trying to tighten up the layout of content on the landing page, so I’ll be using excerpts more and more.

Continuing from last week, this edition of Search Term Tuesday will tackle the most important (or at least statistically highest-ranked) searches that landed you at this blog.

Search Term #1: cx3-10 create raid group



Kudos to IBM: “Racetrack Memory”

April 12, 2008

Was tripping through Google News this afternoon and happened upon a story called New Storage Solutions From IBM @ Efluxmedia. The article discusses a new memory technology from IBM nicknamed “racetrack memory.” What is “racetrack memory” you say?

Racetrack memory, as I understand it, works by “using tiny magnetic boundaries to store data.” (Eflux Media). It evidently allows for storage solutions 100 times bigger than their NAND/NOR/SLC/MLC brethren. The technology relies on “spintronics” which, according to much smarter people than I, involves “the storage of bits generated by the magnetic spin of electrons vs. the differential of their charges.” (Eflux Media). It “has no wear-out mechanism and so can be rewritten endlessly without any wear and tear.” (CRN) This flies right in the face of SSDs and their wear-leveling technologies (e.g. Symmetrix SSDs) and offers a cheaper, longer-lived solution to data storage.
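To make that wear-leveling contrast concrete, here’s a minimal toy sketch of the kind of bookkeeping an SSD controller has to do because flash blocks only survive a limited number of program/erase cycles. This is purely illustrative (the class, block counts, and erase limit are all made up, not any vendor’s firmware); the point is that a memory with no wear-out mechanism, like racetrack, wouldn’t need any of it.

```python
# Toy illustration of SSD-style wear leveling (hypothetical and simplified).
# Flash blocks tolerate a limited number of program/erase cycles, so the
# controller steers writes toward the least-worn blocks to spread the wear.

ERASE_LIMIT = 10_000  # assumed P/E cycle budget per block (illustrative)

class ToyWearLeveler:
    def __init__(self, num_blocks: int):
        # erase count per physical block
        self.erase_counts = [0] * num_blocks

    def pick_block_for_write(self) -> int:
        # Always send the next write to the least-worn block.
        block = self.erase_counts.index(min(self.erase_counts))
        if self.erase_counts[block] >= ERASE_LIMIT:
            raise RuntimeError("all blocks worn out")
        self.erase_counts[block] += 1
        return block

wl = ToyWearLeveler(num_blocks=8)
for _ in range(100):
    wl.pick_block_for_write()
print(wl.erase_counts)  # writes spread evenly: [13, 13, 13, 13, 12, 12, 12, 12]
```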

If you want to learn more, you can bounce over to YouTube, where there’s a video from IBM describing the technology.

that’s all for today.

Dave

Quick Edit: this technology is positioned about 6-7 years out. Not to beat a dead horse or anything, but…I’m pretty sure EMC will be in this ballgame as well. 😉 (4/13/08 @ 10:40 AM EST)



Thoughts #1

February 14, 2008

Being a slave to technology really isn’t as bad as everyone thinks. For example, I’m currently sitting in a car dealership, waiting for my oil to be changed and, well, banging away at the tiny keys on this Blackberry 8800. To that end, I’m able to take some time to review some of the stories that I feel have had some measure of impact in the storage world.

First off, XIV going to IBM (linked here, here, here, and here). I never knew Moshe Yanai, and honestly, the particular markets I work in don’t necessarily benefit from the Symmetrix or its architecture. So, fundamentally, I couldn’t care less about IBM taking on the Symm in the enterprise (versus some of the other folks out there blowing hot air about it…). That being said, in reviewing the Nextra white papers and the comments/blogs of folks who are more intelligent than I, I can see where Nextra could have trickle-down impact on other discrete EMC (or competitor) products, namely nearline archive or A-level Commercial accounts.

The devil is in the details, though. For example, Centera has always operated on the RAIN (Redundant Array of Independent Nodes) principle, and consequently the architecture it encompasses has a very long product lifecycle. Changes can be made at the evolutionary level (i.e. shifting from NetBurst P4 processors to Sossaman Xeons and accompanying board logic) rather than the revolutionary, and literally, software/OS changes can cause the most impact on performance. At the end of the day, the differentiation is at the software level (and API integration, lest we forget :)), not hardware.

Where Nextra seems to throw itself into the ring is in fundamental flexibility of hardware. Don’t need quite the same processing threshold as Company X but want more storage? Use fewer compute nodes and more storage nodes. Need more ingest power? Add connectivity nodes. Etc., etc. This blade- or node-based architecture allows for “hot” config changes when needed and appears to allow for fairly linear performance/storage scaling (a rough sketch below illustrates the idea). Beyond the “hot” expansion (and not really having any clean insight into the software layer running on Nextra), one has to assume that, at minimum, there is a custom clustering software package floating above the hardware.
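To make the “bias the node mix” idea concrete, here’s a rough, hypothetical model of a node-based cluster; the per-node capacity and ingest figures are invented purely for illustration and are not Nextra (or Centera) specs. The point is simply that each metric scales linearly with the count of the node type that provides it.

```python
# Hypothetical node-based cluster model (illustrative figures, not real product specs).
NODE_TYPES = {
    # per-node contribution to each cluster-level metric (all values assumed)
    "storage":      {"capacity_tb": 4.0, "ingest_mbps": 0.0,   "compute_units": 0.0},
    "compute":      {"capacity_tb": 0.0, "ingest_mbps": 0.0,   "compute_units": 1.0},
    "connectivity": {"capacity_tb": 0.0, "ingest_mbps": 200.0, "compute_units": 0.0},
}

def cluster_profile(mix):
    """Sum each metric over the node mix -- scaling is linear in node count by construction."""
    totals = {"capacity_tb": 0.0, "ingest_mbps": 0.0, "compute_units": 0.0}
    for node_type, count in mix.items():
        for metric, per_node in NODE_TYPES[node_type].items():
            totals[metric] += per_node * count
    return totals

# Bias the same cluster toward storage...
print(cluster_profile({"storage": 12, "compute": 2, "connectivity": 2}))
# ...or toward ingest/connectivity, by swapping the node mix rather than the architecture:
print(cluster_profile({"storage": 6, "compute": 4, "connectivity": 6}))
```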

Contrasting this to Centera, then, what really is the difference?

A.) Drive count vs. processing/connectivity count. With Nextra, you can have an array biased towards storage or connectivity. With Centera, each node you add has the potential to be both storage AND connectivity, but requires a reconfig at the master cluster level.
B.) Capacity. Nextra is designed to scale to multiple PBs by introducing 1TB drives as the baseline storage config. Centera, while currently shipping with 750GB drive configs, will obviously roadmap to 1TB drives for 4TB per node (16TB per 4-node cluster; that’s raw storage prior to penalties for CentraStar, CPP or CPM, etc.; a quick back-of-the-envelope follows this list). Again, Centera is designed from a clustering standpoint, so, feasibly, a multi-petabyte cluster is within reason, provided the infrastructure is there.
C.) Power. No contest, really. Centera is designed to function within a very strict “green” envelope and, from the hardware perspective, is very “performance per watt” oriented. (Granted, I believe they could eke more performance out of a low-power Athlon64 processor while keeping within the same thermal/power guidelines…but I digress.) Nextra, again by design, fits into an enterprise-data-center-grade power threshold and, consequently, even using SATA drives, will have much higher power consumption and overhead. If they use spin-down on the disks, then perhaps they can achieve better ratios, but if the usage profile per customer doesn’t fit, they’ve mitigated its advantages.
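And the promised back-of-the-envelope for point B, using the figures above. The raw numbers come straight from the 4 x 1TB-per-node math; the protection overheads are my own rough assumptions for illustration (CPM treated as straight mirroring, CPP as an approximate parity factor), not published figures.

```python
# Back-of-the-envelope capacity math from the figures above.
# Protection overheads are assumptions for illustration only.

DRIVES_PER_NODE = 4
DRIVE_TB = 1.0            # the 1TB drives discussed above
NODES = 4                 # one 4-node Centera cluster

raw_per_node = DRIVES_PER_NODE * DRIVE_TB   # 4 TB per node
raw_cluster = raw_per_node * NODES          # 16 TB raw, before CentraStar/protection overhead

usable_cpm = raw_cluster * 0.5              # mirrored copies -> ~50% usable (assumed)
usable_cpp = raw_cluster * (6 / 7)          # rough parity scheme -> ~86% usable (assumed)

print(f"raw: {raw_cluster:.0f} TB, CPM usable ~{usable_cpm:.0f} TB, CPP usable ~{usable_cpp:.1f} TB")
# Scaling the same density out: 256 such nodes would be 256 * 4 TB = ~1 PB raw.
```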

Anyhow, I’ll probably revise this list as we move along here, but…I just wanted this to be food for thought.

cheers,

Dave