Since I started this article series, I’ve had the awesome opportunity to have my ideas (well, some of the early articles at least) reviewed by people who deal with the actual infrastructure of storage systems day in, day out. The benefit of such peer review is that you get to learn at the symbolic “feet” of the masters and discover flaws, omissions, and understated features that need to be understood and incorporated. This post is dedicated to some of those discussions and, where applicable, my understanding of how the FSS either incorporates those ideas or misses the boat.
In the previous two articles on the Future Storage System (FSS), I took a general look at a basic storage system architecture (Part 1) and then went a bit deeper into some of the more interesting bits of that system from a platform standpoint (Part 2). In this article, I want to dive deeper into how I envision nodes serving as building blocks for additional capabilities and processing directives. I will be referencing the image below throughout this article.
So, in my article yesterday, I gave a global view of a very simple storage system for the future. Since I LOVE this type of conjecture and theorizing, I decided to take things a step further and flesh out some of the other intricacies of the design. Check out the image below and then click through to read the rest.
Just to add to the ever-growing knowledge base on the excellent Tyan S5397.
Remember, Intel Xeon processors use a different set of prefetch schedulers than standard “consumer”-level parts, so this can also affect workload performance.