In Part 3a, we discussed the possibility of a purpose-driven Compute Node based on the Torrenza initiative for the Future Storage System. That expansion node used HyperTransport as the “glue” between the base storage compute node and any expansion nodes (of the computation or I/O flavour) that could be added. The advantages of that topology were simple: hot-add support for additional processing power, additional I/O bandwidth within the system, and additional computing power for the array OS (which we’ll cover in a later article). In this overview, we’ll take a look at another variation on an expansion node: an I/O expansion node that adds front-end ports and/or functionality to the base system. We will be referencing the diagram below. (Apologies in advance for the image shearing off in the lower right-hand corner.)
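To make the building-block idea a little more concrete, here is a minimal sketch (in C) of how a base node might track hot-added expansion nodes and fold their resources into what it advertises to the array OS. Everything here, from the struct names to the four-slot limit and the port/bandwidth numbers, is an illustrative assumption of mine rather than any real HyperTransport or array-OS interface.

```c
/*
 * Minimal model of the base node / expansion node topology described above.
 * All names and numbers are illustrative assumptions, not a real API.
 */
#include <stdio.h>

enum expansion_kind { EXP_COMPUTE, EXP_IO };

struct expansion_node {
    enum expansion_kind kind;
    int front_end_ports;      /* extra front-end ports an I/O node contributes */
    int io_bandwidth_gbps;    /* extra link bandwidth added to the system      */
};

struct base_node {
    int front_end_ports;
    int io_bandwidth_gbps;
    int expansion_count;
    struct expansion_node expansions[4];   /* assume four HT-attached slots */
};

/* "Hot add": attach an expansion node over the coherent fabric and fold its
 * resources into the totals the base node advertises to the array OS.      */
static int hot_add(struct base_node *base, struct expansion_node exp)
{
    if (base->expansion_count >= 4)
        return -1;                         /* no free slot left */
    base->expansions[base->expansion_count++] = exp;
    base->front_end_ports   += exp.front_end_ports;
    base->io_bandwidth_gbps += exp.io_bandwidth_gbps;
    return 0;
}

int main(void)
{
    struct base_node base = { .front_end_ports = 4, .io_bandwidth_gbps = 8 };
    struct expansion_node io_exp = { EXP_IO, 8, 16 };  /* hypothetical I/O expansion node */

    hot_add(&base, io_exp);
    printf("ports=%d bandwidth=%d Gb/s expansions=%d\n",
           base.front_end_ports, base.io_bandwidth_gbps, base.expansion_count);
    return 0;
}
```

The point of the sketch is simply that the base system treats an expansion node as a bolt-on bundle of resources, which is exactly what the hot-add property in the topology above buys us.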
In the previous two articles on the Future Storage System (FSS), I took a general look at a basic storage system architecture (Part 1) and then went a bit deeper into some of the more interesting bits of that system from a platform standpoint (Part 2). In this article, I want to dive a bit deeper into how I envision nodes serving as building blocks for additional capabilities and processing directives. I will be referencing the image below throughout this article.
So, I’ve been toying with this in my mind for some time. Essentially, I’ve tried to understand the basic “Storage Processor” limitation of current storage systems and to propose an admittedly simplistic design that gets around some of the difficulties. The biggest hurdle, in my mind, is achieving cache coherency, low-latency memory access to other nodes in a “cluster,” and a communications “bus” between nodes that is extensible (or at least grows its bandwidth as more devices are added to the signal chain). With that problem in mind, take a look at the image below.
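Before getting to the diagram, here is a quick back-of-the-envelope sketch of why the “extensible bus” requirement matters: with point-to-point links in the HyperTransport style, aggregate bandwidth grows as nodes (and therefore links) are added, while a single shared bus stays flat no matter how many devices hang off it. The 8 GB/s per-link figure below is an assumed, illustrative number, not a spec.

```c
/*
 * Back-of-the-envelope comparison of aggregate bandwidth: a point-to-point
 * fabric (one link per node pair) versus a single shared bus. The per-link
 * bandwidth is an illustrative assumption only.
 */
#include <stdio.h>

#define LINK_BW_GBS 8.0   /* assumed per-link bandwidth in GB/s */

int main(void)
{
    for (int nodes = 2; nodes <= 8; nodes *= 2) {
        /* Fully connected point-to-point fabric: one link per node pair. */
        int links = nodes * (nodes - 1) / 2;
        double p2p_aggregate = links * LINK_BW_GBS;

        /* Shared bus: every node contends for the same fixed pipe. */
        double shared_aggregate = LINK_BW_GBS;

        printf("%d nodes: point-to-point %.0f GB/s vs shared bus %.0f GB/s\n",
               nodes, p2p_aggregate, shared_aggregate);
    }
    return 0;
}
```

It’s a crude model (it ignores coherency traffic, hop counts, and real topologies), but it captures why I keep coming back to a fabric whose bandwidth scales with the number of devices on the signal chain.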