In Part 3a, we discussed the possibility of a purpose-driven compute node, based on the Torrenza initiative, for the Future Storage system. That expansion node used HyperTransport as the “glue” between the base storage compute node and the expansion node (of either the computation or I/O flavour) that could be added. The advantages of that topology were simple: hot-add support for additional processing power, additional I/O bandwidth within the system, and additional computing power for the array OS (which we’ll cover in a later article). In this overview, we’ll take a look at another variation on an expansion node: an I/O expansion node that adds additional front-end ports and/or functionality to the base system. We will be referencing the diagram below. (Apologies in advance for the image shearing off in the lower right-hand corner.)
There are two different approaches I’ll be taking a look at as part of this I/O expansion model: the first is southbridge-oriented, with an additional southbridge providing extra PCIe lanes; the second is an I/O expansion model based on integrated ASICs and fixed optical ports that can provide connectivity to Fibre Channel, FCoE, and IP technologies. We’ll discuss both approaches separately and then dive into what each would look like in practice (another diagram, I’m afraid).
To begin, we’ll cover the southbridge model of I/O expansion (left side of the diagram). In this particular model, the expansion module includes another southbridge device to provide the additional PCIe lanes required for either x8 or x4 slots. This secondary southbridge would be connected to the primary southbridge on the base node over the HT link. Many commercial systems currently use this type of topology (using either PCIe between discrete logic chips or another communication fabric like HT). The southbridges, then, would provide discrete connectors to their own interfaces as well as maintain system-level consistency for I/O. The advantage of this particular implementation would be the ability to utilize specific connectivity types (FCoE, FC, IB, IP, iSCSI) on the same type of pluggable cards as the base node (similar to UltraFlex on EMC CLARiiON CX4s).
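To make the southbridge model a bit more concrete, here’s a minimal sketch of the lane-budget arithmetic involved: a secondary southbridge hung off the HT link exposes a fixed pool of PCIe lanes, and each x8 or x4 slot carves lanes out of that pool. The lane counts below are illustrative assumptions, not real silicon specs.

```python
# Hypothetical model of PCIe lane allocation on an I/O expansion
# southbridge. Lane totals are made-up for illustration.

class Southbridge:
    def __init__(self, name, total_lanes):
        self.name = name
        self.total_lanes = total_lanes
        self.slots = []  # widths of allocated slots (4 = x4, 8 = x8)

    def lanes_free(self):
        return self.total_lanes - sum(self.slots)

    def add_slot(self, width):
        """Allocate a slot of the given width, failing if lanes run out."""
        if width > self.lanes_free():
            raise ValueError(
                f"{self.name}: no lanes left for x{width} "
                f"({self.lanes_free()} free)")
        self.slots.append(width)
        return f"x{width}"

# Base-node southbridge plus an expansion southbridge over the HT link.
expansion = Southbridge("expansion-sb", total_lanes=16)
expansion.add_slot(8)          # e.g. an x8 FC/FCoE HBA
expansion.add_slot(4)          # e.g. an x4 IP/iSCSI NIC
print(expansion.lanes_free())  # 4 lanes remaining for one more x4 slot
```

The point of the sketch is simply that slot mix is a budgeting exercise: the expansion module’s slot count and widths are bounded by whatever lane pool the secondary southbridge brings, independent of the base node’s own lanes.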
The second variation on I/O expansion takes a different approach by integrating converged ASICs (cASICs) to handle I/O connectivity (right side of the diagram). This particular implementation has some obvious limitations from the start, namely a fixed port count and limited fabric/topology support (IB is noticeably lacking). Additionally, the design would require more development and implementation work to reach a passable physical implementation. Limitations notwithstanding, there are some interesting integration points to look at. First, the use of SFP+ or XFP pluggables to support optical or electrical physical connectivity types assuages most of the fabric connectivity issues, in addition to providing some level of forward-looking interoperability. Second, the use of converged ASICs allows for multi-protocol encapsulation and encode/decode support, so the aforementioned pluggables could be serviced without retooling or replacing a PCIe module. I’m assuming that some sort of protocol bit would need to be included in the SFP/XFP pluggables to enable the cASICs to “flip” from one protocol to another. I believe that QLogic and Emulex (and certainly Brocade/LSI) are implementing similar types of logic in the converged ASICs on their FCoE HBAs, so the next logical step would be said integration into a larger system.
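On the “protocol bit” idea: real SFP+/XFP modules already advertise what they were built for via compliance codes in their EEPROM (the SFF-8472 A0h page), so a management layer could plausibly read those to pick the cASIC’s protocol personality rather than inventing a new bit. The sketch below is a simplification of that idea, not a driver; the byte offsets loosely follow SFF-8472 (byte 3 high nibble for 10G Ethernet compliance, byte 10 for Fibre Channel speed codes), and the EEPROM contents are fabricated.

```python
# Hedged sketch: classify a pluggable module from its EEPROM so a
# converged ASIC could "flip" to the right protocol. Offsets loosely
# follow the SFF-8472 A0h page; this is illustrative, not exhaustive.

def classify_module(eeprom: bytes) -> str:
    """Guess which fabric a pluggable module was built for."""
    ten_gig_eth = eeprom[3] & 0xF0  # 10GBASE-SR/LR/LRM/ER compliance bits
    fc_speed = eeprom[10]           # nonzero => FC speeds advertised
    if ten_gig_eth and fc_speed:
        return "converged (FCoE-capable)"
    if fc_speed:
        return "fibre-channel"
    if ten_gig_eth:
        return "ethernet/ip"
    return "unknown"

# Fabricated 256-byte EEPROM images; only bytes 3 and 10 matter here.
fc_module = bytes([0] * 3 + [0x00] + [0] * 6 + [0x08] + [0] * 245)
eth_module = bytes([0] * 3 + [0x10] + [0] * 252)
print(classify_module(fc_module))   # fibre-channel
print(classify_module(eth_module))  # ethernet/ip
```

If the vendors’ converged ASICs do something along these lines, the “flip” I describe above becomes a firmware decision taken at module insertion time rather than a hardware retool.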
As always, I’m interested in your feedback and understanding on this (or any) subject I’ve written about so far. We learn together, and this particular posting is trying to dig deeper, probably past the point of my intelligence. We’ll see…
Part 4 is still up for grabs as to what it will cover, but it will most likely be something to do with the operating system for the FSS. Stay tuned!