Since I started this article series, I’ve had the great opportunity to have my ideas (well, some of the early articles at least) reviewed by people who deal with the actual infrastructure of storage systems day in, day out. The benefit of such peer review is that you get to learn at the symbolic “feet” of the masters and discover flaws, omissions, and understated features that need to be understood and incorporated. This post is dedicated to some of those discussions and, where applicable, my understanding of how the FSS either incorporates those lessons or misses the boat.
Part of the beauty of ESX 3.x from a hardware support standpoint was the addition of SATA as a viable install medium for the hypervisor and service console. However, opening up support for SATA also brought a few hiccups along the way, most of them related to the SATA controllers officially supported by VMware. For folks like myself who spent a lot of time on AMD-based platforms, the only real choices for SATA controllers (onboard the motherboard, not discrete) were offerings from Broadcom and nVidia. This post will highlight how to configure your ESX 3.x host to use nVidia SATA controllers.
Note: This information is available within the VMware user community as well. I am indebted to the person(s) in that community who provided this information, albeit in a slightly less “visual” way.
It’s been a while since I last did a review of what people are searching for (July 30th was the last time…wow), so let’s see what’s new.
Search Term #1: EMC NX4
Since I work in the SMB/Commercial space as a TC, I am routinely exposed to mixed fabric environments. With the advent of iSCSI, we’ve seen a proportional shift toward iSCSI as a reduced-cost block storage fabric. Legacy (2Gb/s) fibre still has a presence in specific markets, but the uptake of 4Gb/s fibre has been slowing down. With FCoE announced as the next logical evolution of converged fabrics, and with 8Gb/s FC and 10G iSCSI working their way to availability, does FCoE make sense for SMB/Commercial markets?
In the previous two articles on the Future Storage System (FSS), I took a general look at a basic storage system architecture (Part 1) and then went a bit deeper into some of the more interesting bits of that system from a platform standpoint (Part 2). In this article, I want to dive deeper into how I envision nodes as building blocks for additional capabilities and processing directives. I will be referencing the image below throughout this article.