Since I started this article series, I’ve had the awesome opportunity to have my ideas (well, some of the early articles at least) reviewed by person(s) who deal with the actual infrastructure of storage systems day in, day out. The benefit of such peer review is that you get to learn at the symbolic “feet” of the masters and discover the flaws, omissions, and understated features that need to be understood and incorporated. This post is dedicated to some of those discussions and, where applicable, my understanding of how the FSS either incorporates those lessons or misses the boat.
Part of the beauty of ESX 3.x from a hardware-support standpoint was the addition of SATA as a viable install target for the hypervisor and service console. However, opening up support for SATA also brought a few hiccups along the way, most of them related to the SATA controllers officially supported by VMware. For folks like myself who spent a lot of time on AMD-based platforms, the only real choices for SATA controllers (onboard the motherboard, not discrete) were offerings from Broadcom and nVidia. This post will highlight how to configure your ESX 3.x host to use nVidia SATA controllers.
Note: This information is available within the VMware user community as well. I am indebted to the person(s) in that community who provided this information, albeit in a slightly less “visual” way.
It’s been a while since I last did a review of what people are searching for (July 30th was the last time…wow), so let’s see what’s new.
Search Term #1: EMC NX4
Since I work in the SMB/Commercial space as a TC, I am routinely exposed to mixed-fabric environments. With the advent of iSCSI, we’ve seen a proportional shift toward iSCSI as a reduced-cost block storage fabric. Legacy (2Gb/s) fibre still has a presence in specific markets, but the uptake of 4Gb/s fibre has been slowing down. With FCoE announced as the next logical evolution of converged fabrics, and with 8Gb/s FC and 10Gb/s iSCSI working their way to availability, does FCoE make sense for SMB/Commercial markets?
So, in my article yesterday, I gave a global view of a very simple storage system for the future. Since I LOVE this type of conjecture and theoretics (is that a word?), I decided to take this a step further and flesh out some of the other intricacies of the design. Check out the image below and then click through to read the rest.
So, I’ve been toying around with this in my mind for some time. Essentially, I’ve tried to understand the basic “Storage Processor” limitation of current storage systems and propose an admittedly simplistic design to get around some of the difficulties. The biggest hurdle, in my mind, is to have cache coherency, low-latency memory access to the other nodes in a “cluster,” and a communications “bus” between nodes that is extensible (or that at least grows in bandwidth as more devices are added to the signal chain). With that problem in mind, take a look at the image below.
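To make the cache-coherency piece a little more concrete, here is a minimal sketch of a directory-based invalidation scheme between storage-processor nodes. This is purely my own simplification for illustration, not the FSS design itself; the class names and the in-memory “backing store” are assumptions, and in a real system this traffic would ride the inter-node bus rather than Python method calls.

```python
# Illustrative sketch only: directory-based cache coherency between
# storage-processor nodes. All names and structures here are assumptions
# made for the example, not part of any real array's design.

from dataclasses import dataclass, field


@dataclass
class Node:
    """One storage-processor node with a small local block cache."""
    node_id: int
    cache: dict = field(default_factory=dict)  # block_id -> data

    def invalidate(self, block_id: int) -> None:
        # Drop a stale local copy when another node writes the block.
        self.cache.pop(block_id, None)


class Directory:
    """Tracks which nodes currently hold a copy of each block."""

    def __init__(self, nodes):
        self.nodes = {n.node_id: n for n in nodes}
        self.sharers = {}  # block_id -> set of node_ids holding a copy
        self.backing = {}  # block_id -> authoritative data ("backend storage")

    def read(self, node_id: int, block_id: int):
        node = self.nodes[node_id]
        if block_id not in node.cache:  # local miss: pull a shared copy
            node.cache[block_id] = self.backing.get(block_id)
            self.sharers.setdefault(block_id, set()).add(node_id)
        return node.cache[block_id]

    def write(self, node_id: int, block_id: int, data) -> None:
        # Invalidate every other sharer before the write becomes visible.
        # This is the coherency traffic the inter-node "bus" has to carry.
        for other_id in self.sharers.get(block_id, set()) - {node_id}:
            self.nodes[other_id].invalidate(block_id)
        self.sharers[block_id] = {node_id}
        self.backing[block_id] = data
        self.nodes[node_id].cache[block_id] = data


if __name__ == "__main__":
    cluster = [Node(i) for i in range(4)]
    directory = Directory(cluster)
    directory.write(0, block_id=42, data=b"hello")
    print(directory.read(1, 42))    # node 1 pulls a shared copy
    directory.write(2, 42, b"bye")  # nodes 0 and 1 get invalidated
    print(directory.read(1, 42))    # node 1 re-reads the new data
```

The point of the sketch is simply to show where the invalidation messages come from: every write to a shared block generates traffic to every other node holding a copy, which is exactly why the inter-node fabric has to be low-latency and able to grow its bandwidth as nodes are added.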
Is it just me or have we been waiting for this for a while? 😉 Today is officially the EMC Clariion CX4 public GA (general availability) date. Good news: they’re shipping TODAY! No paper launches, folks…this is immediate availability. The other good news: you get to do more with your storage (faster, cheaper, stronger, more flexible, etc.). Let me rip through some highlights for you:
a.) Cache and SP processor increases. Across the board, processor “speeds” and cache sizes have been increased. Now, this may appear somewhat odd in that the CX4-120, for example, only has two dual-core 1.2GHz processors, but when you consider that the onboard L2 cache is larger (and the Woodcrest processors are HANDILY more powerful than the older Nocona Xeons), it actually has more innate processing power than the previous-generation processors. Cache sizes, when coupled with the array’s 64-bit FLARE OS, allow for better allocation and utilization within the array.