It’s no secret that Microsoft’s SQL Server environment is immensely popular in the market. To that end, EMC and Microsoft have developed “best practices” for laying out SQL databases, logs, quorums (for clustering), and TempDB on our arrays. For this entry, I thought I’d provide a quick overview of two different disk layouts.
Layout 1: RAID 1/0
RAID 1/0 (or RAID 10) is geared toward the pinnacle of read/write performance. That performance comes at a disk “cost,” however, which is what typically moves people toward the more balanced economics of RAID 5. Microsoft typically recommends segmenting your Quorum (for clusters), TempDB, and log files onto separate spindles to isolate the I/O of your database. Expansion, while typically limited, can be handled by using striped MetaLUNs within a CLARiiON or Celerra system.
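To make the “disk cost” point concrete, here’s a minimal sketch (my own illustration, not an EMC sizing tool; the 300 GB spindle size is just an example) comparing the usable capacity of RAID 1/0 and RAID 5 groups built from the same number of disks:

```python
def usable_capacity(disks, disk_gb, raid):
    """Rough usable capacity for a RAID group (illustrative rule of thumb)."""
    if raid == "1/0":
        return disks // 2 * disk_gb   # half the spindles hold mirror copies
    if raid == "5":
        return (disks - 1) * disk_gb  # one disk's worth of space goes to parity
    raise ValueError(f"unsupported RAID type: {raid}")

# Eight 300 GB spindles:
print(usable_capacity(8, 300, "1/0"))  # 1200 GB usable (50% efficiency)
print(usable_capacity(8, 300, "5"))    # 2100 GB usable (87.5% efficiency)
```

Same eight spindles, nearly twice the usable space with RAID 5 — which is exactly why the budget often wins over raw write performance.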
Layout 2: RAID 5
RAID 5 is a more balanced approach to a SQL layout: it minimizes disk cost at the expense of calculating parity across all the disks of its member set. We typically recommend either 4+1 or 8+1 RAID 5 sets, based on rebuild times and performance. The same arrangement applies to Quorum, TempDB, and logs as noted above in the RAID 1/0 section. Striped MetaLUNs can again be used to expand the initial RAID group.
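The 4+1 versus 8+1 trade-off is easy to quantify on the capacity side (rebuild time is the other half of the equation and depends on the array). A quick sketch of the parity overhead for an N+1 set:

```python
# Usable fraction of raw capacity for an N+1 RAID 5 set
# (parity consumes one disk's worth of space, spread across the set).
def usable_fraction(data_disks, parity_disks=1):
    return data_disks / (data_disks + parity_disks)

for d in (4, 8):
    print(f"{d}+1: {usable_fraction(d):.1%} of raw capacity is usable")
# 4+1: 80.0% of raw capacity is usable
# 8+1: 88.9% of raw capacity is usable
```

The wider 8+1 set wastes less space, but each rebuild has to read more spindles — hence the recommendation to weigh both.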
Notes on Capacity/Performance
I get asked a lot about expanding a RAID group on the fly. There are several ways of accomplishing this in the EMC world: LUN migration to a new RAID group, RAID group expansion with restriping, and MetaLUNs.
LUN migrations are typically viewed with a certain level of disdain, since they require adding disk capacity simply to move the data and then destroy the source LUNs. Realistically, this adds management overhead to the process of tuning your array.
The second option is to dynamically expand and restripe an existing RAID set. This is typically seen on a larger scale in EqualLogic boxes and other scale-out arrays such as OneFS-based systems. The principle is pretty simple: add disks to the array and expand the RAID group to include them. The data LUN is then restriped across the additional disks and…voila! More capacity and more performance. There are restrictions when adding disks to different RAID types: RAID 1/0 groups must grow by a minimum of 2 disks, while RAID 5 groups can grow by as little as 1 disk.
The third and final option is probably the most desirable: using striped MetaLUNs to expand capacity and add performance. By using MetaLUNs to manage your capacity and performance, you get more spindle performance with a smaller write penalty per group. Think about it: which is harder, calculating parity across 9 disks (an 8+1 RAID 5 group) or across 5 disks at a time (two 4+1 groups in a MetaLUN)? The downside, as always, is that MetaLUNs must be expanded using same-size RAID groups on each side.
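As a rough illustration of why write penalties matter when comparing these layouts, here’s a minimal sketch of the classic back-end IOPS rule of thumb (my own example numbers; a mirrored write costs 2 disk I/Os, a RAID 5 read-modify-write costs 4):

```python
def backend_iops(read_iops, write_iops, raid):
    """Back-end disk IOPS implied by a host workload (classic rule of thumb)."""
    penalty = {"1/0": 2, "5": 4}[raid]  # write penalty: 2 for mirror, 4 for RAID 5
    return read_iops + write_iops * penalty

# A hypothetical SQL workload: 1000 reads/sec, 500 writes/sec.
print(backend_iops(1000, 500, "1/0"))  # 2000 back-end IOPS
print(backend_iops(1000, 500, "5"))    # 3000 back-end IOPS
```

The penalty per write is fixed by the RAID type, so spreading a MetaLUN across more, smaller RAID 5 groups buys you spindles to absorb those back-end I/Os rather than reducing the penalty itself.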
Anyhow, these are my thoughts for a quiet Monday night!