With internal storage expansion disappearing even from the top of the line in the 2013 Mac Pro, a lot more people are going to start looking for external solutions for their storage needs. Many will just buy an external hard drive or two, but others, like me, will start to consider larger external storage arrays. One of the best solutions for people who need 5-15TB of storage is a 4 disk RAID 5 system. As I mentioned in a previous post, I went with a Pegasus2 and set it up as a RAID 5. That brings up a lot of questions about individual RAID settings, though, so I thought I would put together a primer on the typical RAID settings you should care about when purchasing a Pegasus or comparable desktop RAID system.
Stripe Size
Stripe size is probably the setting with one of the biggest impacts on the performance of your RAID. A lot of people will run a benchmark or two with different stripe sizes, incorrectly conclude that bigger stripe sizes are always faster, and use them. In reality, the best performing stripe size depends heavily on your workload.
A quick diversion into RAID theory is required before we can talk about stripe sizing. With RAID 5, each drive is split up into blocks of a certain size called stripes. In a 4 disk RAID 5, 3 disks will have real data in their stripes, and the 4th disk will have parity data in its stripe (in reality, the parity stripes in a RAID 5 alternate between drives, so not all the parity ends up on the same disk). The parity stripe allows a disk to fail while still keeping your array online. You give up 25% of the space to gain that redundancy.
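The parity itself is just a bitwise XOR of the data stripes in a set, which is why any single missing stripe can be recomputed from the surviving ones. Here's a toy Python sketch of the idea (byte strings stand in for stripes; a real controller does this on raw sectors in firmware):

```python
# Toy illustration of RAID 5 parity. Real controllers work on raw disk
# sectors in firmware; byte strings here just stand in for stripes.

def xor_stripes(stripes):
    """XOR a list of equal-length byte strings together."""
    result = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, byte in enumerate(stripe):
            result[i] ^= byte
    return bytes(result)

# Three data stripes from one stripe set in a 4-disk RAID 5
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_stripes([d1, d2, d3])

# If the disk holding d2 fails, its stripe can be rebuilt from the
# surviving data stripes plus the parity stripe.
rebuilt_d2 = xor_stripes([d1, d3, parity])
assert rebuilt_d2 == d2
```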
When you read data from the volume, the RAID will determine which disk your data is on, read the stripe and return the requested data. This is pretty straightforward, and the impact of stripe size during reading is minimal.
However, when writing data to the disk, stripe size can make a big performance difference. Here’s what happens every time you change a file on disk:
- Your Mac sends the file to the RAID controller to write the change to the volume.
- The RAID controller reads the stripe of data off the disk where the data will reside.
- The RAID controller updates the contents of the stripe and writes it back to the disk.
- The RAID controller then reads the stripes of data in the same set from the other disks in the volume.
- The RAID controller recalculates the parity stripe.
- The parity stripe is written to the final disk in the volume.
This results in 3 stripe reads and 2 stripe writes every time you write even the smallest file to the disk. Most RAIDs will default to a 128KB stripe size, and will typically offer a range anywhere from 32KB to 1MB. In the example above, assuming a 128KB stripe size, even a change to a 2KB file results in 640KB of data being read from or written to the disks. If a 1MB stripe size is used instead of 128KB, then 5MB of data would be accessed on the disks just to change that same 2KB file. So as you can see, the stripe size largely determines the amount of disk I/O required to perform even simple operations.
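To put rough numbers on that, here's a back-of-the-envelope sketch of the disk I/O triggered by one small write at various stripe sizes, following the steps above (a simplification; some controllers take shortcuts, like a read-modify-write that only touches the target and parity stripes):

```python
# Rough disk I/O for one small write to a 4-disk RAID 5, per the steps above:
#   3 stripe reads  (target stripe + the two other data stripes in the set)
#   2 stripe writes (updated target stripe + recalculated parity stripe)
STRIPE_READS = 3
STRIPE_WRITES = 2

def io_for_small_write(stripe_size_kb):
    """Total KB moved to/from the disks for a write smaller than one stripe."""
    return (STRIPE_READS + STRIPE_WRITES) * stripe_size_kb

for size_kb in (32, 64, 128, 256, 512, 1024):
    print(f"{size_kb:>4} KB stripe -> {io_for_small_write(size_kb)} KB of disk I/O "
          "to change a 2 KB file")
```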
So why not just choose the smallest stripe size? Well, hard drives are really good at reading contiguous blocks of data quickly. If you are reading/writing large files, grouping those accesses into larger stripe sizes will greatly increase the transfer rate.
In general, if you use mostly large files (video, uncompressed audio, large images), then you want a big stripe size (512KB – 1MB). If you have mostly very small files, then you want a small stripe size (32KB – 64KB). If you have a pretty good mix between the two, then 128KB – 256KB is your best bet.
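If you prefer that rule of thumb in code form, a trivial (purely hypothetical) helper might look like this:

```python
# Hypothetical helper encoding the rules of thumb above; the workload names
# and size ranges are just this post's guidelines, not any standard API.
STRIPE_SIZE_GUIDELINES_KB = {
    "mostly_large_files": (512, 1024),  # video, uncompressed audio, large images
    "mostly_small_files": (32, 64),
    "mixed":              (128, 256),
}

def suggested_stripe_size_kb(workload):
    return STRIPE_SIZE_GUIDELINES_KB[workload]

print(suggested_stripe_size_kb("mixed"))  # (128, 256)
```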
Read Ahead Cache
A lot of RAID systems will give you the option of enabling a read ahead cache. Enabling this can dramatically increase your read speeds, but only in certain situations. In other situations, it can increase the load on your hard drives without any benefit.
Let’s talk about what happens in the read cache when read ahead is disabled. The read cache will only store data that you recently requested from the RAID volume. If you request the same data again, then the cache will already have that data ready for you on demand without requiring any disk I/O. Handy.
Now how is it different when read ahead caching is enabled? Well, with read ahead caching, the RAID controller will try and guess what data you’ll want to see next. It does this by reading more data off the disks than you request. So for example, if your Mac reads the first part of a bigger file, the RAID controller will read the subsequent bytes of that file into cache, assuming that you might need them soon (if you wanted to read the next part of the big file, for example).
This comes in handy in some situations. Like I mentioned earlier, hard drives are good at reading big contiguous blocks of data quickly. So if you are playing a big movie file, for instance, the RAID controller might read the entire movie into cache as soon as the first part of the file is requested. Then as you play the movie, the cache already has the data you need available. The subsequent data is not only available more quickly, but the other disks in your RAID volume are also free to handle other requests.
However, the read ahead results in wasted I/O. A lot of times, you won’t have any need for the subsequent blocks on the disk. For instance, if you are reading a small file that is entirely contained in a single stripe on the volume, there is no point in reading the next stripe. It just puts more load on the physical disks and takes more space in the cache, without any benefit.
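To make the trade-off concrete, here's a toy cache model (purely illustrative; the class, prefetch depth, and counters are made up and bear no relation to how the Pegasus firmware actually works). Sequential reads mostly hit prefetched stripes, while scattered small reads just generate extra speculative disk I/O:

```python
# Toy read-ahead model: when stripe N misses the cache, also prefetch the
# next few stripes. Purely illustrative -- not real controller behavior.
import random

READ_AHEAD_DEPTH = 4  # hypothetical number of extra stripes to prefetch

class ToyReadCache:
    def __init__(self, read_ahead):
        self.read_ahead = read_ahead
        self.cache = set()     # stripe numbers currently in the cache
        self.waits = 0         # requests that had to wait for a disk read
        self.fetched = 0       # total stripes pulled off the disks

    def _fetch(self, stripe_no):
        self.fetched += 1
        self.cache.add(stripe_no)

    def read(self, stripe_no):
        if stripe_no in self.cache:
            return                     # cache hit: no disk I/O, no waiting
        self.waits += 1
        self._fetch(stripe_no)
        if self.read_ahead:
            # Speculatively grab the next few stripes while the heads are nearby.
            for n in range(stripe_no + 1, stripe_no + 1 + READ_AHEAD_DEPTH):
                if n not in self.cache:
                    self._fetch(n)

# Sequential access (a big movie file): read-ahead turns most reads into hits.
seq = ToyReadCache(read_ahead=True)
for stripe in range(100):
    seq.read(stripe)
print(f"sequential: waited on {seq.waits} reads, fetched {seq.fetched} stripes")

# Scattered small reads: the prefetched stripes are never used -- wasted I/O.
rnd = ToyReadCache(read_ahead=True)
for stripe in random.sample(range(100_000), 100):
    rnd.read(stripe)
print(f"random: waited on {rnd.waits} reads, fetched {rnd.fetched} stripes")
```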
Personally, I enable read ahead caching. It's not always a win, but it can greatly speed up access times when working with bigger files (when the speed is needed most).
Write Back Cache
There are two write cache modes: write through and write back. Your choice here can have a dramatic impact on the write speed to your RAID. Here's how each mode works.
Write Through: When writing data to the disk, the cache is not used. Instead, OS X will tell the RAID to write the data to the drive, and the RAID controller waits for the data to be completely written to the drives before letting OS X know the operation was completed successfully.
Write Back: This uses the cache when writing data to the disk. In this case, OS X tells the RAID to write a given block of data to the disk. The RAID controller saves this block quickly in the cache and tells OS X the write was successful immediately. The data is not actually written to the disks until some time later (not too much later, just as soon as the disks can seek to the right location and perform the write operation).
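Here's a toy model of the difference (illustrative only; the class and timings are made up, and a real controller acknowledges writes in firmware, not Python). The only thing it models is when the write gets acknowledged:

```python
import time
from collections import deque

class ToyRaidController:
    """Illustrative only: models *when* a write gets acknowledged, nothing more."""

    def __init__(self, write_back):
        self.write_back = write_back
        self.pending = deque()   # blocks sitting in the write cache

    def _write_to_disks(self, block):
        time.sleep(0.01)         # stand-in for seek + rotational latency

    def write(self, block):
        if self.write_back:
            self.pending.append(block)   # ack immediately; the disks catch up later
        else:
            self._write_to_disks(block)  # write-through: wait for the platters
        return "ok"                      # what the OS sees as "write completed"

    def flush(self):
        # The controller can reorder and merge these for better head movement.
        while self.pending:
            self._write_to_disks(self.pending.popleft())

for mode in (False, True):
    ctrl = ToyRaidController(write_back=mode)
    start = time.time()
    for i in range(20):
        ctrl.write(f"block {i}")
    label = "write-back" if mode else "write-through"
    print(f"{label}: 20 writes acknowledged in {time.time() - start:.3f}s")
    ctrl.flush()  # if power died before this point, write-back's cached blocks would be lost
```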
Enabling the write back cache is “less safe” than write through mode. The safety issue comes into play during a power outage. If the power goes out between the time that the RAID told OS X the data was written, and the time when the data is actually on the disks themselves, data corruption could take place.
More expensive RAID systems, like the Pegasus2, have a battery-backed cache. The benefit here is that if a power outage happens as described above, the battery should power the cache until the power goes back on and the RAID controller can finish writing the cache to disks. This effectively overcomes the drawback of enabling write back caching.
Another potential drawback of enabling write back caching is a performance hit to read speed. The reason is that there is less cache available for reading (because some of it is being used for writes). The hit should be pretty minor, though, and only applies when a lot of write operations are in progress. Otherwise, the amount of data in the write back cache will be minimal.
The big advantage of using a write back cache is speed though. When write back caching is enabled, OS X doesn’t have to wait for data to be written to the disks, and can move on to other operations. This performance benefit can be substantial, and gives the RAID controller more flexibility to optimize the order of write operations to the disks based on the locations of data being written. Personally, I enable write back caching.
Wrap-up
That about covers it. Small desktop RAID systems are a nice way to get a consolidated block of storage with a little redundancy and a lot more performance than just a stack of individual disks can provide. I hope this overview has helped you choose the options that are best for your desktop RAID system. In the end, there is no single right answer for the settings everyone should use. Choose the settings that best fit your workload and your performance/safety requirements.