Disk Performance Optimisation - Next Stage

Friday 29th April, 2005
Following my RAM drive performance articles here and here, I've been digging deeper into the world of disk I/O optimisation.

The RAM drive approach worked well but, it seems, only on slower disk systems. On a production server with Ultra-320 drives on a caching controller I can't see any significant performance benefit, even at the busiest times.

So, if you've got an older or cheaper disk system then go ahead and use the RAM drive solution. I can see this being ideal for, say, an archive server with terabytes of data running from slower, cheaper IDE drives.

For the rest of you, what else can be done? Well, by pure coincidence I came across this entry on Andrew Pollack's blog relating to NTFS disk compression.

As with the RAM drives, the theory is sound. Disk I/O is much more of a bottleneck than CPU on a hyper-threaded 4-way server, so adding OS-level disk compression shouldn't introduce any real overhead. Assuming 50% compression, writing a 10MB attachment to a user's mail file generates only a 5MB disk write, a huge benefit.

I decided to test this but discovered something slightly disappointing. Based on IBM's server optimisation recommendations, I use an NTFS cluster size of 16KB when formatting my disk partitions. NTFS compression is only available on cluster sizes of 4KB or smaller, so if your servers are optimised this way, the compression route is a non-starter.
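If you're not sure what cluster size a volume was formatted with, you can check from a command prompt; the value to look for is "Bytes Per Cluster", and the drive letter here is just an example:

```shell
REM Report NTFS details for the C: volume.
REM "Bytes Per Cluster" must be 4096 or less for
REM NTFS compression to be available on the volume.
fsutil fsinfo ntfsinfo C:
```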

If you're using the default cluster size, though, you may be in a position to experiment further with this.
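In that case, compression can be switched on per-folder rather than for the whole volume. A sketch of the relevant commands, with the Domino data path shown purely as an assumed example:

```shell
REM Compress an existing folder tree (path is just an example).
REM /c = compress, /s = recurse into subfolders.
compact /c /s:"D:\Lotus\Domino\Data"

REM Or, when formatting a new volume, explicitly set the 4KB
REM cluster size that NTFS compression requires:
format E: /FS:NTFS /A:4096
```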

And don't forget to try this out on any laptop with slow disks. As Chris Linfoot notes on Andrew's blog, it was a huge benefit to him when working locally.
