Storage Configuration: Know Your Workloads for IOPS or Throughput
Posted by Philip Elder on 09 June 2014 11:30 AM
Here we have a practical example of how devastating a poorly configured disk subsystem can be.
The screenshot above was one of the first Iometer test runs on our Storage Spaces setup. That 45K IOPS result was running on 17, yes seventeen, 100GB HGST SSD400S.a SAS SSDs.
Obviously the configuration was just whacked.
Imagine the surprise and disappointment of supplying a $100K SAN, putting it into production, and then hearing from the client that things were nowhere near as fast as expected.
What we are discovering is that tuning a storage subsystem is an art.
There are many factors to keep in mind, from the types of workloads that will be running on the disk subsystem right through to the hardware driving it all.
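A good starting point is remembering that IOPS and throughput are two views of the same workload, linked by the I/O block size: small random I/O (databases, VMs) is IOPS-bound, while large sequential I/O (backups, media) is throughput-bound. Here is a minimal sketch of that arithmetic; the block sizes and IOPS figures are illustrative, not from our test runs:

```python
def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Throughput in MB/s implied by an IOPS figure at a given I/O block size."""
    return iops * block_size_kb / 1024

# Small random I/O: lots of IOPS, but modest throughput.
# 45,000 IOPS at 4KB blocks works out to roughly 176 MB/s.
print(throughput_mb_s(45_000, 4))

# Large sequential I/O: far fewer IOPS, but more data moved.
# 1,000 IOPS at 256KB blocks is 250 MB/s.
print(throughput_mb_s(1_000, 256))
```

The same hardware can look brilliant on one metric and terrible on the other, which is why the Iometer access specification needs to match the real workload before any number means anything.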
After running a large number of tests using Iometer, and with some significant input from fellow MVP Tim Barrett, we are beginning to gain some insight into how to configure things for the given workload.
This is a snip taken of a Simple Storage Space utilizing _just two_ 100GB HGST SSD400S.a SAS SSDs (same disks as above):
Note how we are now running at 56K IOPS.
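A quick per-disk sanity check makes the misconfiguration in the first run obvious. Dividing aggregate IOPS by the number of member disks (using the two results above) shows the 17-disk space was delivering a tiny fraction of what each SSD can do:

```python
def iops_per_disk(total_iops: float, disk_count: int) -> float:
    """Aggregate IOPS spread evenly across the member disks."""
    return total_iops / disk_count

# First run: 45K IOPS across 17 SSDs -- roughly 2,600 IOPS per disk.
print(iops_per_disk(45_000, 17))

# Second run: 56K IOPS across just 2 of the same SSDs -- 28,000 IOPS per disk.
print(iops_per_disk(56_000, 2))
```

That is better than a tenfold difference per disk on identical hardware, which is the whole point: the configuration, not the disks, was the bottleneck.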
Microsoft has an awesome, in-depth document on setting things up for Storage Spaces performance here:
We suggest firing the above article into OneNote for later reference as it will prove invaluable in figuring out the basics for configuring a Storage Spaces disk subsystem. It can actually provide a good frame of reference for storage performance in general.
Our goal for the Proof-of-Concept testing we are doing was around 1M IOPS.
Given what we are seeing so far, we will hopefully end up running at about 650K to 750K IOPS! That’s not too shabby for our “commodity hardware” setup.
Chef de partie in the SMBKitchen ASP Project