HDD IOPS limiting factor – seek or rpm?
Any storage experts out there? Can you forward this to any you may know?
An interesting thread developed in the comments on my post about Dell’s MD3000 storage array regarding the theoretical maximum random IOPS a single HDD can deliver. I’m hoping that by bringing it up to the blog level, we can get some smart people who know what they’re talking about (i.e., not me) to weigh in.
I’ve always believed that for a small random write workload, the revolutions per minute (rpm) of the drive was the biggest limiting factor. I think I’ve believed this for a few reasons:
- It seems logical that the biggest “time waster” in a random write is rotational latency, which is a function of rpm, anyway. Even if the drive arm has found the right position on the platter, it likely has to wait some amount of time, up to a full revolution, before it can write.
- rpm is a “fixed” number, and thus easier to calculate with than seek time, which is more variable. So taking the easy way out, one of my favorite hobbies, seemed appropriate.
Using this theory, a 7200rpm drive can do a theoretical maximum of 120 IOPS (7,200 revolutions per minute ÷ 60 = 120 revolutions per second), and a 15K drive can do 250. Note that these are fully-flushed, non-cached writes to the spinning metal, with no buffering or write combining. Over the years, my own tests seem to have validated this theory, and so I’ve just always believed it to be gospel.
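To make that rpm-only math concrete, here’s a quick Python sketch. It assumes, as I do above, that every random write waits one full revolution and that nothing else (seek, caching, queuing) matters; the 10K figure is just thrown in for comparison.

```python
# IOPS if rotation were the only limit: one I/O per full revolution,
# ignoring seek time, caching, and command queuing entirely.
def rpm_limited_iops(rpm):
    revolutions_per_second = rpm / 60
    return revolutions_per_second  # one write per revolution

for rpm in (7200, 10000, 15000):
    print(f"{rpm} rpm -> {rpm_limited_iops(rpm):.0f} IOPS")
# 7200 rpm -> 120 IOPS
# 10000 rpm -> 167 IOPS
# 15000 rpm -> 250 IOPS
```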
Tao Shen, though, commented that my assumption is wrong: seek time is the limiting factor that matters, not rpm, and faster drives can deliver more IOPS than my rpm math allows. He posits that a 15K drive with a 2ms seek time can do 500 IOPS (1 ÷ 0.002s). Now, he may have access to better drives than I do, since I think our fastest are 3.5ms (best-case scenario), not 2ms. That 3.5ms figure is about what the latest-and-greatest Seagate Cheetah 15K.6 drives seem to quote, too.
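For comparison, here’s what the seek-only version of the math looks like, using the 2ms figure from Tao Shen’s comment and the 3.5ms figure from the Cheetah spec sheet (again, just a sketch that ignores everything except seek time):

```python
# IOPS if seek time were the only limit: one I/O per seek,
# ignoring rotational latency, caching, and queuing.
def seek_limited_iops(avg_seek_ms):
    return 1000 / avg_seek_ms

print(f"{seek_limited_iops(2.0):.0f} IOPS at a 2ms seek")    # 500
print(f"{seek_limited_iops(3.5):.0f} IOPS at a 3.5ms seek")  # ~286
```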
So which is it? Am I totally smoking crack? Is he? Or is the truth that seek time and rpm are so intimately tied together that separating them is impossible?
How does one calculate theoretical maximum IOPS?
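For what it’s worth, the textbook back-of-the-envelope formula I’ve seen combines the two: service time is average seek time plus average rotational latency, where the average rotational wait is half a revolution rather than a full one. A rough sketch, plugging in the 15K numbers discussed above:

```python
# Theoretical max random IOPS = 1 / (average seek time + average rotational latency),
# where average rotational latency is the time for half a revolution.
def theoretical_iops(rpm, avg_seek_ms):
    avg_rotational_latency_ms = 0.5 * (60 / rpm) * 1000  # half a revolution, in ms
    return 1000 / (avg_seek_ms + avg_rotational_latency_ms)

print(f"{theoretical_iops(15000, 3.5):.0f} IOPS")  # ~182 for a 15K drive with a 3.5ms seek
print(f"{theoretical_iops(15000, 2.0):.0f} IOPS")  # 250 for a 15K drive with a 2ms seek
```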