> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
> 
> The numbers I've heard say the number of iops for a raidzn volume should
> be about the number of iops for the slowest drive in the set. While this might
> sound like a good base point, I tend to disagree. I've been doing some testing
> on raidz2 volumes of various sizes and with various numbers of VDEVs. It
> seems, with iozone, the number of iops is rather high per drive, up to 250
> for these 7k2 drives, even with an 8-drive RAIDz2 VDEV. The testing has not
> utilized a high number of threads (yet), but still, it looks like for most
> systems, RAIDzN performance should be quite decent.

Bear a few things in mind:

iops is not iops.
When you perform lots of writes, ZFS is going to accelerate that by aggregating 
them into sequential disk blocks, and therefore greatly exceed the true random 
iops limit of the drive.  For a single disk, using iozone, I measured around 
550 to 600 write iops on 15krpm drives.  You have to compare apples to apples 
(not the brand); aggregated sequential writes and true random iops are 
different units, like miles and kilometers.
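 
As a sanity check, here's a minimal sketch of what a "true random iops" 
measurement at queue depth 1 looks like.  It assumes Python is available and 
that /tank/testfile is a hypothetical, pre-existing file much larger than RAM, 
so the ARC/page cache doesn't hide the disks; otherwise the numbers are fiction.

import os, random, time

TEST_FILE = "/tank/testfile"   # assumed path on the pool under test
BLOCK     = 4096               # 4 KiB random reads
COUNT     = 2000               # number of I/Os to issue

size = os.path.getsize(TEST_FILE)
fd = os.open(TEST_FILE, os.O_RDONLY)
start = time.time()
for _ in range(COUNT):
    # seek to a random block-aligned offset and read one block, blocking
    # on each read before issuing the next (queue depth 1)
    off = random.randrange(0, size // BLOCK) * BLOCK
    os.lseek(fd, off, os.SEEK_SET)
    os.read(fd, BLOCK)
elapsed = time.time() - start
os.close(fd)
print("random read iops (qd=1): %.0f" % (COUNT / elapsed))

Compare that figure against what iozone reports for its write phases and the 
aggregation effect shows up immediately.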

raidz - seek limited?  or bandwidth limited?
If you're running a single thread that performs a small random read or write 
and blocks until it completes before issuing the next command, then you're 
going to get the worst performance of any one disk in the set.  But if you're 
allowing the system to queue up commands, then while 6 disks are idly sitting 
around waiting for disk 7, those other 6 disks can already begin on the next 
request.  In either case, your performance is going to be limited by either 
the worst case or the average case of a single drive.  And thanks to ZFS write 
aggregation, you'll see approximately 10x higher write iops.  But there's 
nothing you can do to accelerate random reads, aside from command queueing.
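 
To see that queueing effect, the same probe can be run with several reads in 
flight at once.  This is only a sketch; QDEPTH and the file path are 
assumptions, and a real benchmark would use iozone, vdbench or similar.

import os, random, time
from concurrent.futures import ThreadPoolExecutor

TEST_FILE = "/tank/testfile"   # assumed path on the pool under test
BLOCK, COUNT, QDEPTH = 4096, 2000, 16

size = os.path.getsize(TEST_FILE)
fd = os.open(TEST_FILE, os.O_RDONLY)

def one_read(_):
    # positional read, safe to issue from several threads on one fd
    off = random.randrange(0, size // BLOCK) * BLOCK
    os.pread(fd, BLOCK, off)

start = time.time()
with ThreadPoolExecutor(max_workers=QDEPTH) as pool:
    # keep up to QDEPTH reads outstanding so more than one spindle is busy
    list(pool.map(one_read, range(COUNT)))
elapsed = time.time() - start
os.close(fd)
print("random read iops (qd=%d): %.0f" % (QDEPTH, COUNT / elapsed))

With commands queued, the vdev can keep several spindles busy at once, so the 
aggregate iops climbs above the single-disk, queue-depth-1 figure.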

If you're performing large sequential writes or reads, then you're going to get 
stripe-like performance, with many disks all working simultaneously as a team.  
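 
A rough back-of-envelope, with assumed per-drive figures (roughly 100 random 
iops and 120 MB/s streaming for a 7k2 drive), shows why the two workload types 
land so far apart on an 8-disk raidz2:

# back-of-envelope only; the per-drive numbers below are assumptions
data_disks        = 8 - 2   # raidz2: two disks' worth of parity
drive_random_iops = 100     # assumed 7200 rpm random read iops
drive_stream_mbs  = 120     # assumed streaming MB/s per drive

# small random reads: every disk in the vdev seeks for each block,
# so the vdev behaves roughly like one disk
print("worst-case random read iops: ~%d" % drive_random_iops)

# large sequential reads/writes: all data disks stream together
print("best-case sequential MB/s:   ~%d" % (data_disks * drive_stream_mbs))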

Long story short, the performance you measure will vary enormously with the 
type of workload you're generating.  For some workloads, absolutely, you WILL 
see the performance of just a single disk.  For other workloads, you'll scale 
right up with the number of disks, getting close to N times the performance of 
a single disk.
