Actually, writes being faster than reads is typical for a Copy on Write FS (or
Write Anywhere). I usually describe it like this.

CoW in ZFS works like when you come home after a long day and you just want to
go to bed. You take off one piece of clothing after another and drop it on the
floor just where you are - this is very fast (and it actually is copy on write
with a block allocation policy of "closest").

Then the next day when you have to get to work (in this example assuming that
you wear the same underwear again - remember, not supported! :) - you have to
pick up all the clothes one after another and you have to move all across the
room to get dressed. This takes time, and it is the same for reads.

So in CoW it is usual that writes are faster than reads (especially for
RaidZ/RaidZ2, where each vdev can be viewed as one disk). For 100% synchronous
writes (wcd=true), you should see the same write and read performance.
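
To make the clothes analogy concrete, here is a minimal Python sketch (my own
illustration, not ZFS code, with made-up block counts) of a "closest free
block" CoW allocator: rewrites always land at the current allocation cursor,
so the writer barely moves, but the logical blocks of the file end up
scattered and a later sequential read has to travel all over the disk.

import random

DISK_BLOCKS = 100_000   # hypothetical device size in blocks
FILE_BLOCKS = 1_000     # logical blocks of one file

# Initially the file is laid out contiguously at the start of the disk.
location = list(range(FILE_BLOCKS))
cursor = FILE_BLOCKS    # "closest" policy: next free block on the device

# CoW rewrites: each update writes the block to a *new* place at the cursor.
write_seek, prev = 0, cursor
for _ in range(10_000):
    lb = random.randrange(FILE_BLOCKS)
    location[lb] = cursor
    write_seek += abs(cursor - prev)
    prev, cursor = cursor, (cursor + 1) % DISK_BLOCKS

# Later sequential read of the file: logical order, but scattered locations.
read_seek = sum(abs(location[i + 1] - location[i]) for i in range(FILE_BLOCKS - 1))

print("avg seek per write:", write_seek / 10_000)            # ~1 block - drop it where you stand
print("avg seek per read :", read_seek / (FILE_BLOCKS - 1))  # thousands of blocks - walk across the room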

So for your setup I assume: 

4 x 2 disk mirror with Nearline SATA:

Write (sync, wcd=true) = 4 x 80 IOPS = 320 IOPS x 8 KB recordsize = 2.6 MB/sec.
If you see more, that's ZFS optimizations already. If you see less - make sure
you have proper partition alignment (otherwise 1 write can become 2).

Read = 8 x 100 IOPS (some more IOPS because of head optimization and elevator)
= 800 IOPS x 8 KB = 6.4 MB/sec from disk. Same problem with partition alignment.

For 128k block size?

Write: 320 x 128k = 41 MB/sec
Read: 800 x 128k = 102 MB/sec
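
The same back-of-envelope math as a short Python sketch, in case you want to
plug in your own numbers (the 80/100 IOPS per disk and the 4 x 2 mirror layout
are the assumptions from above, not measured values):

def zfs_throughput_estimate(vdevs, disks_per_vdev, write_iops_per_disk,
                            read_iops_per_disk, recordsize_kb):
    """Rough random-I/O estimate: each mirror vdev writes like one disk,
    while reads can be served by every disk in the pool."""
    write_iops = vdevs * write_iops_per_disk
    read_iops = vdevs * disks_per_vdev * read_iops_per_disk
    return (write_iops * recordsize_kb / 1000.0,   # MB/sec written
            read_iops * recordsize_kb / 1000.0)    # MB/sec read

# 4 x 2-disk mirrors of Nearline SATA, assumed 80 write / 100 read IOPS per disk
for rs in (8, 128):
    w, r = zfs_throughput_estimate(4, 2, 80, 100, rs)
    print(f"recordsize {rs:>3}k: ~{w:.1f} MB/sec write, ~{r:.1f} MB/sec read")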

ZFS needs caching (L2ARC, ZIL etc.), otherwise it is slow - just as any other
disk system for random I/O. For sequential I/O ZFS is not optimal because of
CoW. Also with iSCSI you have more fragmentation because of the small block
updates.

So how to tune?

1) Use ZIL (this will make your writes more sequential, so it also optimizes
the reads)
2) Use L2ARC
3) Make sure partition alignment is OK (see the small alignment check after
this list)
4) Try to disable read-ahead on the client (otherwise you cause even more
random I/O)
5) Use a larger block size (128k) to have some kind of implicit read-ahead
(except for DB workloads)
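
For point 3, a minimal Python sketch (my own illustration, the start sectors
and block size are just example values) of the arithmetic behind "1 write can
become 2": if the partition start is not a multiple of the physical/ZFS block
size, every logical write straddles two physical blocks.

def writes_per_block(partition_start_sector, block_size_kb, sector_size=512):
    """How many physical blocks a single aligned logical write touches."""
    start_bytes = partition_start_sector * sector_size
    block_bytes = block_size_kb * 1024
    # Misaligned partition start: every block crosses a physical block
    # boundary, so one logical write turns into two physical I/Os.
    return 1 if start_bytes % block_bytes == 0 else 2

print(writes_per_block(34, 4))    # start sector 34  -> 2 (misaligned)
print(writes_per_block(2048, 4))  # 1 MB aligned start -> 1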

Regards, 
Robert Heinzmann