On Tue, 27 Feb 2007, Jeff Davis wrote:


Given your question, are you about to come back with a case where you are not seeing this?


As a follow-up, I tested this on UFS and ZFS. UFS does very poorly: the I/O
rate drops off quickly as you add processes reading the same blocks from the
same file at the same time. I don't know why this is, and it would be helpful
if someone could explain it to me.

UFS readahead isn't MT-aware - it starts trashing its own state when multiple threads read the same blocks. UFS readahead only works with a single thread per file, because the readahead state, i_nextr, is kept per-inode (not per-thread). Multiple concurrent readers trash this state for each other, as there is only one per file.
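
To sketch the problem in a few lines of C (a simplified illustration only, not the actual UFS code - i_nextr is the only real name here, everything else is made up):

/* Simplified sketch of a single per-inode readahead slot. */
#include <stdio.h>

struct inode {
    long i_nextr;           /* next expected read offset, shared by ALL readers */
};

/* Returns 1 if this read looks sequential, i.e. readahead would fire. */
static int
should_readahead(struct inode *ip, long off, long len)
{
    int sequential = (off == ip->i_nextr);
    ip->i_nextr = off + len;        /* the single shared slot gets overwritten */
    return sequential;
}

int
main(void)
{
    struct inode ino = { 0 };
    long a = 0, b = 4096, blk = 8192;   /* reader B offset by half a block */
    int i;

    /* Two readers, each perfectly sequential on its own, interleaved. */
    for (i = 0; i < 4; i++) {
        printf("reader A off %7ld: sequential=%d\n", a,
            should_readahead(&ino, a, blk));
        a += blk;
        printf("reader B off %7ld: sequential=%d\n", b,
            should_readahead(&ino, b, blk));
        b += blk;
    }
    return 0;
}

Run it and you'll see that after reader A's very first read, the interleaved offsets never match the one shared i_nextr again, so neither reader ever looks sequential to the inode.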


ZFS did a lot better. There did not appear to be any drop-off after the first 
process. There was a drop in I/O rate as I kept adding processes, but in that 
case the CPU was at 100%. I haven't had a chance to test this on a bigger box, 
but I suspect ZFS is able to keep the sequential read going at full speed (at 
least if the blocks happen to be written sequentially).

ZFS caches multiple readahead states - see the leading comment in
usr/src/uts/common/fs/zfs/vdev_cache.c in your favourite workspace.
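
Stripped down to a toy sketch, the idea of keeping several readahead streams looks something like this (an illustration of the concept only, not the actual vdev_cache code; all names here are invented):

/* Toy sketch: track several read streams instead of one shared slot. */
#include <stdio.h>

#define NSTREAMS 4

struct stream {
    long next;
    int  valid;
};

struct ra_cache {
    struct stream s[NSTREAMS];
    int clock;                      /* trivial round-robin replacement */
};

/* Returns 1 if the read continues some known stream (readahead fires). */
static int
ra_lookup(struct ra_cache *c, long off, long len)
{
    int i, victim;

    for (i = 0; i < NSTREAMS; i++) {
        if (c->s[i].valid && c->s[i].next == off) {
            c->s[i].next = off + len;   /* extend the matching stream */
            return 1;
        }
    }
    victim = c->clock++ % NSTREAMS;     /* unknown stream: remember it */
    c->s[victim].valid = 1;
    c->s[victim].next = off + len;
    return 0;
}

int
main(void)
{
    struct ra_cache c = { { { 0 } }, 0 };
    long a = 0, b = 4096, blk = 8192;
    int i;

    /* The same interleaved pattern that defeats a single shared i_nextr. */
    for (i = 0; i < 4; i++) {
        printf("A off %7ld: stream hit=%d\n", a, ra_lookup(&c, a, blk));
        a += blk;
        printf("B off %7ld: stream hit=%d\n", b, ra_lookup(&c, b, blk));
        b += blk;
    }
    return 0;
}

With a handful of slots, each interleaved reader keeps matching and extending its own stream, so adding a second sequential reader doesn't destroy readahead the way it does with one shared per-inode marker.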

FrankH.

I did these tests with each process running "dd if=bigfile of=/dev/null", all started at the same time,
and I measured the I/O rate with "zpool iostat mypool 2" and "iostat -Md 2".


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
