Bob - Have you filed a bug on this issue?
I am not up to speed on this thread, so I cannot
comment on whether or not there is a bug here,
but you seem to have a test case and supporting
data. Filing a bug will get the attention of ZFS
engineering.
Thanks,
/jim
Bob Friesenhahn wrote:
On Mon, 13 Jul 2009, Mike Gerdts wrote:
FWIW, I hit another bug if I turn off primarycache.
http://defect.opensolaris.org/bz/show_bug.cgi?id=10004
This causes really abysmal performance - but equally so for repeat runs!
It is quite fascinating seeing the huge difference in I/O performance
from these various reports. The bug you reported seems likely to be
that without at least a little bit of caching, it is necessary to
re-request the underlying 128K ZFS block several times as the program
does numerous smaller I/Os (cpio uses 10240 bytes?) across it. Totally
disabling data caching seems best reserved for block-oriented
databases looking for a substitute for directio(3C).
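The read amplification described above can be sketched with some back-of-the-envelope arithmetic (my own illustration, not from the thread; the 10240-byte figure is cpio's traditional default buffer size as noted above):

```python
# Sketch of read amplification with primarycache=none: if each small
# application read forces ZFS to re-fetch the whole record from disk,
# the amplification is roughly recordsize / io_size per record.

RECORD_SIZE = 128 * 1024   # default ZFS recordsize (128 KiB)
IO_SIZE = 10240            # cpio's default I/O size

# Number of small reads needed to cover one record (ceiling division).
reads_per_record = -(-RECORD_SIZE // IO_SIZE)

# Bytes pulled from disk per record versus bytes actually delivered.
bytes_fetched = reads_per_record * RECORD_SIZE
amplification = bytes_fetched / RECORD_SIZE

print(reads_per_record)   # 13 small reads per 128 KiB record
print(amplification)      # ~13x more data read than delivered
```

So with no caching at all, each 128 KiB record could plausibly be read from disk around 13 times, which would account for the abysmal performance being equally bad on repeat runs.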
It is easily demonstrated that the problem seen in Solaris 10 (jury
still out on OpenSolaris, although one report has been posted) is due
to some sort of confusion. It is not due to delays caused by purging
old data from the ARC. If these delays were caused by purging data
from the ARC, then 'zpool iostat' would start showing lower read
performance once the ARC becomes full, but that is not the case.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us,
http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss