On Sat, 4 Jul 2009, Phil Harman wrote:

ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC instead of the Solaris page cache, while mmap() uses the latter. So if anyone maps a file, ZFS has to keep the two caches in sync.

cp(1) uses mmap(2). When you use cp(1), it brings pages of the files it copies into the Solaris page cache. As long as those pages remain there, ZFS will be slow for those files, even if you subsequently use read(2) to access them.

This is very interesting information and certainly can explain a lot. My application has a choice of using mmap or traditional I/O. I often use mmap. From what you are saying, using mmap is poison to subsequent performance.
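
For what it's worth, the choice in my code boils down to something like the following (just a sketch with error handling stripped out, not the actual GraphicsMagick code; the function names are made up):

  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <fcntl.h>
  #include <unistd.h>

  /* mmap path: pages are faulted in through the Solaris page cache,
   * which ZFS then has to keep coherent with the ARC. */
  static void consume_mmap(const char *path)
  {
      int fd = open(path, O_RDONLY);
      struct stat st;
      fstat(fd, &st);
      char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
      volatile char sum = 0;
      for (off_t i = 0; i < st.st_size; i++)
          sum += p[i];              /* touch every page */
      munmap(p, st.st_size);
      close(fd);
  }

  /* read(2) path: data is copied into a private buffer; on ZFS this
   * is served from the ARC without involving the page cache. */
  static void consume_read(const char *path)
  {
      int fd = open(path, O_RDONLY);
      char buf[65536];
      while (read(fd, buf, sizeof(buf)) > 0)
          ;
      close(fd);
  }

If what you say is right, then once a file has been through the first path, even the second path pays for it afterwards.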

On June 29th I tested my application (which was set to use mmap) shortly after a reboot and got this overall initial runtime:

real  2:24:25.675
user  4:38:57.837
sys     14:30.823

By June 30th (with no intermediate reboot) the overall runtime had increased to

real  3:08:58.941
user  4:38:38.192
sys     15:44.197

which seems like quite a large change.

If you reboot, your cpio(1) tests will probably go fast again, until someone uses mmap(2) on the files again. I think tar(1) uses read(2), but from my ...

I will test.
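
If I remember the options right, a quick way to check is truss's syscall summary, e.g. something like 'truss -c cp somefile /dev/null' and 'truss -c tar cf /dev/null somedir', and then seeing whether mmap/mmap64 or read dominates the counts (those command lines are just illustrative).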

The other thing that slows you down is that ZFS only flushes to disk every 5 seconds if there are no synchronous writes. It would be interesting to see iostat -xnz 1 while you are running your tests. You may find the disks are writing very efficiently for one second in every five.

Actually, I found that the disks were writing flat out for five seconds at a time, which stalled all other pool I/O (and dependent CPU work) for at least three seconds (see earlier discussion). So at the moment I have zfs_write_limit_override set to 2684354560 so that the write cycle is more on the order of one second in every five.
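
For the archives, in case someone wants to reproduce this: as I understand it, the persistent way to apply that tunable is a line in /etc/system (reboot required), along the lines of

  set zfs:zfs_write_limit_override = 2684354560

(it can also be poked on a live system with mdb -kw, but I won't vouch for the exact incantation here).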

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/