Hi all, I have a test system with a large number of filesystems that we take snapshots of and do send/recvs with.
On our test machine, we have 1800+ filesystems and about 5,000 snapshots. The system has 48GB of RAM and 8 cores (x86). The pool is made up of 2 regular 1TB drives in a mirror, with a 320GB FusionIO flash card acting as a ZIL and read cache.

We've noticed that on systems with just a handful of filesystems, a recursive ZFS send is quite quick, but on our 1800+ filesystem box it's horribly slow. For example:

root@testbox:~# zfs send -R chunky/0@async-2011-02-28-15:11:20 | pv -i 1 > /dev/null
2.51GB 0:04:57 [47.4kB/s] [ <=> ]
^C

The other odd thing I've noticed is that during the 'zfs send' to /dev/null, zpool iostat shows we're actually *writing* to the zpool at 4MB-8MB/s, but reading almost nothing. How can this be the case?

So I'm left with 2 questions:

1.) Does ZFS send get immensely slow once we have thousands of filesystems?
2.) Why do we see 4MB-8MB/s of *writes* to the pool when we do a 'zfs send' to /dev/null?

-Moazam
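P.S. In case anyone wants to poke at this, here is a rough sketch of how I'd try to separate the snapshot-enumeration cost from the data stream itself. It reuses the same pool and snapshot names as above and isn't something we've run in exactly this form:

# Terminal 1: watch per-device I/O to see whether the writes land on the mirror or the log device
root@testbox:~# zpool iostat -v chunky 1

# Terminal 2: time snapshot enumeration on its own, with no data being moved
root@testbox:~# time zfs list -t snapshot -r chunky/0 > /dev/null

# Terminal 2: then a non-recursive send of the same snapshot for comparison against the -R case
root@testbox:~# zfs send chunky/0@async-2011-02-28-15:11:20 | pv -i 1 > /dev/null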