Well, as I wrote in other threads: I have a pool named "pool" on physical 
disks, and a compressed volume in that pool which I loopback-mount over iSCSI 
to make another pool named "dcpool".
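
For context, the layering looks roughly like this (the volume name, the size 
and the old shareiscsi-style export below are only illustrative - the real 
pools were built earlier and the exact commands may differ on your build):

# zfs create -o compression=on -V 2T pool/dcvol     (compressed zvol inside "pool")
# zfs set shareiscsi=on pool/dcvol                  (export the zvol over iSCSI)
# iscsiadm add discovery-address 127.0.0.1          (point the local initiator back at this host)
# iscsiadm modify discovery --sendtargets enable
# zpool create dcpool c2t<target>d0                 (build "dcpool" on whatever device name the LUN gets)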

When files in "dcpool" are deleted, current ZFS does not zero out the freed 
blocks, so they stay allocated in the underlying physical "pool". Right now 
I'm cleaning up the parent pool essentially like this:
# dd if=/dev/zero of=/dcpool/nodedup/bigzerofile

This file lives in a non-deduped dataset, so from dcpool's point of view it is 
just a huge, growing file full of zeroes - and its blocks overwrite the garbage 
left over from older, deleted files that "dcpool" no longer references. For 
"pool", however, each of these writes is a compressed all-zero block that needs 
no allocation, so "pool" releases the corresponding volume block and the 
metadata block that referenced it.
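
Spelled out a bit more, the whole clean-up sequence is something like this 
(the dataset name and dd block size are just what I'd use - adjust to taste; 
compression is kept off on the scratch dataset so the zeroes really get written 
into the zvol instead of being collapsed into holes by dcpool itself):

# zpool list pool                                   (note USED/AVAIL before)
# zfs create -o dedup=off -o compression=off dcpool/nodedup
# dd if=/dev/zero of=/dcpool/nodedup/bigzerofile bs=1024k
        (let it run until dcpool fills up, or stop it earlier)
# rm /dcpool/nodedup/bigzerofile
# zpool list pool                                   (USED should have dropped in the parent pool)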

This has already released over half a terabyte in my physical pool (compressed 
blocks filled with zeroes are a special case for ZFS and need no allocation and 
little or no reference metadata) ;)
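
You can convince yourself of that special-casing with a quick test in any 
compressed dataset (the paths here are made up):

# zfs create -o compression=on pool/ztest
# dd if=/dev/zero of=/pool/ztest/zeros bs=1024k count=100
# du -k /pool/ztest/zeros       (should report next to nothing allocated)
# ls -l /pool/ztest/zeros       (while the logical size is still ~100MB)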

However, since the volume data and its metadata consist of millions of 4KB 
blocks, I guess fragmentation is quite high, maybe even interleaved one-to-one. 
One way or another, this "dcpool" has never seen I/O faster than, say, 15MB/s, 
and usually lingers in the 1-5MB/s range, while I can easily get 30-50MB/s in 
other datasets of "pool" (with dynamic block sizes and longer contiguous 
stretches of data).
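
Those figures come from watching both pools side by side, roughly like this:

# zpool iostat dcpool 10        (logical throughput as dcpool sees it)
# zpool iostat pool 10          (what the physical disks are actually doing)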

Writes were relatively quick for the first virtual terabyte or so, but the last 
100GB has been going for several days now, at several megabytes per minute 
according to "dcpool" iostat. There are, however, several MB/s of I/O on the 
hardware disks backing this deletion and clean-up (as in my examples in the 
previous post)...

As for disks with different fill ratios - that is a commonly discussed 
performance problem. It seems to boil down to this: the free space on every 
disk (actually on every top-level vdev) is taken into account when 
round-robining writes across the stripes. Disks that have been in use longer 
may have free space that is both heavily fragmented and scarce, yet ZFS still 
tries to spread the bits around evenly. And while it is waiting on some disks, 
others may end up blocked as well. Something like that...
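
It's easy to see how uneven the top-level vdevs are with something like:

# zpool iostat -v pool          (the capacity alloc/free columns are shown per top-level vdev)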

People on this forum have seen and reported that adding a 100MB file tanked 
their multi-terabyte pool's performance, and that removing the file brought it 
back up.

I don't want to garble other writers' findings - better search the last 5-10 
pages of forum post headings yourself. It's within the last hundred threads or 
so, I think ;)