OK then, I guess my next question would be what's the best way to
"undedupe" the data I have? 

Would it work for me to zfs send/receive on the same pool (with dedup
off on the receiving side), deleting the old datasets once they have
been 'copied'? That would be the simplest approach, but I think I
remember reading somewhere that the DDT never shrinks, which would make
it pointless.
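
If it does work, I imagine the procedure would be something like this
(a rough sketch; the pool/dataset names tank and tank/data are just
placeholders, and you'd need enough free space for the rehydrated
copy):

    zfs set dedup=off tank            # new writes are no longer deduped
    zfs snapshot tank/data@undedup
    zfs send tank/data@undedup | zfs recv tank/data.new
                                      # a plain send carries no properties,
                                      # so tank/data.new inherits dedup=off
    # after verifying the copy:
    zfs destroy -r tank/data
    zfs rename tank/data.new tank/data

As far as I understand, a recursive send with -R would replicate the
dedup=on property onto the copy, so plain per-dataset sends seem safer
here.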


Otherwise, I would be left with creating another pool or destroying
and restoring from a backup, neither of which is ideal. 
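
Either way, I suppose I could watch whether the DDT actually shrinks as
the old copies are deleted, e.g. with something like:

    zdb -DD tank    # prints DDT statistics (entry counts and a histogram)

(zdb -D gives a shorter summary; tank is again a placeholder pool name.)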

On 2013-02-04 15:37, Jim Klimov wrote:

> On 2013-02-04 15:52, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
>>> I noticed that sometimes I had terrible rates with < 10MB/sec. Then
>>> later it rose up to < 70MB/sec.
>> Are you talking about scrub rates for the complete scrub? Because if
>> you sit there and watch it, from minute to minute, it's normal for it
>> to bounce really low for a long time, and then really high for a long
>> time, etc. The only measurement that has any real meaning is time to
>> completion.
> To paraphrase, random I/Os on HDDs are slow - these are multiple reads
> of small blocks dispersed across the disk, be they small files, copies
> of metadata, or seeks into the DDT. Fast reads are large, sequentially
> stored files, i.e. when a scrub hits an ISO image or a movie on your
> disk, or a series of smaller files from the same directory that
> happened to be created and saved in the same TXG or so, and whose
> userdata was queued to disk as a large sequential blob in a
> "coalesced" write operation. HTH, //Jim