On 19 January, 2013 - Jim Klimov sent me these 2,0K bytes:

> Hello all,
>
>   While revising my home NAS which had dedup enabled before I gathered
> that its RAM capacity was too puny for the task, I found that there is
> some deduplication among the data bits I uploaded there (makes sense,
> since it holds backups of many of the computers I've worked on - some
> of my homedirs' contents were bound to intersect). However, a lot of
> the blocks are in fact "unique" - they have entries in the DDT with
> count=1 and the dedup bit set in the blkptr_t. In effect they are not
> deduplicated, and with my pouring of backups complete, they are
> unlikely to ever become deduplicated.

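For reference, one can see how much of the DDT consists of such
single-reference entries with zdb or zpool (the pool name below is
just an example):

  # Detailed DDT statistics, including a refcount histogram; the
  # "refcnt 1" bucket is the non-deduplicated entries Jim describes.
  zdb -DD mypool

  # Shorter summary of dedup table size and dedup ratio:
  zpool status -D mypool
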
Another RFE would be 'zfs dedup mypool/somefs', which would basically
go through the dataset and do a one-shot dedup pass. That would be
useful in various scenarios. Possibly it could go through the entire
pool at once, so that blocks are deduplicated across datasets (like
"the real thing").

/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
