Dedup works at a pool-wide level, with the ability to exclude at the dataset level, or the converse: if dedup is set to off on the top-level dataset, you can then set it to on for lower-level datasets. In other words, you can include and exclude depending on each dataset's contents, since dedup is an inheritable dataset property like any other.
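For instance, to do the opposite of your example, i.e. dedup everything except one dataset (just a sketch; tank/nodedup is a made-up dataset name):

bash-3.2$ zfs set dedup=on tank
bash-3.2$ zfs set dedup=off tank/nodedup
bash-3.2$ zfs get -r dedup tank
NAME          PROPERTY  VALUE  SOURCE
tank          dedup     on     local
tank/nodedup  dedup     off    local

New writes to tank and any children that inherit from it get deduped, while writes to tank/nodedup don't.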

So yes, largefile will get deduped in your example below: each of the three destination datasets has dedup=on, and the dedup table is shared across the whole pool, so the copies dedup against each other.
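You can check the effect afterwards via the pool's dedupratio property (illustrative output, assuming the pool is called tank as in your example):

bash-3.2$ zpool get dedupratio tank
NAME  PROPERTY    VALUE  SOURCE
tank  dedupratio  3.00x  -

With three identical 100m copies you'd expect something close to 3.00x.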

Enda

Breandan Dezendorf wrote:
Does dedup work at the pool level or the filesystem/dataset level?
For example, if I were to do this:

bash-3.2$ mkfile 100m /tmp/largefile
bash-3.2$ zfs set dedup=off tank
bash-3.2$ zfs set dedup=on tank/dir1
bash-3.2$ zfs set dedup=on tank/dir2
bash-3.2$ zfs set dedup=on tank/dir3
bash-3.2$ cp /tmp/largefile /tank/dir1/largefile
bash-3.2$ cp /tmp/largefile /tank/dir2/largefile
bash-3.2$ cp /tmp/largefile /tank/dir3/largefile

Would largefile get dedup'ed?  Would I need to set dedup on for the
pool, and then disable where it isn't wanted/needed?

Also, will we need to move our data around (send/recv or whatever your
preferred method is) to take advantage of dedup?  I was hoping the
blockpointer rewrite code would allow an admin to simply turn on dedup
and let ZFS process the pool, eliminating excess redundancy as it
went.


--
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
