On Tue, Mar 2, 2010 at 7:15 AM, Kjetil Torgrim Homme <kjeti...@linpro.no> wrote:
> "valrh...@gmail.com" <valrh...@gmail.com> writes:
>
> > I have been using DVDs for small backups here and there for a decade
> > now, and have a huge pile of several hundred. They have a lot of
> > overlapping content, so I was thinking of feeding the entire stack
> > into some sort of DVD autoloader, which would just read each disk, and
> > write its contents to a ZFS filesystem with dedup enabled. [...] That
> > would allow me to consolidate a few hundred CDs and DVDs onto probably
> > a terabyte or so, which could then be kept conveniently on a hard
> > drive and archived to tape.
>
> it would be inconvenient to make a dedup copy on harddisk or tape, you
> could only do it as a ZFS filesystem or ZFS send stream. it's better to
> use a generic tool like hardlink(1), and just delete files afterwards
> with

Why would it be inconvenient? This is pretty much exactly what ZFS + dedupe is perfect for.

Since dedupe is pool-wide, you could create individual filesystems for each DVD, use just one filesystem with sub-directories, or use one filesystem with snapshots taken after each DVD is copied over top. The data would be dedup'd on write, so you would only store one copy of unique data.

To save it to tape, just "zfs send" it and save the stream file.

ZFS dedupe would also work better than hardlinking files, as it works at the block layer and can dedupe partial files.

-- 
Freddie Cash
fjwc...@gmail.com
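For the archives, the workflow above can be sketched as a command sequence. This is illustrative only: the pool name "tank", the device name, and the paths are assumptions, not from this thread, and dedup needs a recent pool version (v21+, OpenSolaris snv_128 or later).

```shell
# Illustrative sketch -- pool name "tank", device c0t0d0, and all
# paths are assumptions. Requires a dedup-capable pool (version 21+).

zpool create tank c0t0d0
zfs create -o dedup=on tank/dvds       # dedup is a per-dataset property,
                                       # but the dedup table is pool-wide

# One filesystem per DVD (sub-directories or snapshots work too):
for n in 001 002 003; do
    zfs create tank/dvds/dvd$n
    # mount the disc, then copy it in; blocks already seen on
    # earlier discs are dedup'd on write
    cp -r /media/cdrom/. /tank/dvds/dvd$n/
done

zpool list tank                        # DEDUP column shows the ratio

# Archive the tree to a stream file for tape:
zfs snapshot -r tank/dvds@archive
zfs send -R tank/dvds@archive > /backup/dvds.zfs
```

One caveat worth noting: a plain `zfs send` stream re-inflates dedup'd blocks, so the stream file can be much larger than the pool usage suggests; if your build supports it, `zfs send -D` emits a deduplicated stream instead.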
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss