Dave Johnson wrote:
> "roland" <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
>  >
>  > There is also no filesystem-based approach to compressing or
>  > decompressing a whole filesystem. You can have 499 GB of data on a
>  > 500 GB partition, and if you need more space you would think that
>  > turning on compression on that filesystem would solve your problem —
>  > but compression only affects files that are written afterwards. I wish
>  > there were, alongside zfs set compression=gzip <fs>, commands like
>  > zfs compress <fs> and zfs uncompress <fs>, and it would be nice if we
>  > could get compression information for single files (as with NTFS).
>  
> One could kludge this by setting the desired compression properties on
> the tree and then using a Perl script to walk it: copy each file to a
> temporary file, rename the original to an arbitrary name, rename the
> temporary file to the original's name, restore the original file's
> metadata on the new file, run a checksum sanity check, and finally
> delete the uncompressed original.
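[Editorial note: the rewrite-in-place kludge described above could be sketched in plain shell rather than Perl — something like the following, where the function name and temp-file naming are illustrative, and ordinary cp/mv stand in for the rewrite (on ZFS, the copy is what causes the data to be re-written with the dataset's current compression setting):]

```shell
#!/bin/sh
# Sketch of the rewrite-in-place kludge: re-write every regular file
# under a directory so that a newly set compression property takes
# effect on its blocks.  Names here are illustrative, not a real tool.
set -eu

recompress_tree() {
    dir=$1
    # skip our own temp files in case find sees them mid-walk
    find "$dir" -type f ! -name '*.tmp.*' | while IFS= read -r f; do
        tmp="$f.tmp.$$"
        cp -p "$f" "$tmp"          # -p preserves mode and timestamps
        # checksum sanity check before replacing the original
        if [ "$(cksum < "$f")" = "$(cksum < "$tmp")" ]; then
            mv "$tmp" "$f"         # replace original with the rewritten copy
        else
            rm -f "$tmp"
            echo "checksum mismatch: $f" >&2
            exit 1
        fi
    done
}

# demo on a throwaway tree
demo=$(mktemp -d)
mkdir -p "$demo/sub"
printf 'hello\n' > "$demo/a.txt"
printf 'world\n' > "$demo/sub/b.txt"
recompress_tree "$demo"
```

Note that this still needs double the size of the largest single file free, and it rewrites files one at a time in place — which is exactly the complexity Richard's reply below argues against.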

This solution has been proposed several times on this forum.
It is simpler to use an archiving or copying tool (tar, cpio, pax,
star, cp, rsync, rdist, install, zfs send/receive, et al.) to copy
the tree once, then rename the top directory.  It makes no sense to
me to write a copying tool in Perl or shell.  KISS :-)
  -- richard
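[Editorial note: the copy-then-rename approach Richard describes could look like the sketch below, using cp as the copying tool (tar, cpio, rsync, etc. work the same way). The directory names are illustrative; on ZFS you would set compression=gzip on the dataset first, so the single copy pass writes the new tree compressed:]

```shell
#!/bin/sh
# Sketch of the copy-then-rename approach on a throwaway tree.
# On a real ZFS dataset: zfs set compression=gzip <dataset> first,
# then the copy below is written compressed.  Names are illustrative.
set -eu

top=$(mktemp -d)                     # stands in for the tree to recompress
mkdir -p "$top/data/sub"
printf 'hello\n' > "$top/data/a.txt"
printf 'world\n' > "$top/data/sub/b.txt"

cp -Rp "$top/data" "$top/data.new"   # one pass: new files land compressed
mv "$top/data" "$top/data.old"       # swap the names
mv "$top/data.new" "$top/data"
# after verifying the copy, reclaim the uncompressed space:
rm -rf "$top/data.old"
```

The obvious caveat, per the original question, is that this needs enough free space for a second copy of the tree — on a 500 GB partition holding 499 GB, neither this nor the file-at-a-time kludge avoids needing scratch space somewhere.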
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
