Hello Abel,

On 31/10/2005, at 10:23 PM, Abel Talaversn Estevez wrote:

> If I make the backup with 'dd if=/dev/wd0c of=/image bs=512', the image
> is a file of about 2 GB, while the hard disk is 40 GB. But with
> 'du -sh /' I can see that all the files add up to only 221 MB.

The file is probably 2 GB because that is the largest a single file can
be on the file system you are saving it to. Note also that dd copies the
raw device, free space and all, so a complete image of a 40 GB disk will
be 40 GB before compression, not the 221 MB of live files that du
reports.

> How could I achieve a smaller image? The last resort would be 'tar',
> but I would prefer an image. Is that possible?

To save file system images from BSDs, I use:

dd bs=64k if=/dev/???? | gzip | split -b 640m - backup.dd.gz.

This gives me 640 MB chunks of a gzip-compressed image of /dev/????.
The trailing dot in the split prefix is deliberate: the chunks come out
as backup.dd.gz.aa, backup.dd.gz.ab, backup.dd.gz.ac and so on,
incrementing alphabetically.
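
Before burning the chunks anywhere, one quick sanity check I would
suggest is gzip's standard -t integrity test over the whole set:

cat backup.dd.gz.* | gunzip -t

gunzip -t decompresses to nowhere and reports any corruption, without
needing space for the restored image.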


To restore I use:

cat backup.dd.gz.* | gunzip | dd bs=64k of=/dev/????
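
If the chunks were burned across several discs, I would copy them all
back into one directory first, for example (the /cdrom mount point and
~/restore directory are only placeholders):

cp /cdrom/backup.dd.gz.* ~/restore/

The shell expands the glob in lexical order, so cat concatenates
.aa, .ab, .ac and so on back in the correct order.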


I choose 640 MB chunks because they seem to be a good compromise: they
fit well on both CD-Rs (one chunk each) and DVD-Rs (seven chunks each)
without too much waste.
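
To spell out the arithmetic (assuming split's 'm' suffix means MiB,
i.e. 640 x 1048576 bytes): one chunk leaves roughly 60 MiB free on a
700 MB (about 703 MiB) CD-R, and 7 x 640 MiB = 4480 MiB sits just under
a single-layer DVD-R's 4.7 GB (about 4482 MiB).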

I choose a block size of 64 KB because it seems to provide the fastest
transfer rates for me. Testing this now: with a 512-byte block size I
get about 3 MB/s, regardless of whether gzip is in the pipeline. With a
64 KB block size I get rates ranging from 10 to 36 MB/s, depending on
how compressible the data on the disk is. 36 MB/s seems to be the
fastest rate this disk can sustain.
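
A quick way to repeat that comparison is to read a fixed amount of the
disk to /dev/null and let dd report the transfer rate when it finishes
(each command below reads 1 GiB; the device name is a placeholder):

dd bs=512 if=/dev/???? of=/dev/null count=2097152
dd bs=64k if=/dev/???? of=/dev/null count=16384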

I fill Unix file systems with a big file full of zeroes and then delete
that file, so that gzip can also compress well the free areas of the
file system which previously held old, less-compressible data. For
Windows file systems I use Eraser to do the same:

http://www.tolvanen.com/eraser/
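
A minimal sketch of that zero-fill step on Unix (the /mnt mount point
and file name are placeholders; dd is expected to stop with a "no space
left on device" error once the file system is full):

dd if=/dev/zero of=/mnt/zerofile bs=64k
rm /mnt/zerofile
sync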


Shane J Pearson
