Hi.

Just a wild guess ...
Have you tried rsync?
(I'm not sure how well rsync copes with _hard_ links, though I believe
the -H flag is meant to preserve them.)
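A rough sketch, assuming the pool lives at /raid0/pool and the new disk
is mounted at /bigdisk (both paths are my guesses):

    # -a archive mode; -H preserve hard links (can be memory-hungry
    # with this many links); paths are illustrative
    rsync -aH /raid0/pool/ /bigdisk/pool/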

HTH,

Andreas.


On 12/15/06, Matthias Bertschy <[EMAIL PROTECTED]> wrote:
OpenBSD 3.7 - i386
Pentium 4 3GHz - 1GB RAM - 2GB swap

Hello list,

For the past 3 weeks, I have been working on a difficult problem: moving
a backuppc (http://backuppc.sourceforge.net/) pool from a RAID0 to a big
disk, in order to free the RAID0 before rebuilding a RAID5.

The RAID0 has one partition; its size is 2112984700 blocks (512-byte
blocks), roughly 1008GB, which is close to the maximum allowed by ffs.
The big disk is 300GB.

I need to move 96GB of data which, due to backuppc's design, is full of
hard links!
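
For scale, this is how I count the multiply-linked files (path is
illustrative):

    # count the files carrying more than one hard link
    find /raid0/pool -type f -links +1 | wc -l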

So far, I have tried to use (rough invocations are sketched after the
list):
    1) dd: impossible because the partitions cannot be the same size
(and the RAID5 won't be the same size as the RAID0)
    2) pax -rw: after transferring almost 70GB, it bails out with a
"Segmentation fault"
    3) tar to archive: after something like 60GB, it complains with some
"file name too long" errors
    4) gtar to archive (from package gtar-1.15.1p0-static.tgz): ends up
with a "gtar: memory exhausted" error
    5) dump to file: successful, but
    5') restore from file: stops before it even starts, with a "no
memory for entry table" error (there is still plenty of unused memory
and swap, and no ulimit in effect)
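
For reference, the invocations were roughly of this shape (paths and
device names here are illustrative, not the exact ones I used):

    # 2) pax in read/write mode; -pe preserves all attributes,
    # including hard links among the copied files
    cd /raid0/pool && pax -rw -pe . /bigdisk/pool

    # 5/5') dump the source filesystem to a file, then restore it
    # from the root of the target filesystem
    dump -0af /bigdisk/pool.dump /dev/raid0a
    cd /bigdisk/pool && restore -rf /bigdisk/pool.dump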

Any help is appreciated because I really don't know what to do next.

Matthias Bertschy
Echo Technologies SA

--
Hobbes : Shouldn't we read the instructions?
Calvin : Do I look like a sissy?
