I have a simple RAID 1 configuration on wd0 and wd1. I was rebuilding wd1 after it failed during some heavy reads, and during the rebuild wd0 also went into a failure state. After some troubleshooting I rebooted, and now my RAID volume, sd1, is unavailable. Neither wd0 nor wd1 shows any errors, and I have a replacement disk on hand. I have backups of the critical data, but I'd like to recover as much recent data as possible. My plan was to create a disk image of the "/home/public" data and mount it using vnconfig, but I'm having trouble working out the appropriate 'dd' command (my current attempt is sketched below the disklabels).
How can I recover as much data as possible off the failed RAID array? If I recreate the array with "bioctl -c 1 -l /dev/wd0d,/dev/wd1d softraid0", will the existing data be preserved?

root@host# disklabel wd0
# /dev/rwd0c:
type: ESDI
disk: ESDI/IDE disk
label: WDC WD4001FAEX-0
duid: acce36f25df51c8c
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 486401
total sectors: 7814037168
boundstart: 64
boundend: 4294961685
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  c:       7814037168                0  unused
  d:       7814037104               64    RAID

root@host# more /var/backups/disklabel.sd1.backup
# /dev/rsd1c:
type: SCSI
disk: SCSI disk
label: SR RAID 1
duid: 8ec2330eabf7cd26
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 486401
total sectors: 7814036576
boundstart: 64
boundend: 7814036576
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:       2147488704               64  4.2BSD   8192 65536     1 # /home/public/
  c:       7814036576                0  unused
  d:       5666547712       2147488768  4.2BSD   8192 65536     1 # /home/Backups/
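For reference, this is the image-and-mount sequence I've been attempting. The sector arithmetic is my own assumption: the labels above show wd0d is 528 sectors larger than the sd1 volume (7814037104 vs 7814036576), which I take to be softraid's metadata area at the head of the chunk, and sd1a started at offset 64 within the volume, so the /home/public filesystem should begin at sector 528 + 64 = 592 of wd0d. The output path, mount point, and vnd0 device are placeholders:

# Copy the sd1a (/home/public) filesystem out of the raw chunk on wd0.
# skip=592 assumes 528 sectors of softraid metadata plus the 64-sector
# offset of partition 'a' inside the volume; count is sd1a's size.
dd if=/dev/rwd0d of=/scratch/public.img bs=512 skip=592 \
    count=2147488704 conv=noerror,sync

# Attach the image to a vnode disk and mount it read-only; since the
# image is a bare FFS filesystem, the 'c' partition of the vnd device
# should span the whole file.
vnconfig vnd0 /scratch/public.img
mount -o ro /dev/vnd0c /mnt/recovered

Before mounting I was planning to sanity-check the image with "fsck_ffs -n /dev/rvnd0c", on the assumption that a read-only check can't make things worse.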