markm wrote:
> Because the vdev tree is calling them 'disk', ZFS is attempting to open
> them using disk I/O instead of file I/O.
This was correct, thank you. lofiadm was useful to loopback-mount
the image files and provide disk I/O.
> ZFS has much more opportunity to recover from device failure.
> Just for fun, try an absolute path.
Thank you again for the suggestions. I was able to make this work
by using lofiadm to mount the images. Then, be sure to give zpool
import the -d flag so it scans /dev/lofi:
# lofiadm -a /jbod1-diskbackup/restore/deep_Lun0.dd
/dev/lofi/1
# lofiadm -a /jbod1-diskbackup/res
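
Once both images are attached, the import step would look roughly
like this (the pool name below is a placeholder, not one mentioned
in the thread):

# zpool import -d /dev/lofi mypool

Running # zpool import -d /dev/lofi with no pool name first simply
lists any importable pools found on the loopback devices, which is a
safe way to confirm the labels are readable before importing.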
On Wed, Aug 24, 2011 at 1:23 PM, Cindy Swearingen wrote:
> I wonder if you need to make links from the original device
> name to the new device names.
>
> You can see from the zdb -l output below that the device path
> is pointing to the original device names (really long device
> names).
Thank you.
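
A quick way to see where each label points is to run zdb -l directly
against a loopback device; a minimal sketch, using the /dev/lofi/1
device created with lofiadm above:

# zdb -l /dev/lofi/1

Each of the four labels printed includes a path: field, which is the
device path the pool recorded for that vdev.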
Hi Kelsey,
I haven't had to do this myself, so someone who has done this
before might have a better suggestion.
I wonder if you need to make links from the original device
name to the new device names.
You can see from the zdb -l output below that the device path
is pointing to the original device names (really long device
names).
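
A minimal sketch of that linking idea, assuming the label records a
path under /dev/dsk (ORIGINAL_DEVICE_NAME is a placeholder for
whatever path the zdb -l output actually reports):

# ln -s /dev/lofi/1 /dev/dsk/ORIGINAL_DEVICE_NAME

The symlink lets ZFS open the loopback device under the path the
pool already has recorded; the alternative is to point zpool import
at the /dev/lofi directory with -d.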
I am in a rather unusual situation. I've inherited a zpool composed of
two vdevs. One vdev is roughly 9TB on one RAID 5 array, and the
other vdev is roughly 2TB on a different RAID 5 array. The 9TB
array crashed and was sent to a data recovery firm, and they've given
me a dd image. I've also