On October 14, Ulrich Windl wrote:
> I was investigating the status of building a RAID1 over iSCSI-
> connected devices managed by multipathd (SLES10 SP3 Release Notes said
> it won't work). Here are some of my findings:
>
> 1) The multipath devices cannot be opened exclusively by mdadm:
> # mdadm --verbose --create /dev/md0 --raid-devices=2 --level=raid1 \
>     --bitmap=internal \
>     /dev/disk/by-id/scsi-3600508b4001085dd0001100002260000 \
>     /dev/disk/by-id/scsi-3600508b4001085dd0001100002290000
> mdadm: Cannot open /dev/disk/by-id/scsi-3600508b4001085dd0001100002260000: Device or resource busy
> mdadm: Cannot open /dev/disk/by-id/scsi-3600508b4001085dd0001100002290000: Device or resource busy
> mdadm: create aborted
>
> open("/dev/disk/by-id/scsi-3600508b4001085dd0001100002260000", O_RDONLY|O_EXCL) = -1 EBUSY (Device or resource busy)
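[The EBUSY on the O_EXCL open is what you would expect once multipathd has claimed the paths: device-mapper keeps the node open. One way to see this is to check the map's open count with dmsetup; a minimal sketch, assuming the map name matches the WWID from the post (not verified here):]

```shell
# Sketch: if device-mapper holds the node open, its "Open count" is non-zero,
# and any O_EXCL open (such as mdadm's) fails with EBUSY.
# The map name below is an assumption taken from the WWID in the post.
dmsetup info 3600508b4001085dd0001100002260000 2>/dev/null \
    | grep -i 'open count' \
    || echo 'dmsetup not available or no such map on this host'
```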
>
> 2) The device-mapper nodes do not seem to be SCSI devices:
> # mdadm --verbose --create /dev/md0 --raid-devices=2 --level=raid1 \
>     --bitmap=internal /dev/dm-18 /dev/dm-19
> mdadm: /dev/dm-18 is too small: 0K
> mdadm: create aborted
> rkdvmso1:~ # sdparm -a /dev/dm-18
> unable to access /dev/dm-18, ATA disk?
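[sdparm needs SCSI pass-through, which a dm node may not provide, so its failure here is not surprising. The "too small: 0K" report can be cross-checked against the size the kernel itself reports for the node; a sketch (the /dev/dm-18 node is from the post and will not exist on other hosts):]

```shell
# Sketch: read the device size from the kernel directly, bypassing SCSI
# commands entirely.  If this also reports 0, the dm node really is empty
# from the block layer's point of view, not just opaque to sdparm.
blockdev --getsize64 /dev/dm-18 2>/dev/null \
    || echo 'dm-18 not present on this host'
```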
>
> 3) The iSCSI devices are SCSI devices, but they are busy:
> # sdparm -a /dev/sdax
> /dev/sdax: HP HSV200 5000
> Read write error recovery mode page:
> AWRE 1 [cha: n, def: 1]
> ARRE 1 [cha: n, def: 1]
> TB 1 [cha: n, def: 1]
> RC 0 [cha: n, def: 0]
> [...]
> # mdadm --verbose --create /dev/md0 --raid-devices=2 --level=raid1 \
>     --bitmap=internal /dev/sdax /dev/sdbo
> mdadm: Cannot open /dev/sdax: Device or resource busy
> mdadm: Cannot open /dev/sdbo: Device or resource busy
> mdadm: create aborted
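[The underlying sdX paths are busy for the same reason as in 1): the multipath map sits on top of them. sysfs records this relationship, so a quick check is possible; a sketch assuming a standard sysfs layout (the device names are from the post, not verified here):]

```shell
# Sketch: a dm map that consumes an sdX path appears under that path's
# "holders" directory, which is what makes the path busy for mdadm.
for dev in sdax sdbo; do
    printf '%s -> ' "$dev"
    ls "/sys/block/$dev/holders/" 2>/dev/null \
        || echo '(no such device on this host)'
done
```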
>
> I'm not a specialist on mdadm, so if I did something wrong, please
> tell me.
Hi,
I have been looking at a related but not identical problem. I'm trying
to use md to replicate a local disk to a remote server over iSCSI with
RAID1 mirroring. But I noticed that iSCSI commands fail if a network
outage lasts longer than the iSCSI command timeout. I also noticed that
the block device created by open-iscsi is marked as non-removable
(RMB=0). Why does open-iscsi behave this way, and why does it not
report a disk-removal event when the network connection fails?
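For the timeout part, the open-iscsi knob that controls how long queued I/O is held after a session drops is replacement_timeout; a hedged excerpt (the value below is illustrative, not taken from this thread):

```
# /etc/iscsi/iscsid.conf (excerpt; value is illustrative)
# Seconds to wait for a dropped session to re-establish before failing
# queued I/O up to the block layer.  A long value hides the outage from
# md; a short value lets md kick the failed mirror leg sooner.
node.session.timeo.replacement_timeout = 15
```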
# mdadm --query --detail /dev/md4 | tail -n 3
    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       64        1      active sync   /dev/sde
# sg_inq /dev/disk/by-path/ip-192.168.3.114\:3260-iscsi-iqn\:tgt-lun-0 | grep RMB
PQual=0 Device_type=0 RMB=0 version=0x05 [SPC-3]
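The RMB bit can be pulled out of that sg_inq line mechanically; a small sketch using the exact line from above (RMB=0 means the target reports fixed, non-removable media, which is consistent with no removal event being generated):

```shell
# Parse the RMB (removable media bit) from an sg_inq INQUIRY summary line;
# the sample line is copied verbatim from the output above.
inq_line='PQual=0 Device_type=0 RMB=0 version=0x05 [SPC-3]'
rmb=$(printf '%s\n' "$inq_line" | sed -n 's/.*RMB=\([01]\).*/\1/p')
echo "RMB=$rmb"   # prints RMB=0: fixed (non-removable) media
```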
Fubo.
--
You received this message because you are subscribed to the Google Groups
"open-iscsi" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/open-iscsi?hl=en.