I attempted to reproduce this issue in a VM and wasn't successful. I
essentially copied the tgtbasedmpaths test, but formatted the backing
disk as a bcache cache device before setting up the targets.

1) create the VM
$ lxc launch ubuntu-daily:noble --vm n-vm
$ lxc shell n-vm
# apt install -y lsscsi multipath-tools open-iscsi tgt

2) set up the virtual multipath disk
```
targetname="iqn.2016-11.foo.com:target.iscsi"
cwd=$(pwd)
testdir="/mnt/tgtmpathtest"
localhost="127.0.0.1"
portal="${localhost}:3260"
maxpaths=4
backfn="backingfile"
expectwwid="60000000000000000e00000000010001"
testdisk="/dev/disk/by-id/wwn-0x${expectwwid}"

### Setup mpath devices

# Restart tgtd to make sure modules are all loaded
service tgt restart || echo "Failed to restart tgt" >&2

# prep SINGLE test file
truncate --size 100M ${backfn}

make-bcache -C ${backfn}
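# (optional sanity check, my addition, not in the original script) confirm
# the file now carries a bcache cache superblock; bcache-super-show ships
# with bcache-tools
bcache-super-show ${backfn} || echo "no bcache superblock found" >&2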

# create target
tgtadm --lld iscsi --op new --mode target --tid 1 -T "${targetname}"
# allow all initiators to bind to the target
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
# set backing file
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b "${cwd}/${backfn}"

# scan for targets (locally)
iscsiadm --mode discovery --type sendtargets --portal ${localhost}

# login
echo "login #1"
iscsiadm --mode node --targetname "${targetname}" --portal ${portal} --login
# duplicate the first session (its sid is always 1) to create extra paths
for i in $(seq 2 ${maxpaths})
do
    echo "extra login #${i}"
    iscsiadm --mode session -r 1 --op new
done

udevadm settle
sleep 5 # sleep a bit to allow device to be created.
```
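
To confirm that all four paths actually logged in, the active sessions
can be listed (my addition, not part of the original run); it should show
four sessions against the same target:

# iscsiadm --mode session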

And I can confirm the multipath device is detected as bcache:

# udevadm info /dev/dm-0 | grep ID_FS_TYPE
E: ID_FS_TYPE=bcache
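
The individual paths should report the same thing, since sdb through sde
all expose the same LUN (a quick extra check, my addition):

# udevadm info /dev/sdb | grep ID_FS_TYPE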

Everything looks as it should to me:
# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda        8:0    0   10G  0 disk  
├─sda1     8:1    0    9G  0 part  /
├─sda14    8:14   0    4M  0 part  
├─sda15    8:15   0  106M  0 part  /boot/efi
└─sda16  259:0    0  913M  0 part  /boot
sdb        8:16   0  100M  0 disk  
└─mpatha 252:0    0  100M  0 mpath 
sdc        8:32   0  100M  0 disk  
└─mpatha 252:0    0  100M  0 mpath 
sdd        8:48   0  100M  0 disk  
└─mpatha 252:0    0  100M  0 mpath 
sde        8:64   0  100M  0 disk  
└─mpatha 252:0    0  100M  0 mpath 
# multipath -ll
mpatha (360000000000000000e00000000010001) dm-0 IET,VIRTUAL-DISK
size=100M features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 10:0:0:1 sde 8:64 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 7:0:0:1  sdb 8:16 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 8:0:0:1  sdc 8:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 9:0:0:1  sdd 8:48 active ready running
# ls /dev/mapper
control  mpatha
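
One more thing worth checking (my suggestion, not from the original run):
if the bcache rules had actually registered any of the paths, a cache set
UUID would show up under /sys/fs/bcache (the directory only exists once
the bcache module is loaded):

# ls /sys/fs/bcache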


I'm not sure whether I'm setting this test up incorrectly.

In fact, if I run `udevadm test`, I do see the bcache udev rules fail:
# udevadm test /dev/mapper/mpatha
< cut here >
dm-0: /usr/lib/udev/rules.d/69-bcache.rules:21 RUN 'kmod load bcache'
dm-0: /usr/lib/udev/rules.d/69-bcache.rules:22 RUN 'bcache-register $tempnode'
dm-0: /usr/lib/udev/rules.d/69-bcache.rules:26 Importing properties from results of 'bcache-export-cached /dev/dm-0'
dm-0: Starting 'bcache-export-cached /dev/dm-0'
Successfully forked off '(spawn)' as PID 2197.
dm-0: Process 'bcache-export-cached /dev/dm-0' failed with exit code 1.
dm-0: /usr/lib/udev/rules.d/69-bcache.rules:26 Command "bcache-export-cached /dev/dm-0" returned 1 (error), ignoring
< cut here >
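
To dig into that failure, the helper can also be run by hand (a sketch;
I'm assuming it is installed under /usr/lib/udev, next to the rules file
referenced above):

# /usr/lib/udev/bcache-export-cached /dev/dm-0; echo "exit: $?"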
