Hi All,

We are facing a stale device-link issue during the iscsi-logout process if a 
parted command is run just before the iscsi logout. Here are the details:

As part of iscsi logout, the partitions and the disk are removed. The parted 
command, used here to list the partitions, opens the disk in read-write mode, 
which causes systemd-udevd to re-read the partition table. This triggers a 
partition rescan, which deletes and re-adds the partitions. So both the iscsi 
logout processing and parted (through systemd-udevd) are involved in the 
add/delete of partitions. In our case, the following sequence of operations 
happened (the iscsi device is /dev/sdb with partition sdb1):
        
        1. sdb1 was removed by parted
        2. kworker, as part of iscsi logout, couldn't remove sdb1 as it was 
already removed by parted
        3. sdb1 was re-added by parted
        4. sdb was then removed as part of iscsi logout (the last step of the 
device removal, after removing the partitions)

Since the symlink /sys/class/block/sdb1 points to 
/sys/devices/platform/hostx/sessionx/targetx:x:x:x/x:x:x:x/block/sdb/sdb1, 
and since sdb has already been removed, the symlink /sys/class/block/sdb1 is 
left orphaned and stale. This stale link is therefore the result of a race 
condition in the kernel between systemd-udevd and the iscsi-logout processing, 
as described above. We are able to reproduce this even with the latest 
upstream kernel.
        
We came across a patch from Ming Lei intended to "avoid to drop & re-add 
partitions if partitions aren't changed":
https://lore.kernel.org/linux-block/20210216084430.ga23...@lst.de/T/
        
This patch would resolve our stale-link problem, but it seems to be a 
workaround rather than an actual fix for the race. We are looking for help 
fixing this race in the kernel. Do you have any idea how to fix this race 
condition?
        
Following is the script we are using to reproduce the issue:
        
#!/bin/bash

dir=/sys/class/block
iter_count=0
while [ "$iter_count" -lt 10000000 ]; do
    iscsiadm -m node -T iqn.2016-01.com.example:target1 -p 100.100.242.162:3260 -l

    # Wait for the partition to show up after login
    poll_loop=0
    while [ ! -e /sys/class/block/sdb1 ]; do
        ls -i -l /sys/class/block/sd* > /dev/null
        poll_loop=$((poll_loop + 1))
        if [ "$poll_loop" -gt 1000000 ]; then
            ls -i -l /sys/class/block/sd* --color
            exit 1
        fi
    done

    ls -i -l /sys/class/block/sd* --color
    mount /dev/sdb1 /mnt
    dd of=/mnt/sdb1 if=/dev/sdb2 bs=1M count=100 &
    wait $!
    umount -l /mnt &
    wait $!

    parted /dev/sdb -s print

    # Run the target1 logout and the target2 login concurrently
    iscsiadm -m node -T iqn.2016-01.com.example:target1 -p 100.100.242.162:3260 -u &
    iscsiadm -m node -T iqn.2016-01.com.example:target2 -p 100.100.242.162:3260 -l &

    sleep 1
    ls -i -l /sys/class/block/sd* --color

    # Fail if any /sys/class/block entry is now a dangling symlink
    for i in "$dir"/*; do
        if [ ! -e "$i" ]; then
            echo "broken link: $i"
            exit 1
        fi
    done

    parted /dev/sdb -s print
    iscsiadm -m node -T iqn.2016-01.com.example:target2 -p 100.100.242.162:3260 -u
    iter_count=$((iter_count + 1))
done
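The dangling-link check in the script can also be carried around as a small
standalone helper for spotting the orphaned entry after the fact. A sketch
(the function name check_stale is ours; it is demonstrated on a scratch
directory so it can run anywhere, but on the target machine you would point
it at /sys/class/block):

```shell
#!/bin/bash
# check_stale DIR: print any symlink in DIR whose target is gone;
# return non-zero if at least one is found
check_stale() {
    local d=$1 entry rc=0
    for entry in "$d"/*; do
        if [ -h "$entry" ] && [ ! -e "$entry" ]; then
            echo "broken link: $entry"
            rc=1
        fi
    done
    return $rc
}

# Demonstration on a scratch directory (use /sys/class/block for real)
tmp=$(mktemp -d)
mkdir "$tmp/target"
ln -s "$tmp/target" "$tmp/good"
ln -s "$tmp/gone" "$tmp/bad"    # dangling on purpose
out=$(check_stale "$tmp")
echo "$out"
rm -rf "$tmp"
```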


Regards,
Gulam Mohamed.
