We've also seen what appears to be the same issue on RHEL8 with
targetcli and the kernel target implementation -
https://bugzilla.redhat.com/show_bug.cgi?id=2136544.

** Bug watch added: Red Hat Bugzilla #2136544
   https://bugzilla.redhat.com/show_bug.cgi?id=2136544

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1993312

Title:
  iSCSI initiator cannot recover after target restart with 100s of
  sessions

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We have 2 physical servers on Ubuntu 22.04 with all the latest
  updates, running on kernel 5.15.0-50-generic - “s23” (iSCSI target),
  and “s24” (iSCSI initiator).

  On s23 we set up a large number of iSCSI targets (1024) using tgtd:

  #!/bin/bash
  # Create $TARGETS sparse 512M backing files, generate one tgt target
  # per file, then restart tgtd so it picks up the generated config.
  [[ -z $TARGETS ]] && TARGETS=1024
  [[ -z $BASEDIR ]] && BASEDIR="/mnt"
  [[ -z $BASENAME ]] && BASENAME="iqn.2022-10.example.com"
  [[ -z $CFGFILE ]] && CFGFILE="/etc/tgt/conf.d/reproduce.conf"

  apt-get update && apt-get install -y tgt
  for tgt in $(seq "$TARGETS"); do
          name="tgt-${tgt}"
          disk="${BASEDIR}/${name}.disk"
          truncate -s 512M "$disk"
          cat <<EOF >> "$CFGFILE"
  <target ${BASENAME}:${name}>
       backing-store ${disk}
  </target>
  EOF
  done
  systemctl restart tgt.service

  This assumes the file system on /mnt is large enough to hold
  1024 x 512 MB files (although truncate creates sparse files, so very
  little space is actually allocated up front); the size of the "drives"
  really shouldn't matter.
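
  This can be sanity-checked with, e.g. (paths assume the defaults from
  the script above):
  du -sh --apparent-size /mnt/tgt-1.disk   # apparent size, ~512M
  du -sh /mnt/tgt-1.disk                   # allocated blocks, close to 0
  df -h /mnt                               # headroom on the backing file system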

  Then, on s24, we discover and log into all targets (which takes a while):
  iscsiadm --mode discovery --op update --type sendtargets --portal 10.1.7.23:3260 # 10.1.7.23 = s23
  iscsiadm -m node --login all
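
  Before restarting the target it is worth confirming that all 1024
  sessions are actually established; something along these lines should
  report 1024:
  iscsiadm -m session | wc -l
  ls /sys/class/iscsi_session | wc -l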

  After the login procedure is done, the target server on s23 is restarted:
  root@s23:~# time systemctl restart tgt.service
  real    0m58.765s
  user    0m0.000s
  sys     0m0.009s

  This takes about a minute on average. What usually happens on the
  initiator is that the vast majority (or sometimes all) of the block
  devices go into a “blocked” state (as per
  ‘/sys/block/sd*/device/state’), and after a while reach
  “transport-offline”.
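
  The per-device states can be watched while the target restarts with
  something like the following (assuming all sd* devices on s24 are
  iSCSI disks):
  watch -n 5 'cat /sys/block/sd*/device/state | sort | uniq -c'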

  In the really bad cases there is no iSCSI traffic between the servers
  at all. Attempts to log the sessions out, or to log them back in, do
  nothing, even though iscsiadm prints “Logging out of session …”
  messages. The only way to recover the initiator after this is to
  reboot it.
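
  (For clarity, the kind of logout/login commands meant here, roughly:)
  iscsiadm -m node --logoutall=all
  iscsiadm -m node --loginall=all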

  Also, running the “iscsiadm -m session -P3” command hangs after
  printing the following output:
  root@s24:~# iscsiadm -m session -P3
  iSCSI Transport Class version 2.0-870
  version 2.1.5
  Target: iqn.2022-10.example.com:tgt-915 (non-flash)
          Current Portal: 10.1.7.23:3260,1
          Persistent Portal: 10.1.7.23:3260,1
                  **********
                  Interface:
                  **********
                  Iface Name: default
                  Iface Transport: tcp
                  Iface Initiatorname: iqn.2004-10.com.ubuntu:01:84de25ddfc37
                  Iface IPaddress: 10.1.7.24
                  Iface HWaddress: default
                  Iface Netdev: default
                  SID: 1026

  When running iscsiadm with strace, it seems to get stuck polling for
  a response:
  socket(AF_UNIX, SOCK_STREAM, 0)         = 3
  connect(3, {sa_family=AF_UNIX, sun_path=@"ISCSIADM_ABSTRACT_NAMESPACE"}, 30) = 0
  write(3, "\r\0\0\0\0\0\0\0\2\4\0[...]", 16104) = 16104
  poll([{fd=3, events=POLLIN}], 1, 1000)  = 0 (Timeout)
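
  To see what iscsid itself is doing at that point (the poll timeout
  above suggests the daemon is no longer answering on its management
  socket), something like this can be run as root:
  for pid in $(pidof iscsid); do
          grep ^State "/proc/$pid/status"
          cat "/proc/$pid/stack" 2>/dev/null   # in-kernel stack of the task
  done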

  And here are the iscsid logs from around the time of the restart:
  Oct 18 12:41:45 s24 iscsid[29702]: Kernel reported iSCSI connection 1318:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
  Oct 18 12:41:45 s24 iscsid[29702]: Kernel reported iSCSI connection 1929:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
  Oct 18 12:42:37 s24 iscsid[29702]: connection1177:0 is operational after recovery (4 attempts)
  Oct 18 12:42:37 s24 iscsid[29702]: connection2003:0 is operational after recovery (4 attempts)
  Oct 18 12:42:37 s24 iscsid[29702]: connection2007:0 is operational after recovery (4 attempts)
  Oct 18 12:42:37 s24 iscsid[29702]: connection1364:0 is operational after recovery (4 attempts)
  Oct 18 12:42:37 s24 iscsid[29702]: connection1117:0 is operational after recovery (4 attempts)
  Oct 18 12:42:37 s24 iscsid[29702]: connection1029:0 is operational after recovery (4 attempts)
  Oct 18 12:42:37 s24 iscsid[29702]: connection1732:0 is operational after recovery (4 attempts)
  Oct 18 12:42:37 s24 iscsid[29702]: connection1601:0 is operational after recovery (4 attempts)
  Oct 18 12:42:37 s24 iscsid[29702]: connection1115:0 is operational after recovery (4 attempts)
  Oct 18 12:42:37 s24 iscsid[29702]: can not read from NL socket, error -1

  I believe the “can not read from NL socket” error is a good indication
  that the issue has occurred.
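
  A quick way to check whether a given restart has triggered the problem
  is therefore to look for that message in the journal, e.g.:
  journalctl -t iscsid | grep "can not read from NL socket"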

  This is quite consistently reproducible using the default settings in
  iscsid.conf. Increasing the ‘node.session.timeo.replacement_timeout’
  parameter to 240-300 s reduces the chance that it will happen, but does
  not eliminate it: the problem can still appear after 2-3 restarts of
  tgt.service, and such a long timeout is not really acceptable in more
  advanced setups such as multipath. We have seen similar issues there
  as well, but the scenario described here should be the easiest to
  reproduce.
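
  (For reference, the timeout can be changed globally in
  /etc/iscsi/iscsid.conf before discovery, or applied to the
  already-discovered node records with something like the following;
  either way it only takes effect on the next login:)
  iscsiadm -m node -o update \
          -n node.session.timeo.replacement_timeout -v 240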

  The version of open-iscsi is 2.1.5-1ubuntu1 and tgt is
  1:1.0.80-1ubuntu2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1993312/+subscriptions

