Added an SRU template to the bug description.

** Description changed:
+ [Impact]
+
+  * The number of available AIO contexts is severely limited
+    on systems with a large number of possible CPUs
+    (e.g., IBM POWER8 processors w/ 20ish cores * 8 threads/core,
+    and other multithreaded server-class processors).
+
+  * This prevents applications such as the multipath "directio" checker
+    from providing all of the available devices to the system.
+
+  * Other applications which depend on AIO can be affected/limited.
+
+  * The patch fixes how aio increments the number of active contexts
+    (seen in /proc/sys/fs/aio-nr) and checks that against the global
+    limit (seen in /proc/sys/fs/aio-max-nr).
+
+ [Test Case]
+
+  * A synthetic test-case is attached (io_setup_v2.c) and demonstrated
+    (original/patched kernels) in comment #4.
+
+  * Trying to perform multipath discovery in debug/verbose mode
+    (i.e., the "multipath -v3" command) with a sufficient number of
+    individual paths using the "directio" path checker should
+    demonstrate the problem/solution as well (i.e., the presence or
+    absence of "io_setup failed" messages).
+
+ [Regression Potential]
+
+  * The fix is trivial and has been tested by several users; it even
+    led to the introduction of a new test-case in "libaio"
+    (although that alone can never guarantee the absence of errors).
+
+  * Applications which use aio with a small "nr_events" value as the
+    argument to "io_setup()" now have access to a much larger number of
+    aio contexts; but hopefully those apps are already requesting only
+    what they need, not trying to get more and more.
+
+  * Applications which relied on the _incorrect_ behavior of
+    '/proc/sys/fs/aio-nr' possibly being greater than
+    '/proc/sys/fs/aio-max-nr' might have problems, but those apps
+    should be fixed.

Problem Description
=================================

I am facing this issue with Texan Flash storage 840 disks, which come
from Coho and Sailfish adapters: the Coho adapter with 840 storage has
3G disks, and the Sailfish adapter with 840 has 12G disks.

I am able to see those disks in the lsblk output but not in the
"multipath -ll" command output.

0004:01:00.0  Coho: Saturn-X        U78C9.001.WZS0060-P1-C6  0x10000090fa2a51f8  host10  Online
0004:01:00.1  Coho: Saturn-X        U78C9.001.WZS0060-P1-C6  0x10000090fa2a51f9  host11  Online
0005:09:00.0  Sailfish: QLogic 8GB  U78C9.001.WZS0060-P1-C9  0x21000024ff787778  host2   Online
0005:09:00.1  Sailfish: QLogic 8GB  U78C9.001.WZS0060-P1-C9  0x21000024ff787779  host4   Online

root@luckyv1:/dev/disk# multipath -ll | grep "size=3.0G" -B 1
root@luckyv1:/dev/disk# multipath -ll | grep "size=12G" -B 1
root@luckyv1:/dev/disk#
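
For reference, the following is a minimal sketch of an io_setup_v2.c-style
test program; it is not the actual attachment referenced in the [Test Case]
section above, which may differ. It repeatedly calls io_setup() with a given
nr_events until the kernel refuses, prints the libaio return code (-errno)
and the number of contexts created, and then waits so /proc/sys/fs/aio-nr
can be inspected from another shell:

/*
 * Illustrative sketch of an io_setup_v2.c-style test program
 * (the actual attachment in this bug may differ).
 *
 * Build: gcc -o io_setup_v2 io_setup_v2.c -laio
 * Usage: ./io_setup_v2 <nr_events> <nr_requests>
 */
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	int nr_events, nr_requests, rc = 0, i;
	io_context_t *ctx;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <nr_events> <nr_requests>\n", argv[0]);
		return 1;
	}

	nr_events = atoi(argv[1]);
	nr_requests = atoi(argv[2]);
	printf("nr_events: %d, nr_requests: %d\n", nr_events, nr_requests);

	/* calloc() zero-initializes the contexts, as io_setup() requires. */
	ctx = calloc(nr_requests, sizeof(io_context_t));
	if (!ctx)
		return 1;

	/* Each io_setup() call charges at least nr_events against
	 * /proc/sys/fs/aio-nr; stop at the first failure. */
	for (i = 0; i < nr_requests; i++) {
		rc = io_setup(nr_events, &ctx[i]);   /* libaio returns -errno */
		if (rc != 0)
			break;
	}
	printf("rc = %d, i = %d\n", rc, i);

	/* Keep the contexts alive so aio-nr can be observed. */
	pause();
	return 0;
}

With "./io_setup_v2 1 65536", the original kernel stops after only a few
hundred contexts with rc = -11 (EAGAIN), while the patched kernel allows
close to aio-max-nr contexts; see the verification output in comment #45
near the end of this bug.
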
== Comment: #3 - Luciano Chavez <cha...@us.ibm.com> - 2016-09-20 20:22:20 ==

I edited /etc/multipath.conf and added verbosity 6 to crank up the output,
ran multipath -ll, and saved the output off to a text file (attached). All
the paths using the directio checker failed, and those using the tur
checker seem to work.

Sep 20 20:07:36 | loading //lib/multipath/libcheckdirectio.so checker
Sep 20 20:07:36 | loading //lib/multipath/libprioconst.so prioritizer
Sep 20 20:07:36 | Discover device /sys/devices/pci0000:00/0000:00:00.0/0000:01:00.0/host3/rport-3:0-2/target3:0:0/3:0:0:0/block/sdai
Sep 20 20:07:36 | sdai: udev property ID_WWN whitelisted
Sep 20 20:07:36 | sdai: not found in pathvec
Sep 20 20:07:36 | sdai: mask = 0x25
Sep 20 20:07:36 | sdai: dev_t = 66:32
Sep 20 20:07:36 | open '/sys/devices/pci0000:00/0000:00:00.0/0000:01:00.0/host3/rport-3:0-2/target3:0:0/3:0:0:0/block/sdai/size'
Sep 20 20:07:36 | sdai: size = 20971520
Sep 20 20:07:36 | sdai: vendor = IBM
Sep 20 20:07:36 | sdai: product = FlashSystem-9840
Sep 20 20:07:36 | sdai: rev = 1442
Sep 20 20:07:36 | sdai: h:b:t:l = 3:0:0:0
Sep 20 20:07:36 | SCSI target 3:0:0 -> FC rport 3:0-2
Sep 20 20:07:36 | sdai: tgt_node_name = 0x500507605e839800
Sep 20 20:07:36 | open '/sys/devices/pci0000:00/0000:00:00.0/0000:01:00.0/host3/rport-3:0-2/target3:0:0/3:0:0:0/state'
Sep 20 20:07:36 | sdai: path state = running
Sep 20 20:07:36 | sdai: get_state
Sep 20 20:07:36 | sdai: path_checker = directio (internal default)
Sep 20 20:07:36 | sdai: checker timeout = 30 ms (internal default)
Sep 20 20:07:36 | io_setup failed
Sep 20 20:07:36 | sdai: checker init failed

== Comment: #7 - Mauricio Faria De Oliveira <mauri...@br.ibm.com> - 2016-09-27 18:32:57 ==

The function is failing at the io_setup() system call.

@ checkers/directio.c

int libcheck_init (struct checker * c)
{
	unsigned long pgsize = getpagesize();
	struct directio_context * ct;
	long flags;

	ct = malloc(sizeof(struct directio_context));
	if (!ct)
		return 1;
	memset(ct, 0, sizeof(struct directio_context));

	if (io_setup(1, &ct->ioctx) != 0) {
		condlog(1, "io_setup failed");
		free(ct);
		return 1;
	}
	<...>

The syscall is failing w/ EAGAIN:

# grep ^io_setup multipath_-v2_-d.strace
io_setup(1, 0x100163c9130) = -1 EAGAIN (Resource temporarily unavailable)
io_setup(1, 0x10015bae2c0) = -1 EAGAIN (Resource temporarily unavailable)
io_setup(1, 0x100164d65a0) = -1 EAGAIN (Resource temporarily unavailable)
io_setup(1, 0x10016429f20) = -1 EAGAIN (Resource temporarily unavailable)
io_setup(1, 0x100163535c0) = -1 EAGAIN (Resource temporarily unavailable)
io_setup(1, 0x10016368510) = -1 EAGAIN (Resource temporarily unavailable)
<...>

According to the manpage (man 2 io_setup):

NAME
       io_setup - create an asynchronous I/O context

DESCRIPTION
       The io_setup() system call creates an asynchronous I/O context
       suitable for concurrently processing nr_events operations.
<...>
ERRORS
       EAGAIN The specified nr_events exceeds the user's limit of available
              events, as defined in /proc/sys/fs/aio-max-nr.

On luckyv1:

root@luckyv1:~/mauricfo/bz146849/sep27# cat /proc/sys/fs/aio-max-nr
65536
root@luckyv1:~/mauricfo/bz146849/sep27# cat /proc/sys/fs/aio-nr
130560

According to linux's Documentation/sysctl/fs.txt [1]:

aio-nr & aio-max-nr:

aio-nr is the running total of the number of events specified on the
io_setup system call for all currently active aio contexts. If aio-nr
reaches aio-max-nr then io_setup will fail with EAGAIN. Note that
raising aio-max-nr does not result in the pre-allocation or re-sizing
of any kernel data structures.

Interestingly, aio-nr is greater than aio-max-nr. Hm.

Increased aio-max-nr to 262144, and could get some more maps created.
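
As a side note (this is not one of the bug's attachments), the failing call
can be reproduced outside multipath with a small libaio probe. The sketch
below performs the same io_setup(1, ...) call as libcheck_init() above and
reports the return code together with the change in /proc/sys/fs/aio-nr,
making the per-context accounting visible:

/*
 * Illustrative sketch: reproduce the directio checker's io_setup(1, ...)
 * call and show how much a single request adds to /proc/sys/fs/aio-nr.
 *
 * Build: gcc -o aio_probe aio_probe.c -laio
 */
#include <libaio.h>
#include <stdio.h>

static long read_sysctl(const char *path)
{
	long val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	io_context_t ctx = 0;
	long before = read_sysctl("/proc/sys/fs/aio-nr");
	int rc = io_setup(1, &ctx);          /* same call as libcheck_init() */
	long after = read_sysctl("/proc/sys/fs/aio-nr");

	/* libaio returns -errno, so -11 here is the EAGAIN seen in the strace */
	printf("io_setup(1) rc = %d, aio-nr: %ld -> %ld\n", rc, before, after);

	if (rc == 0)
		io_destroy(ctx);
	return 0;
}

On a system already in the state shown above (aio-nr past aio-max-nr) it
should print rc = -11, matching the EAGAIN in the strace output; on an
otherwise idle system it shows aio-nr growing by far more than 1 for a
single context, which is quantified in the next comment.
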
- == Comment: #8 - Mauricio Faria De Oliveira <mauri...@br.ibm.com> - 2016-09-27 18:56:08 ==

This attached test-case demonstrates that for each io_setup() request of
1 nr_event, actually 1280 events seem to be allocated.

root@luckyv1:~/mauricfo/bz146849/sep27# gcc -o io_setup io_setup.c -laio
root@luckyv1:~/mauricfo/bz146849/sep27# cat /proc/sys/fs/aio-nr
0
root@luckyv1:~/mauricfo/bz146849/sep27# ./io_setup &
[1] 12352
io_setup rc = 0
sleeping 10 seconds...
root@luckyv1:~/mauricfo/bz146849/sep27# cat /proc/sys/fs/aio-nr
1280
<...>
io_destroy rc = 0
[1]+  Done                    ./io_setup
root@luckyv1:~/mauricfo/bz146849/sep27# cat /proc/sys/fs/aio-nr
0

== Comment: #45 - Mauricio Faria De Oliveira <mauri...@br.ibm.com> - 2017-09-19 18:32:10 ==

Verification of this commit with the linux-hwe-edge kernel in -proposed,
using the attached test-case "io_setup_v2.c":

commit 2a8a98673c13cb2a61a6476153acf8344adfa992
Author: Mauricio Faria de Oliveira <mauri...@linux.vnet.ibm.com>
Date:   Wed Jul 5 10:53:16 2017 -0300

    fs: aio: fix the increment of aio-nr and counting against aio-max-nr

Test-case (attached):

$ sudo apt-get install gcc libaio-dev
$ gcc -o io_setup_v2 io_setup_v2.c -laio

Original kernel:
- Only 409 io_contexts could be allocated, but that took
  130880 [divided by 2, per the bug] = 65440 slots out of 65535.

$ uname -rv
4.11.0-14-generic #20~16.04.1-Ubuntu SMP Wed Aug 9 09:06:18 UTC 2017

$ ./io_setup_v2 1 65536
nr_events: 1, nr_requests: 65536
rc = -11, i = 409
^Z
[1]+  Stopped                 ./io_setup_v2 1 65536

$ cat /proc/sys/fs/aio-nr
130880
$ cat /proc/sys/fs/aio-max-nr
65536

$ kill %%

Patched kernel:
- Now 65515 io_contexts could be allocated out of 65535 (much better),
  and reported correctly, without the division by 2.

$ uname -rv
4.11.0-140-generic #20~16.04.1+bz146489 SMP Tue Sep 19 17:46:15 CDT 2017

$ ./io_setup_v2 1 65536
nr_events: 1, nr_requests: 65536
rc = -12, i = 65515
^Z
[1]+  Stopped                 ./io_setup_v2 1 65536

$ cat /proc/sys/fs/aio-nr
65515

$ kill %%

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1718397

Title:
  multipath -ll is not showing the disks which are actually multipath

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-power-systems/+bug/1718397/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs