[Kernel-packages] [Bug 2029934] Re: arm64 AWS host hangs during modprobe nvidia on lunar and mantic

2024-04-02 Thread Abhishek Chauhan
Hi all,
This should be fixed in the latest driver, 550.67:
https://www.nvidia.com/Download/driverResults.aspx/223429/en-us/.
Please help verify whether this is resolved on your systems. Thanks!
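
A minimal sketch of one way to verify, assuming the updated driver is
already installed (these commands are mine, not part of the original
comment): load the module and confirm it returns promptly and reports
the new version.

$ sudo modprobe nvidia
$ cat /proc/driver/nvidia/version   # should report 550.67
$ nvidia-smi --query-gpu=driver_version --format=csv,noheader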

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-hwe-6.5 in Ubuntu.
https://bugs.launchpad.net/bugs/2029934

Title:
  arm64 AWS host hangs during modprobe nvidia on lunar and mantic

Status in linux-aws package in Ubuntu:
  Incomplete
Status in linux-hwe-6.5 package in Ubuntu:
  New
Status in nvidia-graphics-drivers-525 package in Ubuntu:
  Incomplete
Status in nvidia-graphics-drivers-525-server package in Ubuntu:
  Incomplete
Status in nvidia-graphics-drivers-535 package in Ubuntu:
  Confirmed
Status in nvidia-graphics-drivers-535-server package in Ubuntu:
  Confirmed

Bug description:
  Loading the nvidia driver DKMS modules with "modprobe nvidia" will
  result in the host hanging and becoming completely unusable. This was
  reproduced using both the linux-generic and linux-aws kernels on lunar
  and mantic on an AWS g5g.xlarge instance.

  To reproduce using the generic kernel:
  # Deploy an arm64 host with an NVIDIA GPU, such as an AWS g5g.xlarge.

  # Install the linux generic kernel from lunar-updates:
  $ sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -o DPkg::Options::=--force-confold linux-generic

  # Boot into the linux-generic kernel (this can be accomplished by removing
  # the existing kernel, in this case the linux-aws 6.2.0-1008-aws kernel):
  $ sudo DEBIAN_FRONTEND=noninteractive apt-get purge -y -o DPkg::Options::=--force-confold linux-aws linux-aws-headers-6.2.0-1008 linux-headers-6.2.0-1008-aws linux-headers-aws linux-image-6.2.0-1008-aws linux-image-aws linux-modules-6.2.0-1008-aws
  $ reboot

  # Install the Nvidia 535-server driver DKMS package:
  $ sudo DEBIAN_FRONTEND=noninteractive apt-get install -y nvidia-driver-535-server

  # Load the driver module
  $ sudo modprobe nvidia

  # At this point the system will hang and never return.
  # A reboot instead of a modprobe will result in a system that never boots
  # all the way. I was able to recover the console logs from such a system
  # and found the following (the full captured log is attached; a sketch of
  # one way to retrieve such console output follows the log below):

  [    1.964942] nvidia: loading out-of-tree module taints kernel.
  [    1.965475] nvidia: module license 'NVIDIA' taints kernel.
  [    1.965905] Disabling lock debugging due to kernel taint
  [    1.980905] nvidia: module verification failed: signature and/or required key missing - tainting kernel
  [    2.012067] nvidia-nvlink: Nvlink Core is being initialized, major device number 510
  [    2.012715] 
  [   62.025143] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
  [   62.025807] rcu:   3-...0: (14 ticks this GP) idle=c04c/1/0x4000 softirq=653/654 fqs=3301
  [   62.026516] (detected by 0, t=15003 jiffies, g=-699, q=216 ncpus=4)
  [   62.027018] Task dump for CPU 3:
  [   62.027290] task:systemd-udevd   state:R  running task  stack:0  pid:164  ppid:144  flags:0x000e
  [   62.028066] Call trace:
  [   62.028273]  __switch_to+0xbc/0x100
  [   62.028567]  0x228
  Timed out for waiting the udev queue being empty.
  Timed out for waiting the udev queue being empty.
  [  242.045143] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
  [  242.045655] rcu:   3-...0: (14 ticks this GP) idle=c04c/1/0x4000 softirq=653/654 fqs=12303
  [  242.046373] (detected by 1, t=60008 jiffies, g=-699, q=937 ncpus=4)
  [  242.046874] Task dump for CPU 3:
  [  242.047146] task:systemd-udevd   state:R  running task  stack:0  pid:164  ppid:144  flags:0x000f
  [  242.047922] Call trace:
  [  242.048128]  __switch_to+0xbc/0x100
  [  242.048417]  0x228
  Timed out for waiting the udev queue being empty.
  Begin: Loading essential drivers ... [  384.001142] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [modprobe:215]
  [  384.001738] Modules linked in: nvidia(POE+) crct10dif_ce video polyval_ce polyval_generic drm_kms_helper ghash_ce syscopyarea sm4 sysfillrect sha2_ce sysimgblt sha256_arm64 sha1_ce drm nvme nvme_core ena nvme_common aes_neon_bs aes_neon_blk aes_ce_blk aes_ce_cipher
  [  384.003513] CPU: 2 PID: 215 Comm: modprobe Tainted: P   OE  6.2.0-26-generic #26-Ubuntu
  [  384.004210] Hardware name: Amazon EC2 g5g.xlarge/, BIOS 1.0 11/1/2018
  [  384.004715] pstate: 8045 (Nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
  [  384.005259] pc : smp_call_function_many_cond+0x1b4/0x4b4
  [  384.005683] lr : smp_call_function_many_cond+0x1d0/0x4b4
  [  384.006108] sp : 889a3a70
  [  384.006381] x29: 889a3a70 x28: 0003 x27: 00056d1fafa0
  [  384.006954] x26: 00056d1d76c8 x25: c87cf18bdd10 x24: 0003
  [  384.007527] x23: 0001 x22: 00056d1d76c8 x21: c87cf18c2690
  [  
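
  Not part of the original report: one way such console output can be
  retrieved from a hung or unbootable EC2 instance is the AWS CLI's
  get-console-output command (assuming configured credentials; the
  instance ID below is a placeholder):
  $ aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text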

[Kernel-packages] [Bug 2029934] Re: arm64 AWS host hangs during modprobe nvidia on lunar and mantic

2024-04-09 Thread Abhishek Chauhan
The fix is also available in driver 535.171.04, which can be downloaded here:
https://www.nvidia.com/Download/driverResults.aspx/223761/en-us/
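
A minimal sketch, assuming the fixed 535-server packages are available to
the affected host (this is not confirmed in the comment above), of how an
Ubuntu system could pick up the update and re-test; the commands are mine,
not part of the original comment:

$ sudo apt-get update
$ sudo apt-get install --only-upgrade -y nvidia-driver-535-server
$ sudo modprobe nvidia
$ cat /proc/driver/nvidia/version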
