This issue only affects 4.15.0-2045 and has passed in recent cycles. Marking
this as Fix Released.

** Changed in: ubuntu-kernel-tests
       Status: New => Fix Released

-- 
You received this bug notification because you are a member of Canonical
Platform QA Team, which is subscribed to ubuntu-kernel-tests.
https://bugs.launchpad.net/bugs/1991519

Title:
  api_test.py in ubuntu_lxc cause Call trace on B-gcp-fips
  n1-standard-64

Status in ubuntu-kernel-tests:
  Fix Released

Bug description:
  Issue found on B-gcp-fips 4.15.0-2045.50 with instance n1-standard-64

  This issue can be found on other instances as well, but it is only 100%
  reproducible on this instance type (6 out of 6 attempts).

  The test itself passes, but it leaves a call trace in dmesg.

  $ sudo python3 /usr/share/doc/python3-lxc/examples/api_test.py
  Getting instance for '48241ae0-4302-11ed-9fcd-42010af0000f'
  Creating rootfs using 'download', arch=amd64
  Using image from local cache
  Unpacking the rootfs

  ---
  You just created an Ubuntu xenial amd64 (20220930_07:42) container.

  To enable SSH, run: apt install openssh-server
  No default root or user password are set by LXC.
  Testing the configuration
  Testing the networking
  Starting the container
  Getting the interface names
  Getting the IP addresses
  eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
          inet 10.0.3.12  netmask 255.255.255.0  broadcast 10.0.3.255
          inet6 fe80::70cb:69ff:fe49:f793  prefixlen 64  scopeid 0x20<link>
          ether 72:cb:69:49:f7:93  txqueuelen 1000  (Ethernet)
          RX packets 19  bytes 2260 (2.2 KB)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 9  bytes 1551 (1.5 KB)
          TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  Testing cgroup API
  Freezing the container
  Unfreezing the container
  Shutting down the container
  Snapshotting the container
  Cloning the container as '48241ae1-4302-11ed-9fcd-42010af0000f'
  Renaming the clone to '48241ae2-4302-11ed-9fcd-42010af0000f'
  Destroying the container
  $ echo $?
  0
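
  Since the test exits 0 even when the kernel logs an OOM, catching this
  regression needs a dmesg check after the run. A minimal sketch; the grep
  pattern is an assumption based on the messages in this report, not part of
  the ubuntu_lxc test suite:

  ```shell
  # Sketch of a post-run check; the pattern below is an assumption based on
  # the dmesg output seen in this bug, not part of ubuntu_lxc itself.
  sudo python3 /usr/share/doc/python3-lxc/examples/api_test.py
  echo "exit status: $?"   # reports 0 even when the kernel logged an OOM
  # Fail the run if dmesg picked up a call trace or an OOM kill:
  if sudo dmesg | grep -qE 'Call Trace|invoked oom-killer'; then
      echo "call trace or oom-killer found in dmesg" >&2
      exit 1
  fi
  ```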

  dmesg:
  [  169.784125] lxcbr0: port 1(veth74VNXT) entered blocking state
  [  169.784129] lxcbr0: port 1(veth74VNXT) entered disabled state
  [  169.784280] device veth74VNXT entered promiscuous mode
  [  169.784363] IPv6: ADDRCONF(NETDEV_UP): veth74VNXT: link is not ready
  [  169.794281] cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation
  [  169.844666] eth0: renamed from vethKQTAXN
  [  170.006559] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
  [  170.006578] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
  [  170.006615] IPv6: ADDRCONF(NETDEV_CHANGE): veth74VNXT: link becomes ready
  [  170.006664] lxcbr0: port 1(veth74VNXT) entered blocking state
  [  170.006666] lxcbr0: port 1(veth74VNXT) entered forwarding state
  [  170.006712] IPv6: ADDRCONF(NETDEV_CHANGE): lxcbr0: link becomes ready
  [  176.060702] lxcbr0: port 1(veth74VNXT) entered disabled state
  [  176.074994] systemctl invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=0
  [  176.074996] systemctl cpuset=48241ae0-4302-11ed-9fcd-42010af0000f mems_allowed=0
  [  176.075000] CPU: 7 PID: 3415 Comm: systemctl Not tainted 4.15.0-2045-gcp-fips #50-Ubuntu
  [  176.075001] Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
  [  176.075002] Call Trace:
  [  176.075013]  dump_stack+0x98/0xd2
  [  176.075015]  dump_header+0x71/0x282
  [  176.075018]  oom_kill_process+0x21f/0x420
  [  176.075020]  out_of_memory+0x116/0x4e0
  [  176.075024]  mem_cgroup_out_of_memory+0xbb/0xd0
  [  176.075026]  mem_cgroup_oom_synchronize+0x2e8/0x320
  [  176.075028]  ? mem_cgroup_css_reset+0xe0/0xe0
  [  176.075029]  pagefault_out_of_memory+0x13/0x60
  [  176.075033]  mm_fault_error+0x90/0x180
  [  176.075034]  __do_page_fault+0x479/0x4c0
  [  176.075036]  do_page_fault+0x2e/0xe0
  [  176.075040]  ? page_fault+0x2f/0x50
  [  176.075041]  page_fault+0x45/0x50
  [  176.075043] RIP: 0033:0x7f0add113c04
  [  176.075044] RSP: 002b:00007ffd7e5b4be8 EFLAGS: 00010202
  [  176.075045] RAX: 00007f0adbd6d1f0 RBX: 0000000000000003 RCX: 00007f0adbd6d230
  [  176.075046] RDX: 0000000000000048 RSI: 0000000000000000 RDI: 00007f0adbd6d1f0
  [  176.075047] RBP: 00007ffd7e5b4e90 R08: 00007f0adbd6d238 R09: 0000000000012000
  [  176.075048] R10: 00007ffd7e5b4c20 R11: 00007f0adbd6d238 R12: 00007f0add314aa8
  [  176.075048] R13: 00007ffd7e5b4f78 R14: 0000000000000002 R15: 0000000000000801
  [  176.075050] Task in /lxc/48241ae0-4302-11ed-9fcd-42010af0000f killed as a result of limit of /lxc/48241ae0-4302-11ed-9fcd-42010af0000f
  [  176.075054] memory: usage 33376kB, limit 33376kB, failcnt 108
  [  176.075054] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
  [  176.075055] kmem: usage 20512kB, limit 9007199254740988kB, failcnt 0
  [  176.075055] Memory cgroup stats for /lxc/48241ae0-4302-11ed-9fcd-42010af0000f: cache:8184KB rss:4064KB rss_huge:0KB shmem:8184KB mapped_file:3828KB dirty:0KB writeback:0KB inactive_anon:8280KB active_anon:4300KB inactive_file:16KB active_file:44KB unevictable:0KB
  [  176.075061] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
  [  176.075163] [ 3122]     0  3122     9307     1288   118784        0             0 systemd
  [  176.075165] [ 3190]     0  3190     8818     1623   110592        0             0 systemd-journal
  [  176.075166] [ 3260]     0  3260     1126      437    53248        0             0 ondemand
  [  176.075168] [ 3270]     0  3270     1822      162    61440        0             0 sleep
  [  176.075169] [ 3403]     0  3403     6249      336    94208        0             0 umount
  [  176.075170] [ 3404]     0  3404     6249      319    94208        0             0 umount
  [  176.075172] [ 3405]     0  3405     6249      297    94208        0             0 umount
  [  176.075173] [ 3406]     0  3406     6249      298    94208        0             0 umount
  [  176.075175] [ 3407]     0  3407     6249      288    90112        0             0 umount
  [  176.075176] [ 3408]     0  3408     6249      336    94208        0             0 umount
  [  176.075178] [ 3409]     0  3409     6249      314    90112        0             0 umount
  [  176.075180] [ 3411]     0  3411     6249      314    90112        0             0 umount
  [  176.075181] [ 3412]     0  3412     6249      297    98304        0             0 umount
  [  176.075182] [ 3413]     0  3413     6249      298    98304        0             0 umount
  [  176.075183] [ 3414]     0  3414     6249      336    90112        0             0 umount
  [  176.075184] [ 3415]     0  3415     5781      220    73728        0             0 systemctl
  [  176.075186] [ 3416]     0  3416     9307      307   106496        0             0 (umount)
  [  176.075187] Memory cgroup out of memory: Kill process 3190 (systemd-journal) score 191 or sacrifice child
  [  176.076186] SLUB: Unable to allocate memory on node -1, gfp=0x14000c0(GFP_KERNEL)
  [  176.085069]   cache: proc_inode_cache(1511:48241ae0-4302-11ed-9fcd-42010af0000f), object size: 680, buffer size: 688, default order: 3, min order: 0
  [  176.085070]   node 0: slabs: 43, objs: 2021, free: 0
  [  176.085113] Killed process 3190 (systemd-journal) total-vm:35272kB, anon-rss:280kB, file-rss:2332kB, shmem-rss:3880kB
  [  176.711070] lxcbr0: port 1(veth74VNXT) entered disabled state
  [  176.714952] device veth74VNXT left promiscuous mode
  [  176.714962] lxcbr0: port 1(veth74VNXT) entered disabled state
  [  189.145063] lxcbr0: port 1(veth6V9JPE) entered blocking state
  [  189.145066] lxcbr0: port 1(veth6V9JPE) entered disabled state
  [  189.145131] device veth6V9JPE entered promiscuous mode
  [  189.145209] IPv6: ADDRCONF(NETDEV_UP): veth6V9JPE: link is not ready
  [  189.195296] eth0: renamed from vethSAON3V
  [  189.826113] lxcbr0: port 1(veth6V9JPE) entered disabled state
  [  189.830664] device veth6V9JPE left promiscuous mode
  [  189.830672] lxcbr0: port 1(veth6V9JPE) entered disabled state
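
  The trace shows the OOM firing against the container's own memory cgroup
  (usage 33376kB equals the 33376kB limit), not host memory pressure. As a
  hypothetical workaround sketch, not a confirmed fix, the per-container
  limit could be raised in the container config; the key name below assumes
  the cgroup-v1 hierarchy used by the 4.15 kernel in this report, and the
  128M value is an arbitrary example:

  ```
  # Hypothetical workaround, not a confirmed fix: raise the container's
  # memory limit above the 33376kB at which the OOM fired. Key name assumes
  # cgroup v1, as used by the 4.15 kernel in this report.
  lxc.cgroup.memory.limit_in_bytes = 128M
  ```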

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-kernel-tests/+bug/1991519/+subscriptions

