** Description changed:

  See
  https://canonical.lightning.force.com/lightning/r/Case/5004K000009WBzrQAG/view
  for more info
  
  [Impact]
- Using nvme hardware that uses swiotlb in confidential VMs can encounter hardware read/write errors.
- 
+ Using nvme hardware that uses in confidential VMs can encounter hardware read/write errors.
  
  [Fix]
  
  The following upstream patches address this:
  
  3d2d861eb03e nvme-pci: set min_align_mask
  1f221a0d0dbf swiotlb: respect min_align_mask
  16fc3cef33a0 swiotlb: don't modify orig_addr in swiotlb_tbl_sync_single
  26a7e094783d swiotlb: refactor swiotlb_tbl_map_single
  ca10d0f8e530 swiotlb: clean up swiotlb_tbl_unmap_single
  c32a77fd1878 swiotlb: factor out a nr_slots helper
  c7fbeca757fe swiotlb: factor out an io_tlb_offset helper
  b5d7ccb7aac3 swiotlb: add a IO_TLB_SIZE define
  
- 
  [Test]
  
  Using a confidential VM, with 'swiotlb=force' set on the kernel command
- line, and an additional swiotlb nvme device attached:
+ line, and an additional nvme device attached:
  
- $ sudo mkfs.xfs -f /dev/nvme2n1                                
- meta-data=/dev/nvme2n1           isize=512    agcount=4, agsize=131072 blks   
  
-          =                       sectsz=512   attr=2, projid32bit=1           
  
-          =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
  
- data     =                       bsize=4096   blocks=524288, imaxpct=25       
  
-          =                       sunit=0      swidth=0 blks                   
  
- naming   =version 2              bsize=4096   ascii-ci=0 ftype=1              
  
- log      =internal log           bsize=4096   blocks=2560, version=2          
  
-          =                       sectsz=512   sunit=0 blks, lazy-count=1      
  
- realtime =none                   extsz=4096   blocks=0, rtextents=0           
  
+ $ sudo mkfs.xfs -f /dev/nvme2n1
+ meta-data=/dev/nvme2n1           isize=512    agcount=4, agsize=131072 blks
+          =                       sectsz=512   attr=2, projid32bit=1
+          =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
+ data     =                       bsize=4096   blocks=524288, imaxpct=25
+          =                       sunit=0      swidth=0 blks
+ naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
+ log      =internal log           bsize=4096   blocks=2560, version=2
+          =                       sectsz=512   sunit=0 blks, lazy-count=1
+ realtime =none                   extsz=4096   blocks=0, rtextents=0
  mkfs.xfs: pwrite failed: Input/output error
- 
  
  Note the input/output error.
  
  The error no longer happens with the fixes applied.
  
+ 
  [Regression Potential]
+ 
+ Low risk, as the patches are mostly clean-ups and refactoring.
+ A regression in swiotlb could cause hardware read/write errors.

** Also affects: linux-gcp (Ubuntu)
   Importance: Undecided
       Status: New

** No longer affects: linux-oracle (Ubuntu)

** Also affects: linux-gcp (Ubuntu Focal)
   Importance: Undecided
       Status: New

** Also affects: linux-gcp (Ubuntu Bionic)
   Importance: Undecided
       Status: New

** Also affects: linux-gcp-5.4 (Ubuntu)
   Importance: Undecided
       Status: New

** No longer affects: linux-gcp-5.4 (Ubuntu Focal)

** No longer affects: linux-gcp (Ubuntu Bionic)

** Changed in: linux-gcp (Ubuntu)
     Assignee: (unassigned) => Khaled El Mously (kmously)

** Changed in: linux-gcp-5.4 (Ubuntu)
     Assignee: (unassigned) => Khaled El Mously (kmously)

** Changed in: linux-gcp (Ubuntu Focal)
     Assignee: (unassigned) => Khaled El Mously (kmously)

** Changed in: linux-gcp-5.4 (Ubuntu Bionic)
     Assignee: (unassigned) => Khaled El Mously (kmously)

** Description changed:

  See
  https://canonical.lightning.force.com/lightning/r/Case/5004K000009WBzrQAG/view
  for more info
  
  [Impact]
- Using nvme hardware that uses in confidential VMs can encounter hardware read/write errors.
+ Using nvme with swiotlb in confidential VMs can lead to hardware read/write errors.
  
  [Fix]
  
  The following upstream patches address this:
  
  3d2d861eb03e nvme-pci: set min_align_mask
  1f221a0d0dbf swiotlb: respect min_align_mask
  16fc3cef33a0 swiotlb: don't modify orig_addr in swiotlb_tbl_sync_single
  26a7e094783d swiotlb: refactor swiotlb_tbl_map_single
  ca10d0f8e530 swiotlb: clean up swiotlb_tbl_unmap_single
  c32a77fd1878 swiotlb: factor out a nr_slots helper
  c7fbeca757fe swiotlb: factor out an io_tlb_offset helper
  b5d7ccb7aac3 swiotlb: add a IO_TLB_SIZE define
  
  [Test]
  
  Using a confidential VM, with 'swiotlb=force' set on the kernel command
  line, and an additional nvme device attached:
  
  $ sudo mkfs.xfs -f /dev/nvme2n1
  meta-data=/dev/nvme2n1           isize=512    agcount=4, agsize=131072 blks
           =                       sectsz=512   attr=2, projid32bit=1
           =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
  data     =                       bsize=4096   blocks=524288, imaxpct=25
           =                       sunit=0      swidth=0 blks
  naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
  log      =internal log           bsize=4096   blocks=2560, version=2
           =                       sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0
  mkfs.xfs: pwrite failed: Input/output error
  
  Note the input/output error.
  
  The error no longer happens with the fixes applied.
  
- 
  [Regression Potential]
  
  Low risk, as the patches are mostly clean-ups and refactoring.
  A regression in swiotlb could cause hardware read/write errors.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-oracle in Ubuntu.
https://bugs.launchpad.net/bugs/1943902

Title:
  NVME errors in confidential vms

Status in linux-gcp package in Ubuntu:
  New
Status in linux-gcp-5.4 package in Ubuntu:
  New
Status in linux-gcp-5.4 source package in Bionic:
  New
Status in linux-gcp source package in Focal:
  New

Bug description:
  See
  https://canonical.lightning.force.com/lightning/r/Case/5004K000009WBzrQAG/view
  for more info

  [Impact]
  Using nvme with swiotlb in confidential VMs can lead to hardware read/write errors.

  [Fix]

  The following upstream patches address this:

  3d2d861eb03e nvme-pci: set min_align_mask
  1f221a0d0dbf swiotlb: respect min_align_mask
  16fc3cef33a0 swiotlb: don't modify orig_addr in swiotlb_tbl_sync_single
  26a7e094783d swiotlb: refactor swiotlb_tbl_map_single
  ca10d0f8e530 swiotlb: clean up swiotlb_tbl_unmap_single
  c32a77fd1878 swiotlb: factor out a nr_slots helper
  c7fbeca757fe swiotlb: factor out an io_tlb_offset helper
  b5d7ccb7aac3 swiotlb: add a IO_TLB_SIZE define
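  A sketch of what the series above does (my summary from the patch
  subjects, not from the bug text): nvme-pci sets min_align_mask to
  NVME_CTRL_PAGE_SIZE - 1, and swiotlb then chooses bounce-buffer slots
  so the bounced address keeps the original DMA address's offset within
  that page, as the NVMe PRP layout expects. The preserved-offset
  arithmetic can be illustrated as:

```shell
# Illustration only, not kernel code: with min_align_mask assumed to be
# NVME_CTRL_PAGE_SIZE - 1 (0xFFF), the bounce address must keep these
# low-order bits of the original DMA address.
orig_addr=$((0x12345678))
mask=$((0xFFF))
offset=$((orig_addr & mask))
printf 'offset the bounce buffer must preserve: 0x%x\n' "$offset"
```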

  [Test]

  Using a confidential VM, with 'swiotlb=force' set on the kernel
  command line, and an additional nvme device attached:

  $ sudo mkfs.xfs -f /dev/nvme2n1
  meta-data=/dev/nvme2n1           isize=512    agcount=4, agsize=131072 blks
           =                       sectsz=512   attr=2, projid32bit=1
           =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
  data     =                       bsize=4096   blocks=524288, imaxpct=25
           =                       sunit=0      swidth=0 blks
  naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
  log      =internal log           bsize=4096   blocks=2560, version=2
           =                       sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0
  mkfs.xfs: pwrite failed: Input/output error

  Note the input/output error.

  The error no longer happens with the fixes applied.
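  Before re-running the reproducer on a patched kernel, it is worth
  confirming the precondition still holds; a minimal sketch (the
  cmdline string below is a stand-in, read /proc/cmdline on the actual
  VM):

```shell
# Stand-in value; on the VM use: cmdline="$(cat /proc/cmdline)"
cmdline="BOOT_IMAGE=/boot/vmlinuz-5.4.0 root=/dev/sda1 swiotlb=force"
case " $cmdline " in
  *" swiotlb=force "*) echo "swiotlb forced, reproducer applies" ;;
  *) echo "swiotlb not forced, test precondition missing" ;;
esac
```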

  [Regression Potential]

  Low risk, as the patches are mostly clean-ups and refactoring.
  A regression in swiotlb could cause hardware read/write errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-gcp/+bug/1943902/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp
