** Changed in: ceph (Ubuntu Bionic)
       Status: New => In Progress

** Changed in: ceph (Ubuntu Disco)
       Status: New => In Progress

** Changed in: ceph (Ubuntu Disco)
     Assignee: (unassigned) => James Page (james-page)

** Changed in: ceph (Ubuntu Bionic)
     Assignee: (unassigned) => James Page (james-page)

** Changed in: ceph (Ubuntu Bionic)
   Importance: Undecided => High

** Changed in: ceph (Ubuntu Disco)
   Importance: Undecided => High

** Description changed:

+ [Impact]
+ For deployments where the bluestore DB and WAL devices are placed on
+ separate underlying block devices, it's possible on reboot that the LVs
+ configured on these devices have not yet been scanned and detected; the
+ OSD boot process ignores this and tries to start the OSD anyway as soon
+ as the primary LV backing the OSD is detected, resulting in the OSD
+ crashing because the required block device symlinks are not present.
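+ 
+ A quick way to see the failure mode on an affected OSD (the paths follow
+ the usual ceph-volume bluestore layout; osd.0 is only an example):
+ 
+   # each of these should be a symlink resolving to a block device;
+   # on an affected boot block.db / block.wal are missing or dangling
+   for link in block block.db block.wal; do
+       readlink -e /var/lib/ceph/osd/ceph-0/$link || echo "$link not ready"
+   done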
+ 
+ [Test Case]
+ Deploy ceph with bluestore + separate DB and WAL devices.
+ Reboot the servers.
+ OSDs will fail to start after reboot (it's a race, so not always).
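+ 
+ A rough way to spot the failure after a reboot (assumes a systemd-managed
+ ceph-osd deployment; unit and command names are the stock ones):
+ 
+   systemctl --failed | grep ceph-osd     # failed OSD units, if any
+   ceph osd tree | grep down              # OSDs that did not come back up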
+ 
+ [Regression Potential]
+ Low - the fix has landed upstream and simply ensures that if a separate
+ LV is expected for the DB or WAL device of an OSD, the OSD will not try
+ to boot until it is present.
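+ 
+ To illustrate the behaviour the fix enforces (this is only a sketch, not
+ the upstream patch; the LV paths are hypothetical):
+ 
+   # do not activate the OSD until every device it expects is present
+   for dev in /dev/vg0/osd0-block /dev/vg1/osd0-db /dev/vg2/osd0-wal; do
+       until [ -b "$dev" ]; do sleep 1; done
+   done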
+ 
+ [Original Bug Report]
  Ubuntu 18.04.2 Ceph deployment.
  
  Ceph OSD devices utilize LVM volumes pointing to udev-based physical
  devices.
  The LVM module is supposed to create PVs from these devices using the
  links in the /dev/disk/by-dname/ folder that are created by udev.
  However, on reboot it sometimes happens (not always - it is rather a
  race condition) that the Ceph services cannot start and pvdisplay does
  not show any volumes, even though the /dev/disk/by-dname/ folder has all
  the necessary device links created by the end of the boot process.
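  
  On an affected boot the udev links are present but LVM has not scanned
  them yet, e.g. (device names are examples only):
  
    ls -l /dev/disk/by-dname/     # links exist
    pvdisplay                     # ...but the expected PVs are not listed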
  
  The behaviour can be fixed manually by running "/sbin/lvm pvscan
  --cache --activate ay /dev/nvme0n1" to re-activate the LVM
  components, after which the services can be started.
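  
  Manual recovery sketch (the device name and OSD id are examples only):
  
    /sbin/lvm pvscan --cache --activate ay /dev/nvme0n1
    pvdisplay                      # PVs should now be visible
    systemctl start ceph-osd@0     # or: systemctl start ceph-osd.target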

** Also affects: cloud-archive
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/pike
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1828617

Title:
  Hosts randomly 'losing' disks, breaking ceph-osd service enumeration

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1828617/+subscriptions

