Dear Cephalopodians,

With the release of the new ceph-deploy, we are thinking about migrating our
BlueStore OSDs (currently created with ceph-disk via the old ceph-deploy)
to OSDs created via ceph-volume (with LVM).

I note two major changes:
1. It seems the block.db partitions now have to be created beforehand,
   manually. With ceph-disk, one was not supposed to do that, unless one
   also set the correct PARTTYPE ID by hand.
   Will ceph-volume now take care of setting the PARTTYPE on existing
   block.db partitions, or is that no longer necessary at all?
   Is the config option bluestore_block_db_size now also obsolete?
   (See the sketch after this list for what I would try.)

2. Activation no longer works via udev, which removes some race conditions.
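
For clarity, here is a sketch of what I currently understand the new flow
for point 1 to be, with hypothetical device names (/dev/sdb for the data
device, /dev/nvme0n1 for the block.db device) and an arbitrary example size:

    # Pre-create the block.db partition manually:
    sgdisk --new=1:0:+30G /dev/nvme0n1
    # Then hand both devices to ceph-volume:
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

Is that roughly correct, and does ceph-volume then simply ignore the
PARTTYPE of the pre-created partition?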

This second major change makes me curious: how does activation work now?
In the past, I could reinstall the full OS, install the Ceph packages,
trigger udev (or reboot), and all OSDs would come back, without any state
being stored or any services being activated in the OS.
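
From what I gather from the documentation, the metadata needed for
activation now lives in LVM tags on the logical volumes themselves, so it
should be discoverable at any time with something like:

    # Show the ceph.* tags (osd id, osd fsid, type, ...) on the LVs:
    lvs -o lv_name,lv_tags
    # Or use the human-readable view that ceph-volume provides:
    ceph-volume lvm list

Is that the mechanism which replaces the udev-based activation?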

Does this still work?
Or is a manual step needed to restore the ceph-osd@ID-UUID services, which
at first glance appear to store state (namely, the ID and UUID)?

If that is the case:
- What is this magic manual step? (See the sketch after this list for what
  I would try.)
- Is it still possible to swap two disks within the same OSD host without
  issues?
  I would guess so, since the services should detect the disks during the
  ceph-volume trigger phase.
- Is it still possible to take a disk from one OSD host and put it into
  another one, or does this now require manual interaction?
  With ceph-disk / udev it did not, since udev triggered the disk
  activation and the service was then created at runtime.
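
As mentioned above, here is the sketch of what I would naively try after a
fresh OS install (or after moving a disk to another host). Assuming I read
the docs correctly, this single step rediscovers all OSDs from the LVM tags
and re-enables the corresponding systemd units:

    ceph-volume lvm activate --all

If that is really all there is to it, the new scheme would come close to
the old stateless behaviour. Please correct me if I am missing something.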

Many thanks for your help and cheers,
        Oliver
