Hello all.

After a disk was replaced, we noticed that cephadm does not recreate the OSD.
Tracing it back to the pvs command, I ended up at this issue:
https://tracker.ceph.com/issues/62862 and this PR:
https://github.com/ceph/ceph/pull/53500. The PR is unfortunately closed.

Is this a non-bug? I tried to replicate this pvs behavior on various OSes, and
it looks like this is the expected behavior, at least on the pvs side, when an
LV in the middle of the disk has been deleted.

Example with a simple VG containing 5 LVs, where lv3 was deleted:
```
root@debian:~# pvs --readonly -o pv_name,vg_name,lv_name
  PV         VG    LV
  /dev/vdb   newvg lv1
  /dev/vdb   newvg lv2
  /dev/vdb   newvg
  /dev/vdb   newvg lv4
  /dev/vdb   newvg lv5
  /dev/vdb   newvg
```
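
For anyone who wants to reproduce this, here is a minimal sketch of the setup
I used (the disk name /dev/vdb and the sizes are assumptions from my test VM,
adjust to your system):
```
# Assumed scratch disk: /dev/vdb (~60G)
pvcreate /dev/vdb
vgcreate newvg /dev/vdb

# Create five 10G LVs, then delete the one in the middle.
for i in 1 2 3 4 5; do
    lvcreate -L 10G -n lv$i newvg
done
lvremove -y /dev/newvg/lv3

# The freed extents now sit between lv2 and lv4, so pvs reports two
# segments with no LV: the hole and the free space at the end.
pvs --readonly -o pv_name,vg_name,lv_name
```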

This output can seem weird, but it makes sense if we expand it with the
segment labels:
```
root@debian:~# pvs --segments -o+lv_name,seg_start_pe,segtype
  PV         VG    Fmt  Attr PSize   PFree   Start SSize LV  Start Type
  /dev/vdb   newvg lvm2 a--  <60.00g <20.00g     0  2560 lv1     0 linear
  /dev/vdb   newvg lvm2 a--  <60.00g <20.00g  2560  2560 lv2     0 linear
  /dev/vdb   newvg lvm2 a--  <60.00g <20.00g  5120  2560         0 free
  /dev/vdb   newvg lvm2 a--  <60.00g <20.00g  7680  2560 lv4     0 linear
  /dev/vdb   newvg lvm2 a--  <60.00g <20.00g 10240  2560 lv5     0 linear
  /dev/vdb   newvg lvm2 a--  <60.00g <20.00g 12800  2559         0 free
```

So the two free segments (the hole left by lv3 and the free space at the end)
each produce a row for /dev/vdb with an empty LV field, and that seems to be
what causes the duplication.
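
If it comes to a workaround, one option might be to deduplicate the output
before it is interpreted. A minimal sketch in plain shell/awk, just to
illustrate the idea (this is not what ceph-volume actually does internally):
```
# Hypothetical filter: drop rows whose LV field is empty (free segments)
# and list each remaining PV/VG/LV triple only once.
pvs --readonly --noheadings -o pv_name,vg_name,lv_name \
  | awk 'NF >= 3' \
  | sort -u
```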

Would someone have a look at this issue? Or should we look into a workaround
on our side?

Thanks,
Luis