With 15.2.8, when applying an OSD service spec, db_devices is gone.
Here is the service spec file:
==
service_type: osd
service_id: osd-spec
placement:
  hosts:
  - ceph-osd-1
spec:
  objectstore: bluestore
  data_devices:
    rotational: 1
  db_devices:
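For what it's worth, the orchestrator can dump the spec it actually stored after the
apply, which makes a dropped db_devices section visible. A rough sketch, not a verified
recipe (osd-spec.yml stands for whatever file holds the spec above; --dry-run needs a
recent Octopus):

# ceph orch apply osd -i osd-spec.yml --dry-run   # preview which devices would match
# ceph orch apply osd -i osd-spec.yml             # apply the spec
# ceph orch ls osd --export                       # dump the stored spec; compare with the input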
OK, found it. The second line of the error message actually gives it away:
stderr: 2021-02-06 13:48:27.477 7f46756b4b80 -1 bdev(0x561db199c700
/var/lib/ceph/osd/ceph-342//block) _aio_start io_setup(2) failed with EAGAIN;
try increasing /proc/sys/fs/aio-max-nr
On my system, the default is rather low.
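For reference, checking and raising the limit looks roughly like this (the 1048576
value and the sysctl.d file name are only examples; size it to your OSD count):

# cat /proc/sys/fs/aio-nr /proc/sys/fs/aio-max-nr             # contexts in use vs. the limit
# sysctl -w fs.aio-max-nr=1048576                             # raise until the next reboot
# echo 'fs.aio-max-nr = 1048576' > /etc/sysctl.d/99-aio.conf  # make it persistent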
I just noticed one difference between the two servers:
"Broken" server:
# lvm vgs
Failed to set up async io, using sync io.
VG  #PV #LV #SN Attr VSize VFree
[listing follows]
"Good" server:
# lvm vgs
VG  #PV #LV #SN Attr VSize VFree
[listing follows]
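If that "Failed to set up async io" warning really is the aio limit biting, the
counters should confirm it. Something like this on both boxes (on the broken one,
aio-nr should sit at or near aio-max-nr):

# grep -H . /proc/sys/fs/aio-nr /proc/sys/fs/aio-max-nr   # current vs. maximum aio contexts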
Hi Dave and everyone else affected,
I'm responding to a thread you opened on an issue with lvm OSD creation:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/YYH3VANVV22WGM3CNL4TN4TTL63FCEVD/
https://tracker.ceph.com/issues/43868
Most important question: is there a workaround?
My
> How do you achieve that? 2 hours?
That's a long story. The short one: by taking a wrong path while troubleshooting.
I should have stayed with my checklist instead. This is the whole point of the
redundancy remark I made: one admin mistake doesn't hurt, and you are less
likely to panic if one
> - three servers as recommended by Proxmox (with 10Gb Ethernet and so on)
> - size=3 and min_size=2 as recommended by Ceph
You forgot the Ceph recommendation* to provide sufficient failover capacity in
case a failure domain or disk fails. The recommendation would be to have 4
hosts with 25% capacity free.
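Back-of-the-envelope, with my own numbers rather than anything from the official
docs: with 4 equal hosts at utilization u, losing one host means the surviving 3
must absorb its data to get back to size=3, so you need

    3 * (1 - u) >= u   =>   u <= 0.75

i.e. keep every host below ~75% full, which is where the 25% headroom comes from.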