Hi,
I've installed ceph mon and mgr as a cluster on 3 nodes, and I see these errors in the status and logs of the standby mgr:
Ubuntu 20.04
Ceph version: 15.2.5
● ceph-mgr@dev13.service - Ceph cluster manager daemon
Loaded: loaded (/lib/systemd/system/ceph-mgr@.service; enabled; v
Yeah, I forgot to mention this. Our HDD OSDs are the simplest set-up: WAL, DB,
and BLOCK all collocated on the HDD. My plan for the future is to use dm-cache for
LVM OSDs instead of a WAL/DB device. Then I might also see some more CPU
utilisation with small-file I/O. From the question and the suggested pe
I found this here:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/configuration_guide/osd_configuration_reference#operations
Nothing in the Ceph docs. It would be interesting to know what a shard is
and what it does. Can anyone shed a bit of light on this?
Thanks!
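For what it's worth, a "shard" here is a slice of the OSD's op work queue: the queue is split into osd_op_num_shards independent sub-queues, each with its own lock and worker thread(s), and each PG is pinned to one shard so per-PG op ordering is preserved while shards run in parallel. A minimal Python sketch of that idea (illustrative only; the shard count and the simple modulo mapping are assumptions for the demo, not Ceph's actual implementation):

```python
# Illustrative sketch (not Ceph source code): ops for a given PG always
# land on the same shard, preserving per-PG ordering, while different
# shards can be drained by separate worker threads in parallel.

from collections import defaultdict

NUM_SHARDS = 5  # cf. osd_op_num_shards_hdd; an assumed value for this demo


def shard_for_pg(pg_id: int, num_shards: int = NUM_SHARDS) -> int:
    """Pin a PG to a shard (a modulo mapping, assumed for illustration)."""
    return pg_id % num_shards


def enqueue_ops(ops):
    """ops: iterable of (pg_id, op_name); returns per-shard FIFO queues."""
    shards = defaultdict(list)
    for pg_id, op in ops:
        shards[shard_for_pg(pg_id)].append((pg_id, op))
    return shards


if __name__ == "__main__":
    ops = [(12, "write"), (6, "read"), (12, "write2"), (3, "read")]
    shards = enqueue_ops(ops)
    # Both ops for PG 12 sit on the same shard, in submission order:
    print(shards[shard_for_pg(12)])  # → [(12, 'write'), (12, 'write2')]
```

The practical upshot is that more shards means less lock contention across PGs, but a single hot PG still serialises on its one shard.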
My plan is to use at least 500GB NVMe per HDD OSD. I have not started that yet,
but there are threads of other people sharing their experience. If you go
beyond 300GB per OSD, apparently the WAL/DB options cannot really use the extra
capacity. With dm-cache or the like you would additionally sta
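The ~300 GB figure ties back to RocksDB's level sizing: assuming BlueStore's default RocksDB tuning (roughly a 256 MB base level and a 10x size multiplier per level, which is an assumption here), a level spills to the slow device unless it fits on the fast one entirely, so only DB partition sizes just above the cumulative level boundaries of roughly 3, 30, or 300 GB are fully used. A back-of-the-envelope sketch:

```python
# Rough arithmetic behind the "~300 GB cap": RocksDB grows in levels,
# each ~10x the previous (assumed: 256 MB ≈ 0.25 GB base, multiplier 10).
# A DB partition is only fully utilised if a whole level fits on it, so
# the useful sizes are the cumulative sums of the level sizes.

def useful_db_sizes(base_gb: float = 0.25, multiplier: int = 10, levels: int = 4):
    """Cumulative level sizes in GB, i.e. the fully-usable DB partition sizes."""
    sizes, level, total = [], base_gb, 0.0
    for _ in range(levels):
        total += level
        sizes.append(round(total, 2))
        level *= multiplier
    return sizes


print(useful_db_sizes())  # → [0.25, 2.75, 27.75, 277.75]
```

That last step explains the reports: between ~300 GB and the next boundary (~3 TB) the extra capacity sits idle, which is where dm-cache or similar could still put it to work.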
Hello,
maybe I missed the announcement, but why is the documentation for
older Ceph versions no longer accessible on docs.ceph.com?
Best,
Martin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.i