Hi,
I'm wondering whether librbd has support for Python asyncio,
or whether there is any plan to add it?
Thanks!
Tony
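
As far as I know, the rbd/rados Python bindings don't offer a native asyncio
interface; one common workaround is to push the blocking librbd calls into a
thread-pool executor. A minimal sketch, assuming a pool named "rbd", an image
named "test-image" and the default conffile path purely as placeholders:

import asyncio
import rados
import rbd

def read_blocking(pool, image_name, offset, length):
    # Plain blocking read through the regular librbd bindings.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            image = rbd.Image(ioctx, image_name)
            try:
                return image.read(offset, length)
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

async def read_async(pool, image_name, offset, length):
    # Run the blocking call in the default thread pool so the
    # asyncio event loop is not blocked while librbd works.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(
        None, read_blocking, pool, image_name, offset, length)

# Example usage:
# asyncio.run(read_async('rbd', 'test-image', 0, 4096))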
Hello,
I've tried to add to the Ceph cluster an OSD node with 12 rotational
disks and 1 NVMe. My YAML was this:
service_type: osd
service_id: osd_spec_default
service_name: osd.osd_spec_default
placement:
  host_pattern: osd8
spec:
  block_db_size: 64G
  data_devices:
    rotational: 1
  db_devices:
Hi,
if you don't specify a different device for the WAL, it will automatically
be colocated on the same device as the DB. So you're good with this
configuration.
Regards,
Eugen
Quoting Jan Marek:
Hello,
I've tried to add to the Ceph cluster an OSD node with 12 rotational
disks and 1 NVMe. My
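
For illustration only: if the WAL should ever live on its own device instead
of being colocated with the DB, the drive group spec can name it explicitly
via wal_devices / block_wal_size. This is a hypothetical sketch - the
service_id, the device filters and the 2G WAL size are placeholders, not
values taken from the original setup:

service_type: osd
service_id: osd_spec_split_wal
placement:
  host_pattern: osd8
spec:
  block_db_size: 64G
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  # only needed when the WAL should not share the DB device
  wal_devices:
    model: PLACEHOLDER-NVME-MODEL
  block_wal_size: 2G

With a single NVMe per node, colocating the WAL with the DB (as in the
original spec) is the usual choice, in which case there is no separate
[wal] volume for ceph-volume to list.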
Hello,
I have a cluster which has this configuration:
osd pool default size = 3
osd pool default min size = 1
I have 5 monitor nodes and 7 OSD nodes.
I have changed the CRUSH map to divide the Ceph cluster into two
datacenters - in the first one there will be a part of the cluster with 2
copies of the data and in
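
A rough sketch of the kind of CRUSH hierarchy commands such a datacenter
split typically involves; the bucket names dc1/dc2 and the host name are
placeholders, not taken from this cluster:

# create two datacenter buckets under the default root
ceph osd crush add-bucket dc1 datacenter
ceph osd crush add-bucket dc2 datacenter
ceph osd crush move dc1 root=default
ceph osd crush move dc2 root=default

# move each OSD host into its datacenter (repeat per host)
ceph osd crush move osd1 datacenter=dc1

Placing an asymmetric number of copies per datacenter (e.g. 2 copies in the
first one) additionally needs a custom CRUSH rule; the commands above only
build the bucket hierarchy.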
Hello,
but when I try to list the device configuration with ceph-volume, I can see
the DB devices, but no WAL devices:
ceph-volume lvm list
== osd.8 ===
  [db]          /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
      block device              /dev