On 24-03-2025 08:54, Eugen Block wrote:
Hi Torkil,

Hi Eugen

I feel like this is some kind of corner case with DB devices of different sizes. I'm not really surprised that ceph-volume doesn't handle that the way you would expect. Maybe one of the devs can chime in here. Did you eventually manage to deploy all the OSDs?

We are now unable to deploy any more OSDs. We've tried a number of things, but it fails even with just one HDD and any single one of the NVMes in the service spec, citing that none of the passed fast devices are available. Every LVM command we can think of says there's plenty of room.

It looks to me like two issues: one is ceph-volume choking on our configuration of NVMe DB devices with different sizes, and the other is something else which blocks the NVMes entirely. I've filed a ticket for this:

https://tracker.ceph.com/issues/70652
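
For anyone who wants to look, this is roughly how the "not available" claim can be checked from the orchestrator's side as well (untested as written here, the device name is just an example from this host):

"
# What does the orchestrator think about the devices on this host?
ceph orch device ls franky --wide --refresh

# And what does ceph-volume itself report for one of the NVMes?
cephadm ceph-volume -- inventory /dev/nvme2n1
"

Both commands only report, they don't change anything.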

"
...
nvme2n1           259:2    0   2.9T  0 disk
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--d28b3fa5--5ace--49a1--ad3d--fc4d14f1b8db
│                 253:14   0 270.1G  0 lvm
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--1ddcc845--799a--4d0d--96f1--90078e2cf0cf
│                 253:15   0 270.1G  0 lvm
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--d4f3c31f--2327--4662--9517--f86dbe35c510
│                 253:17   0 270.1G  0 lvm
└─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--eebae904--4ae7--4550--8c3a--0af3ef1fec1c
                  253:18   0 270.1G  0 lvm
nvme0n1           259:3    0   2.9T  0 disk
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--12f47f4e--a197--4ecf--a021--94b135039661
│                 253:22   0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--da108348--fc04--4e89--823d--5ebdf26e0408
│                 253:23   0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--ee274757--51d8--48e3--a41e--b2b321da7170
│                 253:24   0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--79ce0245--338a--432c--92dd--1437dcaf3917
│                 253:25   0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--e4e2b277--03c3--47aa--bf32--12f63faee4e5
│                 253:26   0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--70571884--5e52--45d5--9517--43a4329dec98
│                 253:28   0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--66e50d61--9bc1--46f5--8a60--83272c98a875
│                 253:29   0 270.1G  0 lvm
└─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--bed0e59b--66d7--4e81--ad7c--dff430690f8e
                  253:31   0 270.1G  0 lvm
nvme1n1           259:6    0   1.5T  0 disk
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--36f82df5--953b--450a--b9d7--5e2ba334a0e7
│                 253:37   0 270.1G  0 lvm
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--857ec827--ff63--43d4--a4e8--43681ad8229b
│                 253:39   0 270.1G  0 lvm
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--b370a498--d3b9--4ddd--b752--ab95e86bc027
│                 253:41   0 270.1G  0 lvm
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--5468cc56--54aa--4965--8a32--cf4d6b29fb3a
│                 253:42   0 270.1G  0 lvm
└─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--6d4941f5--96d6--4e67--81f5--77780e1a3ab0
                  253:43   0 270.1G  0 lvm
nvme3n1           259:7    0   1.5T  0 disk
├─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--3f4c63c1--e23c--4390--a034--54d4a224b2a2
│                 253:3    0 270.1G  0 lvm
├─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--a45b02f5--ad6d--453d--91bc--8a52f1bfa533
│                 253:4    0 270.1G  0 lvm
├─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--d34e427d--749f--460f--bc15--db5ab3900a8e
│                 253:5    0 270.1G  0 lvm
├─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--2342ed1b--8eec--4fc6--9176--bf5a149d3c30
│                 253:6    0 270.1G  0 lvm
└─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--e305cdcf--4a1a--435e--9d61--65c672b3ca5b
                  253:8    0 270.1G  0 lvm

[root@franky ~]# vgs
VG                                        #PV #LV #SN Attr   VSize  VFree
ceph-20c0fa86-7668-4139-bf39-f7bb8b2e1623   1   5   0 wz--n- <1.46t 140.00g
ceph-9fc0a64d-9ab8-4b12-9a8f-6a48e6c95211   1   4   0 wz--n-  2.91t  <1.86t
ceph-c2aeb797-fc4a-4054-9d12-7c5550ac1641   1   8   0 wz--n-  2.91t 820.16g
ceph-f1df0c25-920d-4d13-ba48-ad56302a4099   1   5   0 wz--n- <1.46t 140.00g
...

[root@franky ~]# pvs
PV           VG                                        Fmt  Attr PSize  PFree
/dev/nvme0n1 ceph-c2aeb797-fc4a-4054-9d12-7c5550ac1641 lvm2 a--   2.91t 820.16g
/dev/nvme1n1 ceph-f1df0c25-920d-4d13-ba48-ad56302a4099 lvm2 a--  <1.46t 140.00g
/dev/nvme2n1 ceph-9fc0a64d-9ab8-4b12-9a8f-6a48e6c95211 lvm2 a--   2.91t  <1.86t
/dev/nvme3n1 ceph-20c0fa86-7668-4139-bf39-f7bb8b2e1623 lvm2 a--  <1.46t 140.00g
...

[root@franky ~]# lvs | grep db | grep -v block
osd-db-2342ed1b-8eec-4fc6-9176-bf5a149d3c30 ceph-20c0fa86-7668-4139-bf39-f7bb8b2e1623 -wi-ao---- 270.08g
osd-db-3f4c63c1-e23c-4390-a034-54d4a224b2a2 ceph-20c0fa86-7668-4139-bf39-f7bb8b2e1623 -wi-ao---- 270.08g
osd-db-a45b02f5-ad6d-453d-91bc-8a52f1bfa533 ceph-20c0fa86-7668-4139-bf39-f7bb8b2e1623 -wi-ao---- 270.08g
osd-db-d34e427d-749f-460f-bc15-db5ab3900a8e ceph-20c0fa86-7668-4139-bf39-f7bb8b2e1623 -wi-ao---- 270.08g
osd-db-e305cdcf-4a1a-435e-9d61-65c672b3ca5b ceph-20c0fa86-7668-4139-bf39-f7bb8b2e1623 -wi-ao---- 270.08g
osd-db-1ddcc845-799a-4d0d-96f1-90078e2cf0cf ceph-9fc0a64d-9ab8-4b12-9a8f-6a48e6c95211 -wi-ao---- 270.08g
osd-db-d28b3fa5-5ace-49a1-ad3d-fc4d14f1b8db ceph-9fc0a64d-9ab8-4b12-9a8f-6a48e6c95211 -wi-ao---- 270.08g
osd-db-d4f3c31f-2327-4662-9517-f86dbe35c510 ceph-9fc0a64d-9ab8-4b12-9a8f-6a48e6c95211 -wi-ao---- 270.08g
osd-db-eebae904-4ae7-4550-8c3a-0af3ef1fec1c ceph-9fc0a64d-9ab8-4b12-9a8f-6a48e6c95211 -wi-ao---- 270.08g
osd-db-12f47f4e-a197-4ecf-a021-94b135039661 ceph-c2aeb797-fc4a-4054-9d12-7c5550ac1641 -wi-ao---- 270.08g
osd-db-66e50d61-9bc1-46f5-8a60-83272c98a875 ceph-c2aeb797-fc4a-4054-9d12-7c5550ac1641 -wi-ao---- 270.08g
osd-db-70571884-5e52-45d5-9517-43a4329dec98 ceph-c2aeb797-fc4a-4054-9d12-7c5550ac1641 -wi-ao---- 270.08g
osd-db-79ce0245-338a-432c-92dd-1437dcaf3917 ceph-c2aeb797-fc4a-4054-9d12-7c5550ac1641 -wi-ao---- 270.08g
osd-db-bed0e59b-66d7-4e81-ad7c-dff430690f8e ceph-c2aeb797-fc4a-4054-9d12-7c5550ac1641 -wi-ao---- 270.08g
osd-db-da108348-fc04-4e89-823d-5ebdf26e0408 ceph-c2aeb797-fc4a-4054-9d12-7c5550ac1641 -wi-ao---- 270.08g
osd-db-e4e2b277-03c3-47aa-bf32-12f63faee4e5 ceph-c2aeb797-fc4a-4054-9d12-7c5550ac1641 -wi-ao---- 270.08g
osd-db-ee274757-51d8-48e3-a41e-b2b321da7170 ceph-c2aeb797-fc4a-4054-9d12-7c5550ac1641 -wi-ao---- 270.08g
osd-db-36f82df5-953b-450a-b9d7-5e2ba334a0e7 ceph-f1df0c25-920d-4d13-ba48-ad56302a4099 -wi-ao---- 270.08g
osd-db-5468cc56-54aa-4965-8a32-cf4d6b29fb3a ceph-f1df0c25-920d-4d13-ba48-ad56302a4099 -wi-ao---- 270.08g
osd-db-6d4941f5-96d6-4e67-81f5-77780e1a3ab0 ceph-f1df0c25-920d-4d13-ba48-ad56302a4099 -wi-ao---- 270.08g
osd-db-857ec827-ff63-43d4-a4e8-43681ad8229b ceph-f1df0c25-920d-4d13-ba48-ad56302a4099 -wi-ao---- 270.08g
osd-db-b370a498-d3b9-4ddd-b752-ab95e86bc027 ceph-f1df0c25-920d-4d13-ba48-ad56302a4099 -wi-ao---- 270.08g
...
"

Mvh.

Torkil

Quoting Torkil Svensgaard <tor...@drcmr.dk>:

On 19/03/2025 11:31, Torkil Svensgaard wrote:


On 19/03/2025 08:33, Torkil Svensgaard wrote:
Hi

I am adding HDDs to a replacement server which will fit 34 HDDs and 2 SATA SSDs, and has 4 NVMe devices for DB/WAL.

The orchestrator now fails to create any more OSDs due to:

"
/usr/bin/podman: stderr --> 270.08 GB was requested for block_db_size, but only 248.40 GB can be fulfilled
"

So I kept adding more HDDs since that has to be done anyway and noticed that the error changed:

/usr/bin/podman: stderr --> 270.08 GB was requested for block_db_size, but only 248.40 GB can be fulfilled

Added 4 drives

/usr/bin/podman: stderr --> 270.08 GB was requested for block_db_size, but only 212.92 GB can be fulfilled

Added 4 drives

/usr/bin/podman: stderr --> 270.08 GB was requested for block_db_size, but only 186.30 GB can be fulfilled

So it looks like ceph-volume is doing some sort of aggregate calculation over all the data drives and DB devices passed, and concludes there is too little room for some reason.
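
For what it's worth, the three numbers above do fit one pattern: take the smallest NVMe, divide it by ceil(number of HDDs / number of NVMes) and compare that to block_db_size, regardless of how much room the larger NVMes have. Assuming the 1.5T NVMes are nominal 1.6 TB drives (1600321314816 bytes) and 21, 25 and 29 HDDs spread over 4 NVMes, i.e. 6, 7 and 8 DB slots per device:

"
awk 'BEGIN {
  gib = 1600321314816 / 1024 / 1024 / 1024   # 1.6 TB NVMe in GiB (ceph-volume prints GiB as "GB")
  for (slots = 6; slots <= 8; slots++)       # ceil(21/4), ceil(25/4), ceil(29/4)
    printf "%d slots -> %.2f GB per DB\n", slots, gib / slots
}'
6 slots -> 248.40 GB per DB
7 slots -> 212.92 GB per DB
8 slots -> 186.30 GB per DB
"

That is just our guess at the formula, not something we have confirmed in the ceph-volume code.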

Workaround seems to be adding the drives one at a time.

Or not, that only worked for a few drives, until every NVMe had 5 db partitions. We can't get the math to fit exactly, but we have this:

"
nvme2n1    259:2    0   2.9T  0 disk
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--d28b3fa5--5ace--49a1--ad3d--fc4d14f1b8db
│    253:7    0 270.1G  0 lvm
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--feb3bd0f--858b--45c5--a1ba--c0c77f34dc0d
│    253:15   0 270.1G  0 lvm
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--1ddcc845--799a--4d0d--96f1--90078e2cf0cf
│    253:21   0 270.1G  0 lvm
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--d4f3c31f--2327--4662--9517--f86dbe35c510
│    253:27   0 270.1G  0 lvm
└─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--eebae904--4ae7--4550--8c3a--0af3ef1fec1c
     253:29   0 270.1G  0 lvm
nvme0n1    259:3    0   2.9T  0 disk
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--12f47f4e--a197--4ecf--a021--94b135039661
│    253:3    0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--da108348--fc04--4e89--823d--5ebdf26e0408
│    253:19   0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--ee274757--51d8--48e3--a41e--b2b321da7170
│    253:25   0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--79ce0245--338a--432c--92dd--1437dcaf3917
│    253:35   0 270.1G  0 lvm
└─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--e4e2b277--03c3--47aa--bf32--12f63faee4e5
     253:37   0 270.1G  0 lvm
nvme3n1    259:6    0   1.5T  0 disk
├─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--3f4c63c1--e23c--4390--a034--54d4a224b2a2
│    253:5    0 270.1G  0 lvm
├─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--a45b02f5--ad6d--453d--91bc--8a52f1bfa533
│    253:13   0 270.1G  0 lvm
├─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--d34e427d--749f--460f--bc15--db5ab3900a8e
│    253:39   0 270.1G  0 lvm
├─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--2342ed1b--8eec--4fc6--9176--bf5a149d3c30
│    253:41   0 270.1G  0 lvm
└─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--e305cdcf--4a1a--435e--9d61--65c672b3ca5b
     253:43   0 270.1G  0 lvm
nvme1n1    259:7    0   1.5T  0 disk
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--36f82df5--953b--450a--b9d7--5e2ba334a0e7
│    253:9    0 270.1G  0 lvm
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--857ec827--ff63--43d4--a4e8--43681ad8229b
│    253:17   0 270.1G  0 lvm
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--b370a498--d3b9--4ddd--b752--ab95e86bc027
│    253:23   0 270.1G  0 lvm
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--5468cc56--54aa--4965--8a32--cf4d6b29fb3a
│    253:31   0 270.1G  0 lvm
└─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--6d4941f5--96d6--4e67--81f5--77780e1a3ab0
     253:33   0 270.1G  0 lvm
"

The two 1.5T NVMes do not have room for any more 270G db partitions, but the two 2.9T ones have plenty of room.

The error is similar to what I had originally, so I think ceph-volume is simply trying to use one of the small NVMes and not the bigger ones with free space.
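
A possible workaround might be to narrow the db_devices size filter so that only the 2.9T NVMes match, and dry-run the spec first. Something like this (untested; the filename and the 2000G lower bound are just examples):

"
cat <<'EOF' > osd-slow.yaml
service_type: osd
service_id: slow
placement:
  host_pattern: '*'
spec:
  block_db_size: 290000000000
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    size: 2000G:7000G   # excludes the full 1.5T NVMes, keeps the 2.9T ones
  filter_logic: AND
  objectstore: bluestore
EOF
ceph orch apply -i osd-slow.yaml --dry-run
"

With --dry-run the orchestrator should only show what it would deploy, not create anything.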

"
2025-03-19T11:19:00.031007+0000 mgr.ceph-flash1.erhakb [ERR] Failed to apply osd.slow spec DriveGroupSpec.from_json(yaml.safe_load('''service_type: osd
service_id: slow
service_name: osd.slow
placement:
  host_pattern: '*'
spec:
  block_db_size: 290000000000
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    size: 1000G:7000G
  filter_logic: AND
  objectstore: bluestore
''')): cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/config/ceph.conf Non-zero exit code 1 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -e NODE_NAME=franky -e CEPH_VOLUME_OSDSPEC_AFFINITY=slow -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/run/ceph:z -v /var/log/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/log/ceph:z -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/selinux:/sys/fs/selinux:ro -v /:/rootfs:rslave -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmpc2zygv3o:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpk9buf727:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de lvm batch --no-auto /dev/sda /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 --block-db-size 290000000000 --yes --no-systemd
/usr/bin/podman: stderr --> passed data devices: 21 physical, 0 LVM
/usr/bin/podman: stderr --> relative data size: 1.0
/usr/bin/podman: stderr --> passed block_db devices: 4 physical, 0 LVM
/usr/bin/podman: stderr --> 270.08 GB was requested for block_db_size, but only 248.40 GB can be fulfilled
Traceback (most recent call last):
  File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 5581, in <module>   File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 5569, in main   File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 409, in _infer_config   File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 324, in _infer_fsid   File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 437, in _infer_image   File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 311, in _validate_fsid   File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 3314, in command_ceph_volume   File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/cephadmlib/call_wrappers.py", line 310, in call_throws RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -e NODE_NAME=franky -e CEPH_VOLUME_OSDSPEC_AFFINITY=slow -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/run/ceph:z -v /var/log/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/log/ceph:z -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/selinux:/sys/fs/selinux:ro -v /:/rootfs:rslave -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmpc2zygv3o:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpk9buf727:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de lvm batch --no-auto /dev/sda /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 --block-db-size 290000000000 --yes --no-systemd

"

Mvh.

Torkil

Mvh.

Torkil

It looks to me like one of the 4 NVMe devices is indeed too full to fit another DB/WAL partition, but ceph-volume is being passed all 4 devices, so shouldn't it just pick another one? Also, it has been creating DB/WAL partitions across all 4 devices up until now, so it's not as if it only looks at the first device passed.
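
The plan ceph-volume computes can also be inspected without creating anything, using the batch report mode; roughly the same command the orchestrator runs, but with --report added and the device list cut down to the drives that don't have OSDs yet:

"
cephadm ceph-volume -- lvm batch --no-auto --report \
  /dev/sdv /dev/sdw /dev/sdx /dev/sdy \
  --db-devices /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
  --block-db-size 290000000000
"

--report should only print what ceph-volume would do, which might at least show which NVMe it picks and why it thinks there is too little room.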

Suggestions?

"
2025-03-19T07:20:11.898279+0000 mgr.ceph-flash1.erhakb [INF] Detected new or changed devices on franky
2025-03-19T07:21:31.960784+0000 mgr.ceph-flash1.erhakb [ERR] Failed to apply osd.slow spec DriveGroupSpec.from_json(yaml.safe_load('''service_type: osd
service_id: slow
service_name: osd.slow
placement:
   host_pattern: '*'
spec:
   block_db_size: 290000000000
   data_devices:
     rotational: 1
   db_devices:
     rotational: 0
     size: 1000G:7000G
   filter_logic: AND
   objectstore: bluestore
''')): cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/config/ceph.conf Non-zero exit code 1 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -e NODE_NAME=franky -e CEPH_VOLUME_OSDSPEC_AFFINITY=slow -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/run/ceph:z -v /var/log/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/log/ceph:z -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/selinux:/sys/fs/selinux:ro -v /:/rootfs:rslave -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmpt7wifcjq:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmphx5nhdsi:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de lvm batch --no-auto /dev/sda /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 --block-db-size 290000000000 --yes --no-systemd
/usr/bin/podman: stderr --> passed data devices: 21 physical, 0 LVM
/usr/bin/podman: stderr --> relative data size: 1.0
/usr/bin/podman: stderr --> passed block_db devices: 4 physical, 0 LVM
/usr/bin/podman: stderr --> 270.08 GB was requested for block_db_size, but only 248.40 GB can be fulfilled
Traceback (most recent call last):
   File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
     return _run_code(code, main_globals, None,
   File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
     exec(code, run_globals)
   File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 5581, in <module>    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 5569, in main    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 409, in _infer_config    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 324, in _infer_fsid    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 437, in _infer_image    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 311, in _validate_fsid    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 3314, in command_ceph_volume    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/cephadmlib/call_wrappers.py", line 310, in call_throws RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host -- stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host -- entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -- init -e CONTAINER_IMAGE=quay.io/ceph/ ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -e NODE_NAME=franky -e CEPH_VOLUME_OSDSPEC_AFFINITY=slow -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/run/ceph:z -v /var/log/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/log/ceph:z -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/selinux:/sys/fs/selinux:ro -v /:/rootfs:rslave -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmpt7wifcjq:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmphx5nhdsi:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de lvm batch --no-auto /dev/sda /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 --block-db-size 290000000000 --yes --no-systemd
Traceback (most recent call last):
   File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services
     if self._apply_service(spec):
   File "/usr/share/ceph/mgr/cephadm/serve.py", line 721, in _apply_service
     self.mgr.osd_service.create_from_spec(cast(DriveGroupSpec, spec))
   File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 79, in create_from_spec
     ret = self.mgr.wait_async(all_hosts())
   File "/usr/share/ceph/mgr/cephadm/module.py", line 815, in wait_async
     return self.event_loop.get_result(coro, timeout)
   File "/usr/share/ceph/mgr/cephadm/ssh.py", line 136, in get_result
     return future.result(timeout)
   File "/lib64/python3.9/concurrent/futures/_base.py", line 446, in result
     return self.__get_result()
   File "/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
     raise self._exception
   File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 76, in all_hosts
     return await gather(*futures)
   File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 63, in create_from_spec_one
     ret_msg = await self.create_single_host(
   File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 95, in create_single_host
     raise RuntimeError(
RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/config/ceph.conf Non-zero exit code 1 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -e NODE_NAME=franky -e CEPH_VOLUME_OSDSPEC_AFFINITY=slow -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/run/ceph:z -v /var/log/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/log/ceph:z -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/selinux:/sys/fs/selinux:ro -v /:/rootfs:rslave -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmpt7wifcjq:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmphx5nhdsi:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de lvm batch --no-auto /dev/sda /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 --block-db-size 290000000000 --yes --no-systemd
/usr/bin/podman: stderr --> passed data devices: 21 physical, 0 LVM
/usr/bin/podman: stderr --> relative data size: 1.0
/usr/bin/podman: stderr --> passed block_db devices: 4 physical, 0 LVM
/usr/bin/podman: stderr --> 270.08 GB was requested for block_db_size, but only 248.40 GB can be fulfilled
Traceback (most recent call last):
   File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
     return _run_code(code, main_globals, None,
   File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
     exec(code, run_globals)
   File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 5581, in <module>    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 5569, in main    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 409, in _infer_config    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 324, in _infer_fsid    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 437, in _infer_image    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 311, in _validate_fsid    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/__main__.py", line 3314, in command_ceph_volume    File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/ cephadm.c6d8d2eb72a60267f2844dd3167700619f1207413db7701f1827abf652e86a11/cephadmlib/call_wrappers.py", line 310, in call_throws RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host -- stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host -- entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -- init -e CONTAINER_IMAGE=quay.io/ceph/ ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de -e NODE_NAME=franky -e CEPH_VOLUME_OSDSPEC_AFFINITY=slow -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/run/ceph:z -v /var/log/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d:/var/log/ceph:z -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/selinux:/sys/fs/selinux:ro -v /:/rootfs:rslave -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmpt7wifcjq:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmphx5nhdsi:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de lvm batch --no-auto /dev/sda /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 --block-db-size 290000000000 --yes --no-systemd
"

"
[root@franky ~]# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                 8:0    0   2.7T  0 disk
└─ceph--6c2f1506--e2b6--4c4a--a808--e6b74363010f-osd--block--76e673e9--d0c0--46bb--8c3c--e666ac5cef3b
                  253:2    0   2.7T  0 lvm
sdb                 8:16   0 223.6G  0 disk
├─sdb1              8:17   0  1000M  0 part
│ └─md0             9:0    0   999M  0 raid1 /boot
├─sdb2              8:18   0   512M  0 part
│ └─md126           9:126  0 511.9M  0 raid1 /boot/efi
└─sdb3              8:19   0 222.1G  0 part
  └─md127           9:127  0   222G  0 raid1
    ├─rhel-root   253:0    0   221G  0 lvm   /var/lib/containers/storage/overlay
    │                                        /
    └─rhel-swap   253:1    0     1G  0 lvm
sdc                 8:32   0 223.6G  0 disk
├─sdc1              8:33   0  1000M  0 part
│ └─md0             9:0    0   999M  0 raid1 /boot
├─sdc2              8:34   0   512M  0 part
│ └─md126           9:126  0 511.9M  0 raid1 /boot/efi
└─sdc3              8:35   0 222.1G  0 part
  └─md127           9:127  0   222G  0 raid1
    ├─rhel-root   253:0    0   221G  0 lvm   /var/lib/containers/storage/overlay
    │                                        /
    └─rhel-swap   253:1    0     1G  0 lvm
sdd                 8:48   0   1.8T  0 disk
└─ceph--9b031aa3--1d29--4709--9870--6ac3b48abf74-osd--block--b7330837--b986--46d7--9e28--57db65945098
                  253:4    0   1.8T  0 lvm
sde                 8:64   0   2.7T  0 disk
└─ceph--d400720f--8236--4689--b3ac--0300514ac42c-osd--block--0575ecbc--4acb--4cb1--a9a7--607d63a891b3
                  253:6    0   2.7T  0 lvm
sdf                 8:80   0   1.8T  0 disk
└─ceph--a4a3f8ea--6c2e--4f2d--ac57--fa8e8cfb02b0-osd--block--8a0c4a74--fedc--46b3--b2a8--d60fd18a37c1
                  253:8    0   1.8T  0 lvm
sdg                 8:96   0   2.7T  0 disk
└─ceph--3a664362--832a--4419--99ae--595a2bb86749-osd--block--c96afbde--9c71--408f--a961--68c6d14a701f
                  253:12   0   2.7T  0 lvm
sdh                 8:112  0  16.4T  0 disk
└─ceph--6fa9be6b--485b--4433--8e05--a17a6a9d0b70-osd--block--29747e0e--9c71--44e6--b750--93a7878977ee
                  253:14   0  16.4T  0 lvm
sdi                 8:128  0  16.4T  0 disk
└─ceph--9d1359c4--4af6--489e--974a--c89a5b2160aa-osd--block--618f7582--fce0--41f6--aad8--6d0231ef303a
                  253:16   0  16.4T  0 lvm
sdj                 8:144  0   1.8T  0 disk
└─ceph--5a61c09b--027e--4882--8b93--6688d9e98dfa-osd--block--8e9b21b6--cc39--4c7b--b5f8--9e83e33fa146
                  253:18   0   1.8T  0 lvm
sdk                 8:160  0 447.1G  0 disk
└─ceph--4b4f3bd9--16be--493a--8e35--84643d1b327c-osd--block--14f68253--b370--4300--a319--0c39311a34e1
                  253:10   0 447.1G  0 lvm
sdl                 8:176  0 186.3G  0 disk
└─ceph--fc7e9d84--c650--4a4b--9b53--6a748c9dcad8-osd--block--2b475461--a85d--4e7b--a7e2--8ab1c9d14c6e
                  253:11   0 186.3G  0 lvm
sdm                 8:192  0   3.6T  0 disk
└─ceph--ec427ec1--e621--4981--9a58--d9cdf7a909b5-osd--block--f08c7e71--ddf5--4939--8f0c--42396de2210b
                  253:26   0   3.6T  0 lvm
sdn                 8:208  0   2.7T  0 disk
└─ceph--9ee3a783--1aa1--4520--83b7--d804972bc7b2-osd--block--40cd9045--587a--4831--95f7--607c019ef862
                  253:20   0   2.7T  0 lvm
sdo                 8:224  0   2.7T  0 disk
└─ceph--7807748d--305c--4cdf--9812--0a6005e99579-osd--block--c2087805--441c--422a--b12f--de10a75b7e0b
                  253:22   0   2.7T  0 lvm
sdp                 8:240  0   3.6T  0 disk
└─ceph--d67be0c9--859f--4ac5--8895--18a50fa2a2d7-osd--block--c0ef5f16--4bcd--4390--abf8--260c5913cb14
                  253:24   0   3.6T  0 lvm
sdq                65:0    0   3.6T  0 disk
└─ceph--f8d07270--2bcd--49bc--bd43--ee3f2fbaa5ff-osd--block--8f783ade--6da6--4967--b87c--fb6d1827460f
                  253:28   0   3.6T  0 lvm
sdr                65:16   0  16.4T  0 disk
└─ceph--71ff9863--a1e1--4ca3--ad23--732b207d4ee4-osd--block--a3dbca80--9cf5--4c98--ad9b--b38803230b1f
                  253:30   0  16.4T  0 lvm
sds                65:32   0   3.6T  0 disk
└─ceph--172e2cab--1835--4b7b--a765--3530092e99dd-osd--block--032fc777--69d9--4f39--8fe6--485e0959ce66
                  253:32   0   3.6T  0 lvm
sdt                65:48   0  16.4T  0 disk
└─ceph--8bad9f81--5851--464d--89bc--fa645e05934e-osd--block--ff12d109--dfaa--4608--a5e2--36ec39623f36
                  253:34   0  16.4T  0 lvm
sdu                65:64   0   3.6T  0 disk
└─ceph--1a629cf6--c438--4d97--b90d--7b56032d10d5-osd--block--b3e38c3d--e3c0--49c2--9e96--93f9e4c909d4
                  253:36   0   3.6T  0 lvm
sdv                65:80   0   3.6T  0 disk
sdw                65:96   0   5.5T  0 disk
sdx                65:112  0   3.6T  0 disk
sdy                65:128  0  16.4T  0 disk
nvme2n1           259:2    0   2.9T  0 disk
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--d28b3fa5--5ace--49a1--ad3d--fc4d14f1b8db
│                 253:7    0 270.1G  0 lvm
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--feb3bd0f--858b--45c5--a1ba--c0c77f34dc0d
│                 253:15   0 270.1G  0 lvm
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--1ddcc845--799a--4d0d--96f1--90078e2cf0cf
│                 253:21   0 270.1G  0 lvm
├─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--d4f3c31f--2327--4662--9517--f86dbe35c510
│                 253:27   0 270.1G  0 lvm
└─ceph--9fc0a64d--9ab8--4b12--9a8f--6a48e6c95211-osd--db--eebae904--4ae7--4550--8c3a--0af3ef1fec1c
                  253:29   0 270.1G  0 lvm
nvme0n1           259:3    0   2.9T  0 disk
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--12f47f4e--a197--4ecf--a021--94b135039661
│                 253:3    0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--da108348--fc04--4e89--823d--5ebdf26e0408
│                 253:19   0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--ee274757--51d8--48e3--a41e--b2b321da7170
│                 253:25   0 270.1G  0 lvm
├─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--79ce0245--338a--432c--92dd--1437dcaf3917
│                 253:35   0 270.1G  0 lvm
└─ceph--c2aeb797--fc4a--4054--9d12--7c5550ac1641-osd--db--e4e2b277--03c3--47aa--bf32--12f63faee4e5
                  253:37   0 270.1G  0 lvm
nvme3n1           259:6    0   1.5T  0 disk
├─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--3f4c63c1--e23c--4390--a034--54d4a224b2a2
│                 253:5    0 270.1G  0 lvm
└─ceph--20c0fa86--7668--4139--bf39--f7bb8b2e1623-osd--db--a45b02f5--ad6d--453d--91bc--8a52f1bfa533
                  253:13   0 270.1G  0 lvm
nvme1n1           259:7    0   1.5T  0 disk
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--36f82df5--953b--450a--b9d7--5e2ba334a0e7
│                 253:9    0 270.1G  0 lvm
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--857ec827--ff63--43d4--a4e8--43681ad8229b
│                 253:17   0 270.1G  0 lvm
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--b370a498--d3b9--4ddd--b752--ab95e86bc027
│                 253:23   0 270.1G  0 lvm
├─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--5468cc56--54aa--4965--8a32--cf4d6b29fb3a
│                 253:31   0 270.1G  0 lvm
└─ceph--f1df0c25--920d--4d13--ba48--ad56302a4099-osd--db--6d4941f5--96d6--4e67--81f5--77780e1a3ab0
                  253:33   0 270.1G  0 lvm
[root@franky ~]#
"

Mvh.

Torkil



--
Torkil Svensgaard
Sysadmin
MR-Forskningssektionen, afs. 714
DRCMR, Danish Research Centre for Magnetic Resonance
Hvidovre Hospital
Kettegård Allé 30
DK-2650 Hvidovre
Denmark
Tel: +45 386 22828
E-mail: tor...@drcmr.dk

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

--
Torkil Svensgaard
Sysadmin
MR-Forskningssektionen, afs. 714
DRCMR, Danish Research Centre for Magnetic Resonance
Hvidovre Hospital
Kettegård Allé 30
DK-2650 Hvidovre
Denmark
Tel: +45 386 22828
E-mail: tor...@drcmr.dk
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
