Hi,
we recently upgraded from Reef to Squid and are now seeing the orchestrator
place the block.db on the same HDD as the data instead of on the dedicated DB
device.

Our OSD service specs tell the orchestrator to use non-rotational devices for
the block.db and rotational devices for the data; each SSD/NVMe serves as DB
device for multiple HDDs.
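
For reference, how cephadm classifies and lists the devices on a host can be
checked roughly like this (s3db23 is just one of our hosts, taken from the
spec further down):

  # inventory as the orchestrator sees it; TYPE shows the hdd/ssd classification
  ceph orch device ls s3db23 --wide --refresh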

We have now run into this in two different clusters after replacing HDDs, even
though the existing block.db VGs still have free capacity.

Below is the output of lsblk for the affected disk, ceph orch ls --export
(only the OSD services), and vgs.

sds                                                                                                    65:32   0  14.6T  0 disk
├─ceph--92b5842c--8df3--43ef--9814--6707eb970cc6-osd--block--f0fe19ce--7b79--466f--a889--1935aa3e8b85  252:8   0  14.1T  0 lvm
│ └─YbVkPH-f3ZG-IZfE-jdGc-CCf6-597l-uo8J3W                                                             252:59  0  14.1T  0 crypt
└─ceph--92b5842c--8df3--43ef--9814--6707eb970cc6-osd--db--1a1a654e--145f--45b5--a5b0--9cb322db6093     252:26  0   430G  0 lvm
  └─Xpt9TE-3Z40-1lvl-j8iS-OWYj-mQXx-waw2cR                                                             252:77  0   430G  0 crypt
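
To double-check where the block.db of such an OSD actually ended up, the OSD
metadata can be queried roughly like this (<osd-id> is a placeholder for the
OSD on /dev/sds):

  # bluefs_db_rotational shows whether block.db sits on a rotational device
  ceph osd metadata <osd-id> | grep bluefs_db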


---
service_type: osd
service_id: osd_1_ssd
service_name: osd.osd_1_ssd
placement:
  host_pattern: '*'
spec:
  data_devices:
    limit: 2
    rotational: 0
    size: :2TB
    vendor: SAMSUNG
  encrypted: true
  filter_logic: AND
  objectstore: bluestore
---
service_type: osd
service_id: osd_2_hdd_8tb
service_name: osd.osd_2_hdd_8tb
placement:
  host_pattern: '*'
spec:
  block_db_size: 343597383680
  data_devices:
    rotational: 1
    size: :10TB
  db_devices:
    rotational: 0
    size: :2TB
  db_slots: 5
  encrypted: true
  filter_logic: AND
  objectstore: bluestore
---
service_type: osd
service_id: osd_4_hdd_16tb
service_name: osd.osd_4_hdd_16tb
placement:
  hosts:
  - s3db23
spec:
  block_db_size: 461708984320
  data_devices:
    rotational: 1
    size: '10TB:'
  db_devices:
    rotational: 0
    size: 3TB:4TB
  db_slots: 8
  encrypted: true
  filter_logic: AND
  objectstore: bluestore
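
In case it helps to reproduce: a dry run of the exported specs should show
which data/DB devices cephadm would pick (osd_specs.yml is just a placeholder
file containing the specs above):

  # preview the OSD spec matching without deploying anything
  ceph orch apply -i osd_specs.yml --dry-run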

  VG                                        #PV #LV #SN Attr   VSize  VFree
  ceph-0161de84-5042-4925-a959-b9d214852511   1   4   0 wz--n-  3.49t 1.81t
  ceph-02da2287-597a-4619-a3d3-9358cbb60622   1   1   0 wz--n- 14.55t    0
  ceph-05813ea9-d14a-4e6c-81b0-9605f0001766   1   1   0 wz--n- 14.55t    0
  ceph-06d405e7-abfa-4f5d-b4e3-7229bb96f2fd   1   1   0 wz--n- 14.55t    0
  ceph-18ac75a3-d187-4fc2-a8aa-3b6a26b70246   1   1   0 wz--n- 14.55t    0
  ceph-19a0db63-89ed-4f2e-b5ab-961dc0f8d413   1   5   0 wz--n-  3.49t 1.39t
  ceph-2ecb6728-0fc3-4622-acd7-298ec3a02c02   1   1   0 wz--n- 14.55t    0
  ceph-306a63bc-5a8a-44d9-af80-d4cf825ceeca   1   5   0 wz--n-  3.49t 1.39t
  ceph-47a79684-9cba-4e51-b143-c05f3307a876   1   1   0 wz--n- <1.75t    0
  ceph-4d80d916-db9b-46f6-9e34-29ae35879e8e   1   1   0 wz--n- 14.55t    0
  ceph-5fe43992-3918-4a81-bbee-fc08e1329c0b   1   1   0 wz--n- <1.75t    0
  ceph-84fb035b-0c57-4249-98d5-92fdb010867d   1   1   0 wz--n- 14.55t    0
  ceph-92b5842c-8df3-43ef-9814-6707eb970cc6   1   2   0 wz--n- 14.55t    0
  <------ VG on the replaced HDD (/dev/sds) from the lsblk output above; it holds both the block and the db LV
  ceph-96d2979a-ac09-492a-a4c5-466fb8ca44c3   1   1   0 wz--n- 14.55t    0
  ceph-a49a0de2-a9e9-47d4-aff2-e96660c4267a   1   1   0 wz--n- 14.55t    0
  ceph-ac752e7a-db71-4c3b-9158-dcc3b6aebcb7   1   1   0 wz--n- 14.55t    0
  ceph-c6e94f1e-0b48-4966-9e31-a67232733a63   1   1   0 wz--n- 14.55t    0
  ceph-c791562d-69b2-4463-ae53-77b0997c1cd7   1   1   0 wz--n- 14.55t    0
  ceph-c7bbf14b-ac87-47cc-86f9-173a113a9b80   1   1   0 wz--n- 14.55t    0
  ceph-cd38189c-b8c9-40e1-a528-d645838144e4   1   1   0 wz--n- 14.55t    0
  ceph-cf93829f-086b-451a-b5cb-b4ca9462f660   1   1   0 wz--n- 14.55t    0
  ceph-d6dfd5aa-635d-4ba3-8efd-15764f4cdd18   1   1   0 wz--n- 14.55t    0
  ceph-e247cda8-e206-42bc-ad0b-88639944ddf2   1   1   0 wz--n- 14.55t    0
  ceph-e3f5355e-8653-46c4-8ea2-ae632196a062   1   5   0 wz--n-  3.49t 1.39t
  ceph-eee43b8f-6653-4b9d-8547-f40d94927840   1   1   0 wz--n- 14.55t    0
  ceph-f208c77f-9821-4580-ab7b-63093f12a671   1   1   0 wz--n- 14.55t    0
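
For completeness, the VGs above that still have free space are the SSD
block.db VGs; the DB LVs inside one of them can be listed like this (VG name
taken from the vgs output above):

  # ceph-volume tags its LVs (e.g. ceph.type=db), so lv_tags shows what each LV is
  lvs -o lv_name,lv_size,lv_tags ceph-19a0db63-89ed-4f2e-b5ab-961dc0f8d413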