Hi,

I'd like to understand whether the following behaviour is a bug.
I'm running Ceph 16.2.9.

On a new OSD node with 24 HDDs (16 TB each) and 2 SSDs (1.44 TB each), I'd like 
"ceph orch" to place the WAL and DB on the SSD devices.

I use the following service spec:
spec:
  data_devices:
    rotational: 1
    size: '14T:'
  db_devices:
    rotational: 0
    size: '1T:'
  db_slots: 12
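
For reference, I apply the spec and check the result on the node roughly like 
this (the spec file name is just a local placeholder):

  ceph orch apply -i osd_spec.yml
  # then, on the OSD node:
  cephadm shell
  ceph-volume lvm list
  vgs                              # this is where I see ~50% VFree on each ssd VG
  lvs -o lv_name,lv_size,vg_name   # the 60GB block.db LVs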

This results in each OSD getting a 60 GB LV for its WAL/DB, which works out to 
exactly 50% of the VG on each SSD used and 50% free.
I honestly don't know what size to expect, but I would have naively guessed 
around 1.44 TB / 12 slots ≈ 120 GB per DB LV, and landing on exactly 50% of the 
SSD's capacity (half of that) makes me suspect this bug:
https://tracker.ceph.com/issues/54541
(In fact, I had already run into this bug when specifying block_db_size rather 
than db_slots.)
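
For reference, the block_db_size variant I had tried looked roughly like this 
(the size value here is just an example, not the exact figure I used):

spec:
  data_devices:
    rotational: 1
    size: '14T:'
  db_devices:
    rotational: 0
    size: '1T:'
  block_db_size: '120G'   # instead of db_slots: 12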

Questions:
  Am I being bitten by that bug?
  Is there a better approach to my situation in general?
  Are DB sizes still governed by RocksDB tiering? (I thought that this was 
mostly resolved by https://github.com/ceph/ceph/pull/29687 )
  If I provision a 61 GB DB/WAL logical volume, is that effectively a 30 GB 
database plus 30 GB of extra room for compaction?

Thanks,
Patrick
