Hi,
On 3/11/25 at 11:33, Frédéric Nass wrote:
$ cephadm shell --name osd.OSD_ID --fsid $(ceph fsid) ceph-bluestore-tool \
    --path /var/lib/ceph/osd/ceph-OSD_ID \
    --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard
As a consequence, the 'Object' key/value pairs were st
Hi Robert,
Thanks for pointing that out. This issue stems from version differences between
our cluster environments:
Our current Pacific cluster (migrating to Reef next week) uses the default
configuration:
bluestore_rocksdb_cfs = 'm(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P'
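For what it's worth, the value an OSD actually runs with can be double-checked with something like the following (a quick sketch; OSD_ID is a placeholder):

$ ceph config get osd bluestore_rocksdb_cfs              # default / config-db value
$ ceph config show osd.OSD_ID bluestore_rocksdb_cfs      # what a running OSD reports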
Hi,
For the record, we have identified the root cause of the overspilling issue.
Previously, an ambiguity in the RocksDB resharding documentation led us to
reshard our RocksDB databases using a lowercase 'o' instead of an uppercase 'O'
in the command:
$ cephadm shell --name osd.OSD_ID --fsid $
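For what it's worth, the sharding that actually got applied can be read back from an OSD (with the OSD stopped, same as for the reshard itself) using the show-sharding subcommand, which makes a stray lowercase 'o' easy to spot. A rough sketch, with OSD_ID as a placeholder:

$ cephadm shell --name osd.OSD_ID --fsid $(ceph fsid) ceph-bluestore-tool \
    --path /var/lib/ceph/osd/ceph-OSD_ID show-sharding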
Yep, we've been using RocksDB compression with Pacific for a few months now. It
helped a lot.
Since we're talking overspilling... Despite using
bluestore_volume_selection_policy=use_some_extra with resharded RocksDB
databases, we can still observe many OSDs overspilling from time to time
(approximatel
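In case it's useful for comparing notes, the affected OSDs can be spotted with something like the following (a sketch; OSD_ID is a placeholder and the counters live under the bluefs section of the perf dump):

$ ceph health detail | grep -i spillover
$ ceph daemon osd.OSD_ID perf dump | grep -E '"db_used_bytes"|"slow_used_bytes"'   # on the OSD's host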
Yes, it improves the dynamic where only ~3, 30, 300, etc. GB of DB space can be
used, and thus mitigates spillover. Previously a, say, 29GB DB
device/partition would be like 85% unused.
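For context on where those round numbers come from (from memory, so treat it as a sketch): with the level sizing Ceph's RocksDB uses by default, roughly 256 MB for the first level and a 10x multiplier per level, the levels come out at about 0.25, 2.5, 25 and 250 GB, and under the old "a whole level must fit" behaviour the usable DB capacities were roughly the cumulative sums:

    0.25 + 2.5            ≈   3 GB
    0.25 + 2.5 + 25       ≈  30 GB
    0.25 + 2.5 + 25 + 250 ≈ 280 GB  (~300 GB with compaction headroom)

Anything between those steps was mostly dead space, hence a ~29GB device sitting largely idle.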
With recent releases one can also turn on DB compression, which should have a
similar benefit.
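If someone wants to try it, it should be roughly a one-liner, though I'd verify the option name (bluestore_rocksdb_options_annex) against your release, and existing data only shrinks as compaction rewrites it. A sketch, not something validated here:

$ ceph config set osd bluestore_rocksdb_options_annex 'compression=kLZ4Compression'
# restart the OSDs afterwards so RocksDB picks up the new options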
Hi Anthony,
Did the RocksDB sharding end up improving the overspilling situation related to
the level thresholds? I had only anticipated that it would reduce the impact of
compaction.
We resharded our OSDs' RocksDBs a long time ago (after upgrading to Pacific
IIRC) and I think we could still
Yes, that is correct.
On Tue, Nov 12, 2024 at 8:51 PM Frédéric Nass wrote:
>
> Hello Alexander,
>
> Thank you for clarifying this point. The documentation was not very clear
> about the 'improvements'.
>
> Does that mean that in the latest releases overspilling no longer occurs
> between the tw
Hello Alexander,
Thank you for clarifying this point. The documentation was not very clear about
the 'improvements'.
Does that mean that in the latest releases overspilling no longer occurs
between the two thresholds of 30GB and 300GB? Meaning block.db can be 80GB in
size without overspilling,
Hello Frédéric,
The advice regarding 30/300 GB DB sizes is no longer valid. Since Ceph
15.2.8, due to the new default (bluestore_volume_selection_policy =
use_some_extra), it no longer wastes the extra capacity of the DB
device.
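If you want to confirm what a given OSD is actually running with (e.g. in case an older value is still pinned in ceph.conf or the config database), something like this should show it (OSD_ID is a placeholder):

$ ceph config show osd.OSD_ID bluestore_volume_selection_policy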
On Tue, Nov 12, 2024 at 5:52 PM Frédéric Nass wrote:
On 2024/11/12 04:54, Alwin Antreich wrote:
Hi Roland,
On Mon, Nov 11, 2024, 20:16 Roland Giesler wrote:
I have Ceph 17.2.6 on a Proxmox cluster and want to replace some SSDs
that are end of life. I have some spinners that have their journals on
SSD. Each spinner has a 50GB SSD LVM partition
- On 12 Nov 24, at 8:51, Roland Giesler rol...@giesler.za.net wrote:
> On 2024/11/12 04:54, Alwin Antreich wrote:
>> Hi Roland,
>>
>> On Mon, Nov 11, 2024, 20:16 Roland Giesler wrote:
>>
>>> I have Ceph 17.2.6 on a Proxmox cluster and want to replace some SSDs
>>> that are end of life. I
Hi Roland,
On Mon, Nov 11, 2024, 20:16 Roland Giesler wrote:
> I have Ceph 17.2.6 on a Proxmox cluster and want to replace some SSDs
> that are end of life. I have some spinners that have their journals on
> SSD. Each spinner has a 50GB SSD LVM partition and I want to move those
> each to new c
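In case it helps: on a release this recent, the DB/WAL can usually be moved onto a freshly created LV on the new SSD with ceph-volume's migrate subcommand. A rough sketch only (VG/LV names and the OSD id/fsid are placeholders, the OSD must be stopped first, and I'd try it on a single OSD before touching the rest):

$ systemctl stop ceph-osd@OSD_ID
$ ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_FSID --from db --target new_vg/new_db_lv
$ systemctl start ceph-osd@OSD_ID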