1024 PGs on NVMe.
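A minimal sketch of how the targeted change discussed below can be applied and
verified; the OSD id (osd.12) and the RGW index pool name (the default-zone
default) are assumptions, so substitute your own:

    # PG count of the bucket index pool
    ceph osd pool get default.rgw.buckets.index pg_num

    # Raise osd_memory_target to 10 GiB (in bytes) on a single index OSD
    ceph config set osd.12 osd_memory_target 10737418240

    # Confirm the value stored in the mon config database
    ceph config get osd.12 osd_memory_target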

From: Anthony D'Atri <anthony.da...@gmail.com>
Sent: Friday, February 2, 2024 2:37 PM
To: Cory Snyder <csny...@1111systems.com>
Subject: Re: [ceph-users] OSD read latency grows over time 
 
Thanks.  What type of media are your index OSDs? How many PGs?

> On Feb 2, 2024, at 2:32 PM, Cory Snyder <csny...@1111systems.com> wrote:
> 
> Yes, we changed osd_memory_target to 10 GB on just our index OSDs. These OSDs
> have over 300 GB of lz4-compressed bucket index omap data. Here is a graph
> showing the latencies before/after that single change:
> 
> https://pasteboard.co/IMCUWa1t3Uau.png
> 
> Cory Snyder
> 
> 
> From: Anthony D'Atri <anthony.da...@gmail.com>
> Sent: Friday, February 2, 2024 2:15 PM
> To: Cory Snyder <csny...@1111systems.com>
> Cc: ceph-users <ceph-users@ceph.io>
> Subject: Re: [ceph-users] OSD read latency grows over time 
>  
> You adjusted osd_memory_target?  Higher than the default 4GB?
> 
> 
> 
> Another thing that we've found is that rocksdb can become quite slow if it 
> doesn't have enough memory for internal caches. As our cluster usage has 
> grown, we've needed to increase OSD memory in accordance with bucket index 
> pool usage. On one cluster, we found that increasing OSD memory improved
> rocksdb latencies by over 10x.
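A hedged sketch of how these rocksdb latencies can be observed on a running
OSD (run on the OSD's host; the OSD id is a placeholder):

    # Dump only the rocksdb perf counter section, which includes
    # get_latency, submit_latency, and submit_sync_latency
    ceph daemon osd.12 perf dump rocksdb

    # Per-mempool memory breakdown, useful for checking how much of
    # osd_memory_target is actually reaching the bluestore caches
    ceph daemon osd.12 dump_mempools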

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
