> > Not surprising for HDDs. Double your deep-scrub interval.
>
> Done!
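For the archives, a minimal sketch of what that looks like on releases with
the central config database (osd_deep_scrub_interval defaults to 604800
seconds, i.e. one week):

    # double the deep-scrub interval from one week to two
    ceph config set osd osd_deep_scrub_interval 1209600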
If your PG ratio is low, say under 200, bumping pg_num may help as well. Oh
yeah: looking up the gist from your prior message, you average around 70 PG
replicas per OSD; aim for 200.
Your index pool has far too few PGs.
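A minimal sketch of checking and bumping this; the pool name below is the
stock RGW index pool and is an assumption, so substitute your own:

    # the PGS column shows PG replicas per OSD; aim for ~200
    ceph osd df
    # raise pg_num on the index pool (power of two; on Nautilus and later,
    # pgp_num is back-filled gradually for you)
    ceph osd pool set default.rgw.buckets.index pg_num 256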
> So you’re relying on the SSD DB device for the index pool? Have you
> looked at your logs / metrics for those OSDs to see if there is any
> spillover?
> What type of SSD are you using here? And how many HDD OSDs do you have
> using each one?
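A quick way to check, sketched here (osd.0 is just an example id, and the
exact BlueFS counter names vary a little by release):

    # spillover surfaces as a BLUEFS_SPILLOVER health warning
    ceph health detail | grep -i spillover
    # or inspect the BlueFS counters on the OSD host; non-zero
    # slow_used_bytes means the DB has spilled onto the HDD
    ceph daemon osd.0 perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'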
> On Oct 15, 2024, at 9:28 AM, Harry Kominos wrote:
>
> Hello Anthony and thank you for your response!
>
> I have placed the requested info in a separate gist here:
> https://gist.github.com/hkominos/85dc46f3ce7037ec23ac6e1e2535e885
> 3826 pgs not deep-scrubbed in time
> 1501 pgs not scrubbed in time
Hello Anthony and thank you for your response!
I have placed the requested info in a separate gist here:
https://gist.github.com/hkominos/85dc46f3ce7037ec23ac6e1e2535e885
Every OSD is an HDD, with its corresponding index on a partition of an SSD
device. And we are talking about 18 separate devices.
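For context, a sketch of how an OSD laid out like this is typically created
with ceph-volume (the device names here are hypothetical):

    # HDD as the data device, SSD partition for the RocksDB DB/WAL
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1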
> Hello Ceph Community!
>
> I have the following very interesting problem, for which I found no clear
> guidelines upstream so I am hoping to get some input from the mailing list.
> I have a 6PB cluster in operation which is currently half full. The cluster
> has around 1K OSDs, and the RGW data
Thanks, everyone, for all of your guidance. To answer all the questions:
> Are the OSD nodes connected with 10Gb as well?
Yes
> Are you using SSDs for your index pool? How many?
Yes, for a node with 39 HDD OSDs we are using 6 Index SSDs
> How big are your objects?
Most tests run at 64K, but I have
Hi,
What is the nbproc setting on the haproxy?
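For reference, a minimal sketch of the relevant haproxy global settings (the
counts are placeholders; size them to your cores):

    global
        # haproxy 1.8+: prefer one process with multiple threads
        nbthread 8
        cpu-map auto:1/1-8 0-7
        # pre-1.8, the equivalent was multiple processes:
        # nbproc 4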
Hi,
On 25/09/2020 20:39, Dylan Griff wrote:
> We have 10Gb network to our two RGW nodes behind a single IP on
> haproxy, and some iperf testing shows I can push that much; latencies
> look okay. However, when using a small cosbench cluster I am unable to
> get more than ~250Mb of read speed total.
Can you share the object size details? Try increasing the object size
gradually, to say 1 GB, and measure.
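Rough arithmetic on why object size matters here, assuming the ~64K objects
mentioned elsewhere in the thread and reading "250Mb" as megabits: 250
Mbit/s is roughly 31 MiB/s, which at 64 KiB per object is only ~500
objects/s. A test like that is bound by per-request latency and worker
concurrency, not by the 10Gb link, so larger objects and/or more concurrent
readers should move the number.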
Thanks
On Sat, 26 Sep 2020, 1:10 am Dylan Griff wrote:
> Hey folks!
>
> Just shooting this out there in case someone has some advice. We're
> just setting up RGW object storage for one of our new Ceph clusters