[ceph-users] Re: Ceph RGW performance guidelines

2024-10-21 Thread Anthony D'Atri
>> Not surprising for HDDs. Double your deep-scrub interval.
> Done!

If your PG ratio is low, say <200, bumping pg_num may help as well. Oh yeah, looking up your gist from a prior message, you average around 70 PG replicas per OSD. Aim for 200. Your index pool has way too few PGs.
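
As an illustrative sketch of how that advice might be applied (the pool name default.rgw.buckets.index, the 14-day value, and the pg_num of 128 are assumptions, not something stated in the thread):

    # Roughly double the default 7-day deep-scrub interval (604800 s) to 14 days
    ceph config set osd osd_deep_scrub_interval 1209600

    # Check the current average PG replicas per OSD (the PGS column)
    ceph osd df

    # Raise pg_num on the index pool; pick a power of two and mind the pg_autoscaler
    ceph osd pool set default.rgw.buckets.index pg_num 128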

[ceph-users] Re: Ceph RGW performance guidelines

2024-10-21 Thread Harry Kominos
> Not surprising for HDDs. Double your deep-scrub interval.
Done!

> So you’re relying on the SSD DB device for the index pool? Have you looked at your logs / metrics for those OSDs to see if there is any spillover?
> What type of SSD are you using here? And how many HDD OSDs do you have using…
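
As a rough sketch of how such a spillover check might look (osd.12 is a placeholder ID, and counter names can vary a little between releases):

    # Cluster-wide: spillover surfaces as a BLUEFS_SPILLOVER health warning
    ceph health detail | grep -i spillover

    # Per OSD: compare DB usage against what has spilled onto the slow device
    ceph daemon osd.12 perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'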

[ceph-users] Re: Ceph RGW performance guidelines

2024-10-15 Thread Anthony D'Atri
> On Oct 15, 2024, at 9:28 AM, Harry Kominos wrote:
>
> Hello Anthony and thank you for your response!
>
> I have placed the requested info in a separate gist here:
> https://gist.github.com/hkominos/85dc46f3ce7037ec23ac6e1e2535e885
> 3826 pgs not deep-scrubbed in time
> 1501 pgs not scrubbed…
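
One way to see which PGs are behind (an illustrative sketch; the exact health-detail wording varies a little between Ceph releases):

    # List the individual PGs that are behind on scrubbing / deep scrubbing
    ceph health detail | grep -E 'not (deep-)?scrubbed since'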

[ceph-users] Re: Ceph RGW performance guidelines

2024-10-15 Thread Harry Kominos
Hello Anthony and thank you for your response!

I have placed the requested info in a separate gist here:
https://gist.github.com/hkominos/85dc46f3ce7037ec23ac6e1e2535e885

Every OSD is an HDD, with its corresponding index on a partition of an SSD device. And we are talking about 18 separate dev…

[ceph-users] Re: Ceph RGW performance guidelines

2024-10-15 Thread Anthony D'Atri
> Hello Ceph Community!
>
> I have the following very interesting problem, for which I found no clear guidelines upstream, so I am hoping to get some input from the mailing list.
> I have a 6PB cluster in operation which is currently half full. The cluster has around 1K OSDs, and the RGW data…

[ceph-users] Re: Ceph RGW Performance

2020-09-28 Thread Dylan Griff
Thanks everyone for all of your guidance. To answer all the questions:

> Are the OSD nodes connected with 10Gb as well?
Yes.

> Are you using SSDs for your index pool? How many?
Yes, for a node with 39 HDD OSDs we are using 6 index SSDs.

> How big are your objects?
Most tests run at 64K, but I have…

[ceph-users] Re: Ceph RGW Performance [EXT]

2020-09-28 Thread Daniel Mezentsev
Hi, what is the nbproc setting on the haproxy?

> Hi,
> On 25/09/2020 20:39, Dylan Griff wrote:
> We have 10Gb network to our two RGW nodes behind a single IP on haproxy, and some iperf testing shows I can push that much; latencies look okay. However, when using a small cosbench cluster I am unable to get more than ~250Mb of read speed total…
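
A quick sketch of how one might answer that from the haproxy runtime API (the admin socket path is an assumption; adjust it to match your haproxy.cfg):

    # "show info" reports Nbproc and Nbthread for the running haproxy
    echo "show info" | socat stdio /var/run/haproxy/admin.sock | grep -E '^Nb(proc|thread)'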

[ceph-users] Re: Ceph RGW Performance [EXT]

2020-09-28 Thread Matthew Vernon
Hi,

On 25/09/2020 20:39, Dylan Griff wrote:
> We have 10Gb network to our two RGW nodes behind a single IP on haproxy, and some iperf testing shows I can push that much; latencies look okay. However, when using a small cosbench cluster I am unable to get more than ~250Mb of read speed total. A…

[ceph-users] Re: Ceph RGW Performance

2020-09-25 Thread martin joy
Can you share the object size details? Try to increase gradually to, say, 1 GB and measure.
Thanks

On Sat, 26 Sep, 2020, 1:10 am Dylan Griff wrote:
> Hey folks!
>
> Just shooting this out there in case someone has some advice. We're just setting up RGW object storage for one of our new Ceph clus…
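
A rough sketch of that "increase the object size and measure" idea, using the AWS CLI against an RGW endpoint (the endpoint URL and the "bench" bucket are placeholders, and the bucket is assumed to already exist):

    for size in 64K 1M 16M 256M 1G; do
        truncate -s "$size" /tmp/obj-$size                  # sparse test file of the given size
        /usr/bin/time -f "$size PUT: %e s" aws --endpoint-url http://rgw.example.com:8080 \
            s3 cp /tmp/obj-$size s3://bench/obj-$size
        /usr/bin/time -f "$size GET: %e s" aws --endpoint-url http://rgw.example.com:8080 \
            s3 cp s3://bench/obj-$size /tmp/obj-$size.out
    done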