> We have multiple gateways behind haproxy, but it seems like some user can max
> out some value in rgw, which makes requests super slow, like 54 sec for a 7 KB
> object:
>
> 2025-11-13T01:16:07.367+0700 7f9bb27fc640 1 beast: 0x7f9d24402710: 1.1.1.1 - 
> user [13/Nov/2025:01:15:12.683 +0700] "PUT /002b92966f0b HTTP/1.1" 200 7754 - 
> "aws-sdk-java/1.12.201 Linux/5.15.0-143-generic 
> OpenJDK_64-Bit_Server_VM/21.0.9+10-LTS java/21.0.9 scala/2.12.18 
> kotlin/1.3.50 vendor/Eclipse_Adoptium cfg/retry-mode/legacy" - 
> latency=54.683818817s
>
> The server itself didn't spike on network card/CPU/memory, and on the backend
> OSDs I don't see high latency either, so I would be curious which values should
> be tuned to increase performance for an individual gateway.

We don't tune a lot on our RGWs in terms of rgw settings, but for our
~10 OSD hosts (spinning disks with NVMe WAL/DB) we have 3 haproxy LBs
as VMs and 9 RGWs, also as VMs, in front of this ceph cluster, and we
reach some 1.5-2 GB/s ingestion speeds with many threads/clients on an
idle cluster. The only gotcha we once ran into was when the haproxy VMs
mistakenly did not get "same CPU features as the host" but instead some
super conservative lowest-possible x86 compatibility mode, so one could
more or less migrate them from the current hosts to a 386 and back;
then the https decoding in haproxy got slow, but it was very visible as
high CPU load on the proxies.
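
If you want to rule that case out, a quick sanity check (just a sketch;
which flags the guest sees depends on your hypervisor config) is to
compare the CPU flags inside the VM against its host:

    # run on both the haproxy VM and its host; the VM should report
    # roughly the same instruction set extensions as the host
    grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(aes|avx2?)$'

A missing "aes" flag in the guest means TLS falls back to software AES,
which matches the high-CPU symptom we saw.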
I suggest you test as Kirby wrote: run an "rbd bench" from one of the
RGWs, then you know what that instance could write to the cluster.
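
A minimal sketch of such a test (pool and image names here are made
up; use a scratch image you can safely delete):

    # create a throwaway image, write 10G into it, clean up afterwards
    rbd create testpool/rgw-benchtest --size 10G
    rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 10G testpool/rgw-benchtest
    rbd rm testpool/rgw-benchtest

If that number is far above what the same host delivers through S3, the
bottleneck sits in front of RADOS (rgw, haproxy, TLS, network) rather
than on the OSDs.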

Given that it is such a super long time for a tiny object, I would
almost guess some other kind of fault: MTU issues (making TCP lower the
packet size and retry sends, or something), DNS resolving at some end,
or, if it was more of a singular event, the measurement catching exactly
the moment rgw decided to autoshard the bucket because it went from
99999 objects to 100000, so the client had to wait for the reshard to
finish before the write completed.
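
Both are cheap to check (host name below is a placeholder; with the
default 1500 MTU use -s 1472 instead of 8972):

    # MTU: with 9000-byte jumbo frames, 8972 bytes of payload must pass unfragmented
    ping -M do -s 8972 some-rgw-or-osd-host

    # resharding: objects per shard and any reshard currently in flight
    radosgw-admin bucket limit check
    radosgw-admin reshard list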

-- 
May the most significant bit of your life be positive.