[ceph-users] Re: Ceph RGW performance guidelines

2024-10-15 Thread Harry Kominos
Hello Anthony and thank you for your response! I have placed the requested info in a separate gist here: https://gist.github.com/hkominos/85dc46f3ce7037ec23ac6e1e2535e885 Every OSD is an HDD, with its corresponding index on a partition of an SSD device. And we are talking about 18 separate devices
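
The layout described above (HDD data OSDs whose index/DB partitions sit on shared SSDs) can be made concrete with a small sketch. The counts below are illustrative assumptions, not figures taken from the gist, and "index" here could mean either RGW bucket-index OSDs or BlueStore block.db partitions; either way the point is the fan-out of HDD OSDs per shared SSD.

    # Sketch only: all numbers are assumptions for illustration.
    def osds_per_ssd(num_hdd_osds: int, num_ssds: int) -> float:
        """Average number of HDD OSDs whose index/DB partition shares one SSD."""
        return num_hdd_osds / num_ssds

    if __name__ == "__main__":
        hdd_osds_per_host = 18   # assumption: ties to the "18 separate devices" above
        ssds_per_host = 2        # assumption: purely illustrative
        fanout = osds_per_ssd(hdd_osds_per_host, ssds_per_host)
        # A single SSD failure takes its dependent HDD OSDs down with it,
        # so this fan-out is also the blast radius of that SSD.
        print(f"{fanout:.0f} HDD OSDs depend on each SSD")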

[ceph-users] Ceph RGW performance guidelines

2024-10-15 Thread Harry Kominos
Hello Ceph Community! I have the following very interesting problem, for which I found no clear guidelines upstream, so I am hoping to get some input from the mailing list. I have a 6PB cluster in operation which is currently half full. The cluster has around 1K OSDs, and the RGW data pool has 4096 PGs
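
For context, here is a minimal sketch of the usual PG-per-OSD arithmetic behind such sizing questions, using the figures quoted above (~1000 OSDs, pg_num = 4096 on the RGW data pool). The replication factor is an assumption; the pool could just as well be erasure-coded.

    # Sketch only: pool_size = 3 is an assumption, not stated in the post.
    def pg_replicas_per_osd(pg_num: int, pool_size: int, num_osds: int) -> float:
        """Average number of PG replicas from one pool landing on each OSD."""
        return pg_num * pool_size / num_osds

    if __name__ == "__main__":
        num_osds = 1000   # "around 1K OSDs"
        pg_num = 4096     # RGW data pool
        pool_size = 3     # assumed replicated size 3
        per_osd = pg_replicas_per_osd(pg_num, pool_size, num_osds)
        print(f"~{per_osd:.1f} PG replicas per OSD from the data pool alone")
        # The common rule of thumb targets roughly 100 PGs per OSD across all
        # pools, so if this pool holds most of the data, a higher pg_num spreads
        # its objects and I/O across more placement groups and hence more OSDs.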

[ceph-users] Re: Ceph RGW performance guidelines

2024-10-21 Thread Harry Kominos
load on more devices, but my knowledge of Ceph internals is nearly zero. Regards, Harry On Tue, Oct 15, 2024 at 4:26 PM Anthony D'Atri wrote: