Hello,

We have deployed a Ceph cluster and are trying to debug a massive drop in
performance between the RADOS layer and the RGW layer.

## Cluster config
4 OSD nodes (12 drives each, NVMe journals, 1 SSD drive), 40GbE NIC
2 RGW nodes (DNS round-robin load balancing), 40GbE NIC
3 MON nodes, 1GbE NIC

## Pool configuration
RGW data pool - replicated 3x, 4M stripe (HDD)
RGW metadata pool - replicated 3x (SSD)
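
For reference, this is roughly how we double-check the pool settings from the CLI. The pool names below are the stock RGW defaults and are an assumption; yours may differ depending on zone configuration:

```python
# Sketch: confirm replication size and CRUSH rule for the RGW pools.
# Pool names are the stock RGW defaults and are an assumption here.
# (On pre-Luminous releases the property is "crush_ruleset", not "crush_rule".)
import subprocess

for pool in ("default.rgw.buckets.data", "default.rgw.buckets.index"):
    for prop in ("size", "crush_rule"):
        out = subprocess.check_output(
            ["ceph", "osd", "pool", "get", pool, prop])
        print(f"{pool}: {out.decode().strip()}")
```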

## Benchmarks
4K read performance using rados bench: ~48,000 IOPS
4K read performance via the RGW S3 interface: ~130 IOPS
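
One thing we want to rule out first is benchmark concurrency: rados bench keeps 16 ops in flight by default, so at roughly 8 ms per S3 request a single-threaded client would top out right around 130 IOPS even with a healthy cluster. A minimal sketch of the kind of probe that would check this, assuming boto3 and a bucket pre-filled with 4K objects (endpoint, bucket name, key names, and credentials below are all placeholders):

```python
# Hypothetical 4K GET probe; endpoint, bucket, keys, and credentials are
# placeholders. Assumes the bucket holds 4K objects named obj-0 .. obj-999.
import time
import concurrent.futures

import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.internal",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
    config=Config(max_pool_connections=64),      # match the widest test below
)

def one_get(key):
    t0 = time.monotonic()
    resp = s3.get_object(Bucket="bench", Key=key)
    resp["Body"].read()  # drain the 4K body so the request fully completes
    return time.monotonic() - t0

# rados bench keeps 16 ops in flight by default, so compare like for like.
for workers in (1, 16, 64):
    keys = [f"obj-{i % 1000}" for i in range(2000)]
    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as ex:
        lats = sorted(ex.map(one_get, keys))
    elapsed = time.monotonic() - start
    print(f"{workers:>3} workers: {len(keys) / elapsed:7.0f} ops/s, "
          f"p50 {lats[len(lats) // 2] * 1000:6.1f} ms, "
          f"p99 {lats[int(len(lats) * 0.99)] * 1000:6.1f} ms")
```

If per-request latency stays flat while IOPS scale with the worker count, the gap is mostly a concurrency artifact; if p50 latency is high even at 1 worker, something in the RGW path itself is slow.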

We are really trying to understand how to debug this issue. None of the nodes
ever breaks 15% CPU utilization and there is plenty of RAM. The one
pathological issue in our cluster is that the MON nodes are currently VMs
sitting behind a single 1GbE NIC. (We are in the process of moving them,
but are unsure whether that will fix the issue.)

What metrics should we be looking at to debug the RGW layer? Where do we
need to look?
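
For context, the kind of sampling we can do ourselves is reading the RGW perf counters off the admin socket. A rough sketch is below; the socket path and counter names are assumptions for a stock deployment, and `ceph daemon <asok> perf schema` lists what a given release actually exposes:

```python
# Sketch: sample RGW perf counters from the admin socket twice and print
# request rate plus average GET time-to-first-byte over the interval.
# Socket path and counter names are assumptions; verify with `perf schema`.
import json
import subprocess
import time

ASOK = "/var/run/ceph/ceph-client.rgw.gateway1.asok"  # placeholder path
INTERVAL = 10  # seconds between samples

def rgw_counters():
    out = subprocess.check_output(["ceph", "daemon", ASOK, "perf", "dump"])
    return json.loads(out)["rgw"]

a = rgw_counters()
time.sleep(INTERVAL)
b = rgw_counters()

reqs = (b["req"] - a["req"]) / INTERVAL
# get_initial_lat is a {"avgcount", "sum"} pair: seconds to first byte on GETs
d_sum = b["get_initial_lat"]["sum"] - a["get_initial_lat"]["sum"]
d_cnt = b["get_initial_lat"]["avgcount"] - a["get_initial_lat"]["avgcount"]

print(f"{reqs:.0f} req/s; queue len {b['qlen']}, active {b['qactive']}")
if d_cnt:
    print(f"avg GET time-to-first-byte: {d_sum / d_cnt * 1000:.1f} ms")
```

Beyond that we are not sure which counters actually matter for a read-latency problem, which is really the heart of the question.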

---

Ravi Patel, PhD
Machine Learning Systems Lead
Email: r...@kheironmed.com
