[ceph-users] Re: About ceph osd slow ops

2023-12-01 Thread Josh Baergen
Given that this is S3, are the slow ops on index or data OSDs? (You mentioned HDD, but I don't want to assume that means the OSD you mentioned is a data OSD.) Josh
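
For context, one way to answer that question is to list the PGs mapped to the OSD and match their pool IDs to pool names; a minimal sketch with the standard Ceph CLI, using osd.10 from the health message later in the thread as an example:

    # List PGs hosted on the OSD; the number before the dot in each
    # PG ID is the pool ID.
    ceph pg ls-by-osd osd.10

    # Map pool IDs to pool names; an RGW index pool is typically named
    # something like default.rgw.buckets.index, and a data pool
    # something like default.rgw.buckets.data.
    ceph osd pool ls detail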

[ceph-users] Re: About ceph osd slow ops

2023-12-01 Thread VÔ VI
Hi Stefan, I am running replication x3 with host as the failure domain, and the pool's min_size is set to 1. Because my cluster serves real-time S3 traffic and can't stop or block IO, data may be lost but IO must always be available. I hope my cluster can keep running with two nodes unavailable. After that, two nodes went down at
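
For reference, min_size on a replicated pool is the number of replicas that must be up before a PG accepts IO; a minimal sketch of inspecting and changing it with the standard Ceph CLI (the pool name here is hypothetical):

    # Show size/min_size for every pool
    ceph osd pool ls detail

    # Allow IO with only one replica up (trades durability for
    # availability, as described above)
    ceph osd pool set default.rgw.buckets.data min_size 1

With size 3 and min_size 1, PGs keep serving IO even with two hosts down, but a write acknowledged by a single surviving replica is lost if that OSD also fails before recovery completes.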

[ceph-users] Re: About ceph osd slow ops

2023-12-01 Thread Stefan Kooman
On 01-12-2023 08:45, VÔ VI wrote: Hi community, my cluster is running with 10 nodes and 2 nodes went down; sometimes the log shows slow ops. What is the root cause? My OSDs are HDD, with a 500GB SSD per OSD for block.db and WAL. Health check update: 13 slow ops, oldest one blocked for 167 sec, osd.10
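
When health checks report slow ops on a specific OSD, the blocked operations can be inspected through that OSD's admin socket; a sketch, assuming shell access to the host running osd.10:

    # Operations currently in flight on the OSD, including how long
    # each has been queued and at which stage it is waiting
    ceph daemon osd.10 dump_ops_in_flight

    # Recently completed operations and their per-stage latencies,
    # useful for spotting where slow ops spent their time
    ceph daemon osd.10 dump_historic_ops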