Re: [ceph-users] REQUEST_SLOW across many OSDs at the same time

2019-04-01 Thread mart.v
" " Thanks for this advice. It helped me to identify a subset of devices (only 3 of the whole cluster) where was this problem happening. The SAS adapter (LSI SAS 3008) on my Supermicro board was the issue. There is a RAID mode enabled by default. I have flashed the latest firmware (v

Re: [ceph-users] REQUEST_SLOW across many OSDs at the same time

2019-03-11 Thread mart.v
time) > > Cheers, Massimo > > On Fri, Feb 22, 2019 at 10:28 AM mart.v wrote: >> >> Hello everyone, >> >> I'm experiencing a strange behaviour. My cluster is relatively small (43 OSDs, 11 nodes), running Ceph 12.2.10 (and Proxmox 5). Nodes are connected via 10

Re: [ceph-users] REQUEST_SLOW across many OSDs at the same time

2019-02-25 Thread mart.v
" - As far as I understand the reported 'implicated osds' are only the primary ones. In the log of the osds you should find also the relevant pg number, and with this information you can get all the involved OSDs. This might be useful e.g. to see if a specific OSD node is always involved. This w

Re: [ceph-users] REQUEST_SLOW across many OSDs at the same time

2019-02-25 Thread mart.v
times are different each day, so it is not a periodic task. Martin -- Original e-mail -- From: David Turner To: mart.v Date: 22. 2. 2019 12:23:37 Subject: Re: [ceph-users] REQUEST_SLOW across many OSDs at the same time " Can you correlate the times to scheduled tasks inside o

Re: [ceph-users] REQUEST_SLOW across many OSDs at the same time

2019-02-25 Thread mart.v
and to debug some of these slow requests (to see which events take much time) > > Cheers, Massimo > > On Fri, Feb 22, 2019 at 10:28 AM mart.v wrote: >> >> Hello everyone, >> >> I'm experiencing a strange behaviour. My cluster is relatively small (43 OSDs,

[ceph-users] REQUEST_SLOW across many OSDs at the same time

2019-02-22 Thread mart.v
Hello everyone, I'm experiencing a strange behaviour. My cluster is relatively small (43 OSDs, 11 nodes), running Ceph 12.2.10 (and Proxmox 5). Nodes are connected via a 10 Gbit network (Nexus 6000). The cluster is mixed (SSD and HDD), but with different pools. The described error is only on the SSD par
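The usual first steps when REQUEST_SLOW appears, as discussed later in this thread, can be outlined with the admin-socket commands below; osd.17 is just an example id:

```
ceph health detail                      # which OSDs are currently implicated
ceph daemon osd.17 dump_historic_ops    # slowest recent ops on osd.17, with per-event timings
ceph daemon osd.17 dump_ops_in_flight   # ops currently blocked on osd.17
```

The `dump_historic_ops` output breaks each request into timestamped events, which helps show where the time is actually spent.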

[ceph-users] Ceph Influx Plugin in luminous

2018-11-12 Thread mart.v
Hi, I'm trying to set up an Influx plugin (http://docs.ceph.com/docs/mimic/mgr/influx/). The docs say that it will be available in the Mimic release, but I can see it (and enable it) in current Luminous. It seems that someone else actually used it in Luminous (http://lists.ceph.com/pipermail/ceph-us
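A sketch of how the module can be enabled on Luminous; the key names follow the influx plugin docs, while the host and credentials below are placeholders:

```
ceph mgr module enable influx
# On Luminous the mgr module reads its settings from config-key:
ceph config-key set mgr/influx/hostname influxhost.example.com
ceph config-key set mgr/influx/username admin
ceph config-key set mgr/influx/password secret
```

After setting the keys, restarting the active mgr (or toggling the module) makes it pick up the new configuration.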

[ceph-users] Hybrid pool speed (SSD + SATA HDD)

2018-03-14 Thread mart.v
Hello everyone, I have been thinking about building a hybrid storage pool (inspiration from this article: http://www.root314.com/ceph/2017/04/30/Ceph-hybrid-storage-tiers/). So instead of 3 replicas on SSD, I plan to use 2 SSDs and the third one will be a plain old SATA HDD. I can easily arran
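One way the 2×SSD + 1×HDD placement can be expressed is with a device-class CRUSH rule; this is a sketch assuming Luminous device classes and a single `default` root, with the rule name and id chosen as examples:

```
rule hybrid_ssd_hdd {
    id 5
    type replicated
    min_size 2
    max_size 3
    # First two replicas on hosts backed by SSD OSDs:
    step take default class ssd
    step chooseleaf firstn 2 type host
    step emit
    # Remaining replica(s) on HDD-backed hosts
    # (firstn -2 means "pool size minus 2", i.e. 1 for size=3):
    step take default class hdd
    step chooseleaf firstn -2 type host
    step emit
}
```

Note that with this rule the primary OSD lands on SSD, so reads stay fast while the HDD copy only has to keep up with writes.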