It's pretty common to see far more writes than reads if you have lots of idle VMs


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Oct 14, 2019 at 6:34 PM Ingo Schmidt <i.schm...@langeoog.de> wrote:
>
> Great, this helped a lot. Although "ceph iostat" didn't give iostats for 
> single images, only a general overview of IO, I remembered the new Nautilus 
> RBD performance monitoring.
>
> https://ceph.com/rbd/new-in-nautilus-rbd-performance-monitoring/
>
> With a "simple"
> >rbd perf image iotop
> I was able to see that the writes indeed come from the log server and the 
> Zabbix monitoring server. I didn't expect them to cause that much 
> I/O... unbelievable...
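A minimal sketch of the commands referenced above, assuming a Nautilus (or later) cluster with the mgr "rbd_support" module enabled (it is on by default); the pool name "rbd" is an example, not taken from this thread:

```shell
# Top-like live view of per-image IOPS and throughput across monitored pools
rbd perf image iotop

# One pool's per-image stats, refreshed periodically (pool "rbd" is an example)
rbd perf image iostat rbd
```

Note that these counters are sampled by the manager, so images only show up after a short warm-up period once the commands start running.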
>
> ----- Original Message -----
> From: "Ashley Merrick" <singap...@amerrick.co.uk>
> To: "i schmidt" <i.schm...@langeoog.de>
> CC: "ceph-users" <ceph-users@ceph.io>
> Sent: Monday, 14 October 2019 15:20:46
> Subject: Re: [ceph-users] Constant write load on 4 node ceph cluster
>
> Is the storage being used for the whole VM disk?
>
> If so, have you checked that none of your software is writing constant logs, 
> or doing anything else that could continuously write to disk?
>
> If you're running a new version you can use 
> https://docs.ceph.com/docs/mimic/mgr/iostat/ to locate the exact RBD image.
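A quick sketch of the iostat mgr module mentioned above; as noted later in the thread, it reports cluster-wide IO rather than per-image stats:

```shell
# Enable the iostat mgr module if it isn't already on
ceph mgr module enable iostat

# Print cluster-wide read/write throughput and IOPS, refreshed each period
# (the -p/--period flag is described in the linked docs)
ceph iostat -p 5
```

For narrowing the load down to a single RBD image, the `rbd perf image iotop` command discussed above is the better tool on Nautilus and later.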
>
>
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
