Great, this helped a lot. Although "ceph iostat" didn't give I/O stats for 
individual images, just a general overview of cluster I/O, I remembered the new 
Nautilus RBD performance monitoring.

https://ceph.com/rbd/new-in-nautilus-rbd-performance-monitoring/

With a "simple"
>rbd perf image iotop
i was able to see that the writes indeed are from the Log Server and the Zabbix 
Monitoring Server. I didn't expect that it would cause that much I/O... 
unbelieveable...
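
For the archives, roughly what that looks like (a sketch assuming a pool named 
"rbd"; the --sort-by and --iterations flags are as I understand them from the 
Nautilus docs):

>rbd perf image iotop --pool rbd
>rbd perf image iostat --pool rbd --sort-by write_ops --iterations 1

The first gives the interactive, top-like per-image view; the second prints a 
single sample sorted by write ops and exits, which is handier for scripting. 
Both rely on the mgr "rbd_support" module, which should be on by default in 
Nautilus.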

----- Original Message -----
From: "Ashley Merrick" <singap...@amerrick.co.uk>
To: "i schmidt" <i.schm...@langeoog.de>
CC: "ceph-users" <ceph-users@ceph.io>
Sent: Monday, 14 October 2019 15:20:46
Subject: Re: [ceph-users] Constant write load on 4 node ceph cluster

Is the storage being used for the whole VM disk? 

If so, have you checked that none of your software is writing constant logs, or 
anything else that could continuously write to disk? 

If you're running a new enough version, you can use 
https://docs.ceph.com/docs/mimic/mgr/iostat/ to locate the exact RBD image. 
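
(For the archives: per the linked docs, the module is enabled and run like 
this; -p changes the default 1-second sampling period:)

>ceph mgr module enable iostat
>ceph iostat -p 5

Note that, as mentioned above, this shows cluster-wide totals only, not 
per-image numbers.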



