Perhaps unbalanced OSDs?
Could you send us your osd tree output?
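
For example, a minimal sketch (assuming a standard Ceph admin node):

    # show the CRUSH hierarchy and OSD weights
    ceph osd tree

    # per-OSD usage; a large spread in the %USE column suggests imbalance
    ceph osd df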

- Mehmet 

On 24 March 2018 19:46:44 CET, "da...@visions.se" <da...@visions.se> wrote:
>You have 2 drives at almost 100% utilization, which means they are maxed
>out. So you need more disks or faster drives to fix your I/O issues (SSDs
>for MySQL are a no-brainer, really).
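>For example, something like the following (assuming the standard sysstat
>package) shows which devices are saturated:
>
>    # extended per-device statistics, refreshed every second
>    iostat -x 1
>
>Devices sitting near 100% in the %util column, with high await, are the
>bottleneck.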
>------ Original message ------
>From: Sam Huracan
>Date: Sat 24 March 2018 19:20
>To: c...@elchaka.de
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Fwd: High IOWait Issue
>This is from iostat:
>I'm using Ceph Jewel; there are no HW errors. Ceph health is OK, and we
>have only used 50% of the total volume.
>
>
>2018-03-24 22:20 GMT+07:00  <c...@elchaka.de>:
>I would also check the utilization of your disks with tools like atop.
>Perhaps there is something related in dmesg or similar?
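>For example, a rough sketch (flags may vary by distro):
>
>    # interactive per-disk view; saturated disks stand out on the DSK lines
>    atop 1
>
>    # recent kernel messages that may point at a failing drive
>    dmesg -T | grep -i -E 'error|fail|timeout'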
>
>
>
>- Mehmet   
>
>On 24 March 2018 08:17:44 CET, Sam Huracan
><nowitzki.sa...@gmail.com> wrote:
>
>Hi guys,
>
>We are running a production OpenStack backend on Ceph.
>At present, we are facing an issue with high iowait in VMs: in some MySQL
>VMs, we sometimes see iowait reach abnormally high peaks, which leads to
>an increase in slow queries, even though the load is stable (we test with
>a script that simulates real load). You can see it in this graph:
>https://prnt.sc/ivndni
>
>The MySQL VMs are placed on a Ceph HDD cluster, with 1 SSD journal per
>7 HDDs. In this cluster, iowait on each Ceph host is about 20%:
>https://prnt.sc/ivne08
>
>Can you guys help me find the root cause of this issue, and how to
>eliminate this high iowait?
>Thanks in advance.
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
