I hope you ran iostat at a 1-minute interval. Based on your iostat & disk info:


·         avgrq-sz is showing 750.49 & avgqu-sz is showing 17.39.

·         375.245 KB is your average request size (see the worked conversion 
below).

·         That said, your disk is showing a queue of length 17.39. Typically, a 
higher queue length will increase your disk IO wait, whether it's a read or a 
write.
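Since iostat reports avgrq-sz in 512-byte sectors, the conversion to KB is a 
one-liner; a minimal sketch using the numbers from the iostat output quoted 
below:

    # avgrq-sz is in 512-byte sectors; convert to KB
    echo "750.49 * 512 / 1024" | bc -l    # prints ~375.245 (KB per request)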

Hope you have a picture of your IO now & hope this info helps.

>> I tried fio with a 64k block size and various IO depths (1, 2, 4, 8, 16, …, 
>> 128) and I can’t reproduce the problem.
Try approx. 375.245 KB & a queue depth of 32, and see what your iostat looks 
like; if it’s the same, then that’s what your disk can do.
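A minimal sketch of such a run, assuming fio with libaio and a scratch file on 
the XFS mount (the path /mnt/vdb/fio.test, the 10g file size, and the 25% read 
mix, taken from the r/s vs w/s ratio in the iostat output below, are 
assumptions):

    # bs=375k matches the ~375 KB average request size; iodepth=32 per the
    # suggestion above; rwmixread=25 approximates 96 r/s vs 292 w/s
    fio --name=repro --filename=/mnt/vdb/fio.test --size=10g --direct=1 \
        --ioengine=libaio --rw=randrw --rwmixread=25 --bs=375k \
        --iodepth=32 --runtime=60 --time_based --group_reporting

Pointing --filename at a plain local disk instead gives the Ceph RBD 
comparison mentioned next.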

Now, if you want to compare Ceph RBD performance, do the same on a normal block device.

--
Deepak



From: Matteo Dacrema [mailto:mdacr...@enter.eu]
Sent: Tuesday, March 07, 2017 1:17 PM
To: Deepak Naidu
Cc: ceph-users
Subject: Re: [ceph-users] MySQL and ceph volumes

Hi Deepak,

thank you.

Here is an example of iostat output:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.16    0.00    2.64   15.74    0.00   76.45

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s     wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               0.00     0.00    0.00    0.00     0.00      0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
vdb               0.00     1.00   96.00  292.00  4944.00 140652.00   750.49    17.39   43.89   17.79   52.47   2.58 100.00

vdb is the Ceph volume with an XFS filesystem.


Disk /dev/vdb: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders, total 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1               1  4294967295  2147483647+  ee  GPT

Regards
Matteo

On 7 Mar 2017, at 22:08, Deepak Naidu <dna...@nvidia.com> wrote:

My response is without any context to Ceph or any SDS; it is purely about how to 
check for the IO bottleneck. You can then determine whether it’s Ceph, another 
process, or the disk.

>> MySQL can reach only 150 IOPS for both reads and writes, showing 30% IOwait.
Lower IOPS is not an issue in itself, as your block size might be larger; 
whether MySQL is doing larger blocks, I’m not sure. You can check the iostat 
metrics below to see why the IO wait is higher.

*  avgqu-sz (average queue length)                -->  the higher the queue 
length, the more IO wait
*  avgrq-sz (the average request size, in sectors) -->  shows the IO block size 
(check this when MySQL is running). [You need to convert this to KB; it is 
reported in 512-byte sectors, so don’t just use the raw avgrq-sz number. An 
example invocation is shown below.]
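For example, a minimal way to watch these per-device metrics (the 1-second 
interval and the device name vdb are assumptions):

    # -x: extended statistics, -k: throughput in KB; refresh every second
    iostat -xk 1 vdb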


--
Deepak



-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Matteo 
Dacrema
Sent: Tuesday, March 07, 2017 12:52 PM
To: ceph-users
Subject: [ceph-users] MySQL and ceph volumes

Hi All,

I have a Galera cluster running on OpenStack, with data on Ceph volumes capped 
at 1500 IOPS each for reads and writes (3000 total).
I can’t understand why with fio I can reach 1500 IOPS without IOwait, while 
MySQL can reach only 150 IOPS for both reads and writes, showing 30% IOwait.

I tried fio with a 64k block size and various IO depths (1, 2, 4, 8, 16, …, 128) 
and I can’t reproduce the problem.
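For reference, a minimal sketch of that kind of sweep (the scratch-file path, 
random read/write mix, and runtime are assumptions, not the exact commands 
used):

    # sweep iodepth at a fixed 64k block size, as described above
    for qd in 1 2 4 8 16 32 64 128; do
        fio --name=qd${qd} --filename=/mnt/vdb/fio.test --size=10g --direct=1 \
            --ioengine=libaio --rw=randrw --bs=64k --iodepth=${qd} \
            --runtime=30 --time_based --group_reporting
    done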

Can anyone tell me where I’m wrong?

Thank you
Regards
Matteo
