Hi all,
I am testing RBD performance. Right now there is only one VM using RBD as
its disk, and inside it fio is doing r/w.
The big difference from before is that I set a large iodepth instead of iodepth=1.
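For context, the workload is roughly the following fio job. This is only a sketch: the device path /dev/vdb, bs=4k, and iodepth=32 are illustrative values, not necessarily the exact settings used.

```ini
; illustrative fio job: random writes against the VM's RBD-backed disk
; at a high queue depth (contrast with a baseline run at iodepth=1)
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[rbd-test]
filename=/dev/vdb   ; assumed device name inside the VM
rw=randwrite
bs=4k               ; assumed block size
iodepth=32          ; the "big iodepth" under test
```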
 
What do you think, which part is using up the CPU? I want to find the root
cause.
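One way to narrow this down is to look at per-thread CPU usage inside the ceph-osd process; the thread names (e.g. ones belonging to the op or filestore thread pools) hint at which part is busy. A minimal sketch, assuming PID 4312 from the top output below (substitute your own):

```shell
# ceph-osd PID taken from the top output below; replace with yours
OSD_PID=4312

# list the process's threads, busiest first, showing thread id,
# thread name, and per-thread CPU percentage
ps -L -p "$OSD_PID" -o tid,comm,pcpu --sort=-pcpu | head -15
```

If perf is available, `perf top -p 4312` gives a function-level view of where the CPU time goes.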
 
 
---default options----
  "osd_op_threads": "2",
  "osd_disk_threads": "1",
  "osd_recovery_threads": "1",
  "filestore_op_threads": "2",
 
 
Thanks
 
 
 
top - 19:50:08 up 1 day, 10:26,  2 users,  load average: 1.55, 0.97, 0.81
Tasks:  97 total,   1 running,  96 sleeping,   0 stopped,   0 zombie
Cpu(s): 37.6%us, 14.2%sy,  0.0%ni, 37.0%id,  9.4%wa,  0.0%hi,  1.3%si,  0.5%st
Mem:   1922540k total,  1820196k used,   102344k free,    23100k buffers
Swap:  1048568k total,    91724k used,   956844k free,  1052292k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4312 root      20   0 1100m 337m 5192 S 107.3 18.0  88:33.27 ceph-osd
 1704 root      20   0  514m 272m 3648 S  0.7 14.5   3:27.19 ceph-mon
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
