Sure, thanks!

But I think a client like qemu that uses librbd will also accumulate these
perf statistics. Is there any way we could dump the perf statistics on the
client side, e.g. for qemu?
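
For example, what I have in mind is something like the following sketch,
assuming librbd clients also expose an admin socket when one is configured
in ceph.conf (the socket path and pid below are made-up placeholders, not
something I have verified for qemu):

    [client]
        admin socket = /var/run/ceph/$cluster-$name.$pid.asok

    $ sudo ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf dump objecter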

Sorry if there is already documentation about this on the internet that I
missed when searching.

Thanks!

------------------
hzwulibin
2015-10-29

-------------------------------------------------------------
From: Sage Weil <s...@newdream.net>
Date: 2015-10-29 08:07
To: Libin Wu
Cc: ceph-devel, ceph-users
Subject: Re: values of "ceph daemon osd.x perf dump objecter" are zero

The Objecter is the client side, but you're dumping stats on the osd.  The 
only time the osd uses it as a client is with cache tiering.
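
For example, the osd's Objecter would only see traffic with something like
a cache tier in place; a sketch, with placeholder pool names:

    $ ceph osd tier add base-pool cache-pool
    $ ceph osd tier cache-mode cache-pool writeback
    $ ceph osd tier set-overlay base-pool cache-pool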

sage

On Wed, 28 Oct 2015, Libin Wu wrote:

> Hi, all
> 
> As I understand it, the command "ceph daemon osd.x perf dump objecter" should
> output the perf data of osdc (librados). But when I use this command, why are
> all the values zero except map_epoch and map_inc? Below is the result (a fio
> test with the rbd ioengine was running on the cluster; the job file is shown
> after the dump):
> 
> $ sudo ceph daemon osd.10 perf dump objecter
> 
> {
>     "objecter": {
>         "op_active": 0,
>         "op_laggy": 0,
>         "op_send": 0,
>         "op_send_bytes": 0,
>         "op_resend": 0,
>         "op_ack": 0,
>         "op_commit": 0,
>         "op": 0,
>         "op_r": 0,
>         "op_w": 0,
>         "op_rmw": 0,
>         "op_pg": 0,
>         "osdop_stat": 0,
>         "osdop_create": 0,
>         "osdop_read": 0,
>         "osdop_write": 0,
>         "osdop_writefull": 0,
>         "osdop_append": 0,
>         "osdop_zero": 0,
>         "osdop_truncate": 0,
>         "osdop_delete": 0,
>         "osdop_mapext": 0,
>         "osdop_sparse_read": 0,
>         "osdop_clonerange": 0,
>         "osdop_getxattr": 0,
>         "osdop_setxattr": 0,
>         "osdop_cmpxattr": 0,
>         "osdop_rmxattr": 0,
>         "osdop_resetxattrs": 0,
>         "osdop_tmap_up": 0,
>         "osdop_tmap_put": 0,
>         "osdop_tmap_get": 0,
>         "osdop_call": 0,
>         "osdop_watch": 0,
>         "osdop_notify": 0,
>         "osdop_src_cmpxattr": 0,
>         "osdop_pgls": 0,
>         "osdop_pgls_filter": 0,
>         "osdop_other": 0,
>         "linger_active": 0,
>         "linger_send": 0,
>         "linger_resend": 0,
>         "linger_ping": 0,
>         "poolop_active": 0,
>         "poolop_send": 0,
>         "poolop_resend": 0,
>         "poolstat_active": 0,
>         "poolstat_send": 0,
>         "poolstat_resend": 0,
>         "statfs_active": 0,
>         "statfs_send": 0,
>         "statfs_resend": 0,
>         "command_active": 0,
>         "command_send": 0,
>         "command_resend": 0,
>         "map_epoch": 2180,
>         "map_full": 0,
>         "map_inc": 83,
>         "osd_sessions": 0,
>         "osd_session_open": 0,
>         "osd_session_close": 0,
>         "osd_laggy": 0
>     }
> }
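> 
> For reference, the fio job was along these lines (a sketch; the pool and
> image names here are placeholders, not my actual ones):
> 
>     [rbd-test]
>     ioengine=rbd
>     clientname=admin
>     pool=rbd
>     rbdname=test-image
>     rw=randwrite
>     bs=4k
>     iodepth=32
>     runtime=60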
> 
> Anyone could tell why?
> 
> Thanks!
