Hi Yehuda,

Here's my ceph.conf:
root@p01:/tmp# cat /etc/ceph/ceph.conf
[global]
fsid = 6e05675c-f545-4d88-9784-ea56ceda750e
mon_initial_members = s01, s02, s03
mon_host = 192.168.2.61,192.168.2.62,192.168.2.63
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true

[client.radosgw.gateway]
host = p01
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_socket_path = /tmp/radosgw.sock
log_file = /var/log/ceph/radosgw.log
rgw_thread_pool_size = 200

Per my conf, /tmp/radosgw.sock was created when the radosgw service started, so I tried to show the running config with:

root@p01:/tmp# ceph --admin-daemon /tmp/radosgw.sock config show
read only got 0 bytes of 4 expected for response length; invalid command?

Is this a bug, or a mistake on my end?

root@p01:/tmp# radosgw-admin -v
ceph version 0.61.8 (a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b)

Appreciated ~

+Hugo Kuo+
(+886) 935004793


2013/9/11 Yehuda Sadeh <yeh...@inktank.com>

> On Wed, Sep 11, 2013 at 7:57 AM, Kuo Hugo <tonyt...@gmail.com> wrote:
> >
> > Hi Yehuda,
> >
> > I tried it ... a question about modifying the param:
> > how do I make it take effect in RadosGW? Is it by restarting radosgw?
> > The value was set to 200. I'm not sure whether it's applied to RadosGW or not.
> >
> > Is there a way to check the runtime value of "rgw thread pool size"?
> >
>
> You can do it through the admin socket interface.
> Try running something like:
> $ ceph --admin-daemon /var/run/ceph/radosgw.asok config show
>
> $ ceph --admin-daemon /var/run/ceph/radosgw.asok config set
> rgw_thread_pool_size 200
>
> The path to the admin socket may be different, and in any case can be
> set through the 'admin socket' variable in ceph.conf.
>
> Yehuda
>
>
> 2013/9/11 Yehuda Sadeh <yeh...@inktank.com>
> >>
> >> Try modifying the 'rgw thread pool size' param in your ceph.conf. By
> >> default it's 100, so try increasing it and see if it affects anything.
> >>
> >> Yehuda
> >>
> >>
> >> On Wed, Sep 11, 2013 at 3:14 AM, Kuo Hugo <tonyt...@gmail.com> wrote:
> >>>
> >>> For ref:
> >>>
> >>> Benchmark result
> >>>
> >>> Could someone help me improve the performance of the high-concurrency
> >>> use case?
> >>>
> >>> Any suggestions would be excellent!
> >>>
> >>> +Hugo Kuo+
> >>> (+886) 935004793
> >>>
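For reference on the error above: /tmp/radosgw.sock is the FastCGI socket that radosgw creates for the web server (rgw_socket_path), not a ceph admin socket, which would explain why "config show" against it fails. Following Yehuda's pointer to the 'admin socket' variable, a minimal sketch of the gateway section with an explicit admin socket added (the .asok path is just an example; restart radosgw after editing for the section to take effect):

    [client.radosgw.gateway]
    host = p01
    keyring = /etc/ceph/keyring.radosgw.gateway
    rgw_socket_path = /tmp/radosgw.sock        ; FastCGI socket for the web server
    log_file = /var/log/ceph/radosgw.log
    rgw_thread_pool_size = 200
    admin socket = /var/run/ceph/radosgw.asok  ; socket for "ceph --admin-daemon"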
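With that in place, the runtime value can be checked and changed against the admin socket rather than the FastCGI one; a sketch following Yehuda's commands (the output format of "config show" varies by version, hence the grep):

    root@p01:/tmp# ceph --admin-daemon /var/run/ceph/radosgw.asok config show | grep rgw_thread_pool_size
    root@p01:/tmp# ceph --admin-daemon /var/run/ceph/radosgw.asok config set rgw_thread_pool_size 200

Note that "config set" via the admin socket only changes the running process; a value edited in ceph.conf takes effect on the next radosgw restart.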