On Tue, Sep 3, 2013 at 9:13 AM, Fuchs, Andreas (SwissTXT)
<andreas.fu...@swisstxt.ch> wrote:
> We are testing radosgw with Cyberduck; so far we see the following issues:
>
> 1. In the Apache error log, for each file PUT we see:
>
> [Tue Sep 03 17:35:24 2013] [warn] FastCGI: 193.218.104.138 PUT 
> https://193.218.100.131/test/tesfile04.iso auth AWS ***
> [Tue Sep 03 17:35:24 2013] [warn] FastCGI: JJJ gotCont=1
>
> - This is with the Ceph version of Apache/FastCGI.
> - This happens for each file: once at the beginning of the transfer and,
> at least for large files, several times at the end.
> - The standard Apache/FastCGI combo does not show this issue.
> What are we being warned about here?
>

That's debug output that I thought was long gone; we need to remove it.

>
> 2. Large file transfer of a 4 GB ISO image with Cyberduck
> Uploads of a 4 GB .iso file fail at the end of each upload; the file is then
> shown as 0 B in the directory listing.
>
> At the end of the upload I get the following in the radosgw log:
> 2013-09-03 18:00:49.427092 7faf87fff700  2 
> RGWDataChangesLog::ChangesRenewThread: start
> 2013-09-03 18:01:11.427201 7faf87fff700  2 
> RGWDataChangesLog::ChangesRenewThread: start
> 2013-09-03 18:01:33.427310 7faf87fff700  2 
> RGWDataChangesLog::ChangesRenewThread: start
> 2013-09-03 18:01:36.557121 7faf46ff5700 10 x>> x-amz-acl:public-read
> 2013-09-03 18:01:36.557567 7faf46ff5700 20 get_obj_state: rctx=0x7faf540251c0 
> obj=test-han:tesfile05.iso state=0x7faf5412ecd8 s->prefetch_data=0
> 2013-09-03 18:01:36.558581 7faf46ff5700  0 setting object 
> write_tag=default.6641.36
> 2013-09-03 18:01:36.574302 7faf46ff5700 20 get_obj_state: rctx=0x7faf540251c0 
> obj=test-han:_shadow__Bw6LkKRYSmAwEh5VBmxo7fb5Md-IY7d_1 state=0x7faf54028488 
> s->prefetch_data=0
> 2013-09-03 18:01:36.575215 7faf46ff5700 20 get_obj_state: s->obj_tag was set 
> empty
> 2013-09-03 18:01:36.575250 7faf46ff5700 20 prepare_atomic_for_write_impl: 
> state is not atomic. state=0x7faf54028488
> 2013-09-03 18:01:36.579593 7faf46ff5700 20 get_obj_state: rctx=0x7faf540251c0 
> obj=test-han:_shadow__Bw6LkKRYSmAwEh5VBmxo7fb5Md-IY7d_2 state=0x7faf5412ea28 
> s->prefetch_data=0
> 2013-09-03 18:01:36.580581 7faf46ff5700 20 get_obj_state: s->obj_tag was set 
> empty
> 2013-09-03 18:01:36.580635 7faf46ff5700 20 prepare_atomic_for_write_impl: 
> state is not atomic. state=0x7faf5412ea28
> 2013-09-03 18:01:36.585058 7faf46ff5700 20 get_obj_state: rctx=0x7faf540251c0 
> obj=test-han:_shadow__Bw6LkKRYSmAwEh5VBmxo7fb5Md-IY7d_3 state=0x7faf5412aaa8 
> s->prefetch_data=0
> 2013-09-03 18:01:36.585956 7faf46ff5700 20 get_obj_state: s->obj_tag was set 
> empty
> 2013-09-03 18:01:36.586005 7faf46ff5700 20 prepare_atomic_for_write_impl: 
> state is not atomic. state=0x7faf5412aaa8
> 2013-09-03 18:01:36.589811 7faf46ff5700 20 get_obj_state: rctx=0x7faf540251c0 
> obj=test-han:_shadow__Bw6LkKRYSmAwEh5VBmxo7fb5Md-IY7d_4 state=0x7faf5412a068 
> s->prefetch_data=0
> 2013-09-03 18:01:36.590886 7faf46ff5700 20 get_obj_state: s->obj_tag was set 
> empty
> 2013-09-03 18:01:36.590921 7faf46ff5700 20 prepare_atomic_for_write_impl: 
> state is not atomic. state=0x7faf5412a068
> 2013-09-03 18:01:36.594977 7faf46ff5700 20 get_obj_state: rctx=0x7faf540251c0 
> obj=test-han:_shadow__Bw6LkKRYSmAwEh5VBmxo7fb5Md-IY7d_5 state=0x7faf54126468 
> s->prefetch_data=0
> 2013-09-03 18:01:36.596140 7faf46ff5700 20 get_obj_state: s->obj_tag was set 
> empty
> .
> .
> hundreds of similar lines
> .
> .
> .
> 2013-09-03 18:01:42.210941 7faf46ff5700 20 get_obj_state: rctx=0x7faf540251c0 
> obj=test-han:_shadow__Bw6LkKRYSmAwEh5VBmxo7fb5Md-IY7d_1038 
> state=0x7faf54172738 s->prefetch_data=0
> 2013-09-03 18:01:42.211888 7faf46ff5700 20 get_obj_state: s->obj_tag was set 
> empty
> 2013-09-03 18:01:42.211947 7faf46ff5700 20 prepare_atomic_for_write_impl: 
> state is not atomic. state=0x7faf54172738
> 2013-09-03 18:01:42.216903 7faf46ff5700 20 get_obj_state: rctx=0x7faf540251c0 
> obj=test-han:_shadow__Bw6LkKRYSmAwEh5VBmxo7fb5Md-IY7d_1039 
> state=0x7faf54172a88 s->prefetch_data=0
> 2013-09-03 18:01:42.217757 7faf46ff5700 20 prepare_atomic_for_write_impl: 
> state is not atomic. state=0x7faf54172a88
> 2013-09-03 18:01:42.221839 7faf46ff5700  0 WARNING: set_req_state_err 
> err_no=27 resorting to 500

Error 27 means EFBIG. I do see a few newish error paths in the OSD
that may return this. Can you try setting the following on your OSDs:
osd max attr size = 655360
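
For example, a quick sketch of where that could go; the [osd] section
placement and the injectargs form are just examples, adjust to your setup:

    # ceph.conf on the OSD hosts (restart the OSDs after editing)
    [osd]
        osd max attr size = 655360

    # or, if your release supports it, inject into the running OSDs
    ceph tell osd.* injectargs '--osd-max-attr-size 655360'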

> 2013-09-03 18:01:42.221868 7faf46ff5700  2 req 36:402.848669:s3:PUT 
> /test-han/tesfile05.iso:put_obj:http status=500
> 2013-09-03 18:01:42.223232 7faf46ff5700  1 ====== req done req=0x25e24b0 
> http_status=500 ======
> 2013-09-03 18:01:55.427440 7faf87fff700  2 
> RGWDataChangesLog::ChangesRenewThread: start
> 2013-09-03 18:02:01.331323 7faf9d011780 20 enqueued request req=0x25e2020
> 2013-09-03 18:02:01.331341 7faf9d011780 20 RGWWQ:
> 2013-09-03 18:02:01.331343 7faf9d011780 20 req: 0x25e2020
> 2013-09-03 18:02:01.331352 7faf9d011780 10 allocated request req=0x25e2500
> 2013-09-03 18:02:01.331377 7faf307c8700 20 dequeued request req=0x25e2020
> 2013-09-03 18:02:01.331385 7faf307c8700 20 RGWWQ: empty
>
> 3. Performance
> While I get great performance from the radosgw to the storage nodes, the
> performance from Cyberduck to the radosgw is poor (10 times slower).
>
> On a rados filesystem mounted on the radosgw I get around 100 MB/s write
> throughput (here I know it's the 1 Gb sync network between the nodes that is
> the limiting factor).
> With a Cyberduck test I only get 10 MB/s and I cannot identify the
> bottleneck; the network between the test client and the radosgw is 1 Gb and
> verified.

Can you verify that you have enough PGs per pool? (There are example commands
for checking this below.) Also, turning off the debug log might help some:

debug rgw = 0

There are a few other log settings that might have an effect, but I think that
at this point everything else is already turned off by default.
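
For reference, a rough sketch of both checks; the pool name .rgw.buckets and
the [client.radosgw.gateway] section name are only examples, substitute
whatever your setup actually uses:

    # how many placement groups the rgw data pool has
    ceph osd pool get .rgw.buckets pg_num

    # raise it if it's too low (pg_num can only be increased, not decreased)
    ceph osd pool set .rgw.buckets pg_num 256
    ceph osd pool set .rgw.buckets pgp_num 256

    # ceph.conf, gateway section, to drop the rgw debug logging
    [client.radosgw.gateway]
        debug rgw = 0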