Thank you very much - I think I've solved the whole thing. It wasn't in radosgw.

The solution was:
- increase the timeout in the Apache conf, and
- when using haproxy, also increase the timeouts there!
(A rough sketch of the kind of settings involved follows below.)
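Just a sketch - the exact directives and values depend on the setup, the numbers here are only illustrative:

    # Apache (httpd.conf / vhost); if the gateway sits behind a FastCGI
    # module, its idle timeout may need raising as well
    Timeout 300

    # haproxy.cfg
    defaults
        timeout connect 5s
        timeout client  300s
        timeout server  300s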


Georg

On 22.05.2014 15:36, Yehuda Sadeh wrote:
On Thu, May 22, 2014 at 6:16 AM, Georg Höllrigl
<georg.hoellr...@xidras.com> wrote:
Hello List,

Using the radosgw works fine, as long as the amount of data doesn't get too
big.

I have created one bucket that holds many small files, separated into
different "directories". But whenever I try to access the bucket, I only run
into a timeout. The timeout is at around 30 - 100 seconds, which is
smaller than the Apache timeout of 300 seconds.

I've tried to access the bucket with different clients. One is s3cmd,
which is still able to upload things but takes a rather long time when
listing the contents.
Then I tried s3fs-fuse, which throws
ls: reading directory .: Input/output error

Cyberduck and S3Browser also show similar behaviour.

Is there an option to only send back maybe 1000 list entries, like Amazon
does, so that the client can decide whether it wants to list all the contents?


That's how it works; it doesn't return more than 1000 entries at once.
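So the client itself pages through the listing, at most 1000 keys per request. Just as an illustration (not what any particular tool does internally), explicit paging against the gateway could look roughly like this in Python with boto3 - endpoint, credentials and bucket name are placeholders:

    import boto3

    # Placeholders - point this at your own gateway and bucket.
    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com',
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
    )

    kwargs = {'Bucket': 'mybucket', 'MaxKeys': 1000}
    while True:
        resp = s3.list_objects(**kwargs)
        contents = resp.get('Contents', [])
        for obj in contents:
            print(obj['Key'])
        if not resp.get('IsTruncated') or not contents:
            break
        # Without a delimiter, the next page starts after the last key returned.
        kwargs['Marker'] = contents[-1]['Key']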


Are there any timeout values in radosgw?

Are you sure the timeout is in the gateway itself? It could be Apache
that is timing out. We'd need to see the Apache access logs for these
operations, plus the radosgw debug and messenger logs (debug rgw = 20,
debug ms = 1), to give a better answer.
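For example, something along these lines in ceph.conf on the gateway host (the section name depends on how your rgw instance is named - client.radosgw.gateway here is just the common convention):

    [client.radosgw.gateway]
        debug rgw = 20
        debug ms = 1

Then restart the gateway and reproduce the slow listing.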

Yehuda

