Re: [ceph-users] Rados gw upload problems

2013-10-04 Thread Darren Birkett
Hi Warren,

Try using the Ceph-specific fastcgi module as detailed here:
http://ceph.com/docs/next/radosgw/manual-install/
and see if that helps. There was a similar discussion on the list previously:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-March/000360.html

Thanks,
Darren
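A minimal sketch of the steps the linked page covers, assuming Debian/Ubuntu (the exact apt repository line for Ceph's patched fastcgi build is in the doc itself):

    # packages come from the Ceph repo described in the linked doc,
    # not the stock distro archive
    sudo apt-get update
    sudo apt-get install apache2 libapache2-mod-fastcgi
    sudo a2enmod fastcgi rewrite   # radosgw also needs mod_rewrite
    sudo service apache2 restart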

Re: [ceph-users] PG distribution scattered

2013-10-04 Thread Gruher, Joseph R
Question about the ideal number of PGs. This is the advice I've read for a single pool: 50-100 PGs per OSD, or:

    total_PGs = (OSDs * 100) / Replicas

What happens as the number of pools increases? Should each pool have that same number of PGs, or do I need to increase or decrease the number of PGs?
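The rule of thumb as shell arithmetic (example values only, not from the message; the result is commonly rounded up to the next power of two):

    osds=24; replicas=3                 # example values
    echo $(( osds * 100 / replicas ))   # -> 800, typically rounded up to 1024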

[ceph-users] ceph access using curl

2013-10-04 Thread Snider, Tim
I'm having pilot error getting the path correct using curl. Bucket listing using "radosgw-admin bucket list" works, as does the swift API. Can someone point out my (obvious) error? Bucket list works:

    root@controller21:/home/ceph/my-cluster# radosgw-admin bucket list
    2013-10-04 11:28:13.144065
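For reference, the usual Swift-style flow against radosgw with curl looks roughly like this (host, user, and key are placeholders, not from the thread):

    # 1) authenticate; the response headers carry X-Storage-Url and X-Auth-Token
    curl -i -H "X-Auth-User: testuser:swift" -H "X-Auth-Key: secret" \
         http://radosgw.example.com/auth/v1.0
    # 2) list a bucket using those two values
    curl -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/mybucket"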

[ceph-users] v0.67.4 released

2013-10-04 Thread Sage Weil
This point release fixes an important performance issue with radosgw, keystone authentication token caching, and CORS. All users (especially those of rgw) are encouraged to upgrade.

Notable changes:

* crush: fix invalidation of cached names
* crushtool: do not crash on non-unique bucket ids
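A rough upgrade sketch for Debian/Ubuntu users (package and service names assumed, not from the announcement; restart daemons in the usual rolling order, rgw last):

    sudo apt-get update
    sudo apt-get install ceph radosgw
    sudo service radosgw restart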

Re: [ceph-users] ceph access using curl

2013-10-04 Thread Darren Birkett
Try passing '--debug' to the swift command. It should output the equivalent curl command for you to use.

- Darren

"Snider, Tim" wrote:
>I'm having pilot error getting the path correct using curl.
>Bucket listing using "radosgw-admin bucket list" works, as does the
>swift API.
>Can someone point out my (obvious) error?
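Roughly what that looks like (endpoint and credentials are placeholders):

    swift --debug -A http://radosgw.example.com/auth/v1.0 \
          -U testuser:swift -K secret list mybucket
    # the debug output includes the equivalent curl command line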

[ceph-users] ceph uses too much disk space!!

2013-10-04 Thread Linux Chips
Hi everyone; we have a small testing cluster: one node with 4 OSDs of 3TB each. I created one RBD image of 4TB. Now the cluster is nearly full:

    # ceph df
    GLOBAL:
        SIZE     AVAIL    RAW USED    %RAW USED
        11178G   1783G    8986G       80.39
    POOLS:
        NAME    ID    USED
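One thing worth checking is the pool's replication factor, since raw usage is roughly the written data times the number of replicas; with the then-default size of 2, a fully written 4TB image accounts for about 8TB of raw space, close to the 8986G shown (pool name 'rbd' assumed):

    ceph osd pool get rbd size    # prints e.g. "size: 2"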