[ceph-users] RadosGW User Limit?

2015-05-15 Thread Daniel Schneller
Hello! I am wondering if there is a limit to the number of (Swift) users that should be observed when using RadosGW. For example, if I were to offer storage via S3 or Swift APIs with Ceph and RGW as the backing implementation and people could just sign up through some kind of public website, n…
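
For reference, the per-user provisioning step whose scalability is being asked about looks roughly like this (a sketch, assuming stock radosgw-admin tooling; the uid and names are placeholders):

    # create an RGW user plus a Swift subuser and key for it
    $ radosgw-admin user create --uid=alice --display-name="Alice"
    $ radosgw-admin subuser create --uid=alice --subuser=alice:swift --access=full
    $ radosgw-admin key create --subuser=alice:swift --key-type=swift --gen-secret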

Re: [ceph-users] How to debug a ceph read performance problem?

2015-05-15 Thread Christian Balzer
Hello, On Thu, 14 May 2015 14:32:11 +0800 changqian zuo wrote: > Hi, > > 1. The network problem has been partly resolved, we removed bonding of > Juno node (Ceph client side), and now IO comes back: > So the problem was not (really) with the Ceph cluster at all. > [root@controller fio-rbd]# r…
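
For context, a direct RBD read test with fio's rbd engine looks roughly like this (a sketch, assuming fio was built with rbd support; pool and image names are placeholders). Testing against the image directly takes the VM and network bonding out of the picture:

    # 4M sequential reads straight against an RBD image
    $ fio --name=rbd-read --ioengine=rbd --clientname=admin --pool=rbd \
          --rbdname=testimg --rw=read --bs=4M --iodepth=16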

[ceph-users] Deleting RGW Users

2015-05-15 Thread Daniel Schneller
Hello! In our cluster we had a nasty problem recently due to a very large number of buckets for a single RadosGW user. The bucket limit was disabled earlier, and the number of buckets grew to the point where OSDs started to go down due to excessive access times, missed heartbeats etc. We hav…
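
A sketch of the cleanup being discussed, assuming standard radosgw-admin semantics (the uid is a placeholder); --purge-data removes the user's buckets and objects along with the user itself:

    # inspect first, then remove the user together with all of its buckets/objects
    $ radosgw-admin bucket list --uid=bigtenant
    $ radosgw-admin user rm --uid=bigtenant --purge-data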

[ceph-users] Still need keyring if cephx is disabled?

2015-05-15 Thread Ding Dinghua
In our deployment environment, cephx will be disabled, and Ceph is installed manually, so my question is whether a keyring is needed or whether we can skip the create-keyring step? Ceph version is firefly (0.80.9), thanks a lot. -- Ding Dinghua
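
For reference, disabling cephx is done in ceph.conf roughly like this (a sketch; whether the monitors still expect a keyring file on disk is exactly the question above):

    [global]
        auth cluster required = none
        auth service required = none
        auth client required = none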

[ceph-users] force_create_pg stuck on creating

2015-05-15 Thread flisky
Hi list, I reformatted some OSDs to increase the journal_size, and since I did it in a hurry, some PGs have lost data and are in the incomplete status. The cluster is stuck in 'creating' status after **ceph osd lost xx** and **force_create_pg**. I find the dir 'osd-xx/current/xx.xxx_head' only co…
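
The sequence referred to above, spelled out (a sketch; the OSD id and pgid are placeholders):

    # mark the dead OSD as lost, then ask the monitors to recreate the PG
    $ ceph osd lost 12 --yes-i-really-mean-it
    $ ceph pg force_create_pg 7.1a
    # check what state the PG actually lands in
    $ ceph pg 7.1a query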

Re: [ceph-users] Write freeze when writing to rbd image and rebooting one of the nodes

2015-05-15 Thread Vasiliy Angapov
Hi, Robert, Here is my crush map.

# begin crush map
tunable choose_local_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 …
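
For anyone following along, the usual round trip for dumping and editing a crush map like the one quoted above (a sketch; file names are placeholders):

    $ ceph osd getcrushmap -o crush.bin      # dump the compiled map
    $ crushtool -d crush.bin -o crush.txt    # decompile to the text form above
    $ crushtool -c crush.txt -o crush.new    # recompile after editing
    $ ceph osd setcrushmap -i crush.new      # inject the edited map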

Re: [ceph-users] RadosGW User Limit?

2015-05-15 Thread Gregory Farnum
On Fri, May 15, 2015 at 12:04 AM, Daniel Schneller wrote: > Hello! > > I am wondering if there is a limit to the number of (Swift) users that > should be observed when using RadosGW. > For example, if I were to offer storage via S3 or Swift APIs with Ceph and > RGW as the backing implementation an…

Re: [ceph-users] force_create_pg stuck on creating

2015-05-15 Thread flisky
All OSDs are up and in, and the crushmap should be okay. ceph -s:

 health HEALTH_WARN
  9 pgs stuck inactive
  9 pgs stuck unclean
  149 requests are blocked > 32 sec
  too many PGs per OSD (4393 > max 300)
  pool .rgw.buckets has too few pgs
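
The 4393 figure is the average number of PG replicas the monitor sees per OSD; with the 24 OSDs mentioned in the follow-up below, that is roughly 4393 × 24 ≈ 105,000 PG replicas cluster-wide. A quick way to see which pools they come from (a sketch; pool name taken from the warning):

    # per-pool PG count and replication factor behind the per-OSD figure
    $ ceph osd pool get .rgw.buckets pg_num
    $ ceph osd pool get .rgw.buckets size
    $ ceph osd dump | grep pg_num            # same numbers for every pool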

Re: [ceph-users] force_create_pg stuck on creating

2015-05-15 Thread flisky
After I restarted all OSD daemons batch by batch, the number of blocked requests went down, and the RGWs came back online. 'netstat -anp|grep radosgw|grep ESTABLISHED|wc -l' eventually settles at 25 (24 OSDs + 1 MON). About the slow requests, the log shows - 'slow request 15360.083412 s…
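
A sketch of the batch-restart-and-check loop described above (firefly-era sysvinit syntax assumed; adjust to your init system and OSD ids):

    # restart one OSD, wait for it to rejoin, then re-check blocked requests
    $ /etc/init.d/ceph restart osd.0
    $ ceph health detail | grep blocked
    # per-OSD view of what the slow requests are actually stuck on
    $ ceph daemon osd.0 dump_ops_in_flight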

Re: [ceph-users] RBD images -- parent snapshot missing (help!)

2015-05-15 Thread Tuomas Juntunen
Hey Pavel, Could you share your C program and the process by which you were able to fix the images? Thanks, Tuomas -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Pavel V. Kaygorodov Sent: 13 May 2015 18:24 To: Jason Dillaman Cc: ceph-users…
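
While waiting for Pavel's program, the standard commands for inspecting parent/child relationships are these (a sketch, not Pavel's fix; pool, image, and snapshot names are placeholders):

    $ rbd info mypool/child-img              # shows the "parent:" field, if any
    $ rbd children mypool/parent-img@snap1   # clones that depend on a snapshot
    $ rbd flatten mypool/child-img           # detach a clone from its parent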