Hello!
I am wondering if there is a limit to the number of (Swift) users that
should be observed when using RadosGW.
For example, if I were to offer storage via S3 or Swift APIs with Ceph
and RGW as the backing implementation and people could just sign up
through some kind of public website, n
Hello,
On Thu, 14 May 2015 14:32:11 +0800 changqian zuo wrote:
> Hi,
>
> 1. The network problem has been partly resolved; we removed bonding on the
> Juno node (the Ceph client side), and IO performance is back:
>
So the problem was not (really) with the Ceph cluster at all.
> [root@controller fio-rbd]# r
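For reference, a fio job against an RBD image via the rbd ioengine looks
roughly like the sketch below; the client, pool and image names are
placeholders, not taken from the original report:

  [rbd-randwrite]
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=testimg
  rw=randwrite
  bs=4k
  iodepth=32
  runtime=60
  time_based=1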
Hello!
In our cluster we had a nasty problem recently due to a very large
number of buckets for a single RadosGW user.
The bucket limit was disabled earlier, and the number of buckets grew
to the point where OSDs started to go down due to excessive access
times, missed heartbeats etc.
We hav
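As an aside, the per-user bucket limit can usually be re-enabled with
radosgw-admin along these lines (the user ID and the value 1000 are
placeholders):

  # set the maximum number of buckets this user may own
  radosgw-admin user modify --uid=<user-id> --max-buckets=1000
  # verify the current max_buckets value for the user
  radosgw-admin user info --uid=<user-id>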
In our deployment environment, cephx will be disabled, and Ceph is installed manually,
so my question is whether a keyring is needed, or whether we can skip the
create-keyring step?
The Ceph version is Firefly (0.80.9).
Thanks a lot.
--
Ding Dinghua
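For context, disabling cephx generally comes down to ceph.conf settings like
the ones below; this is only a sketch with the standard auth options, not the
poster's actual config. With these set to none, the daemons no longer
authenticate each other.

  [global]
  auth_cluster_required = none
  auth_service_required = none
  auth_client_required = none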
Hi list,
I reformatted some OSDs to increase the journal_size, and did it in a hurry;
some PGs have lost data and are in the incomplete state.
The cluster is stuck in 'creating' status after **ceph osd lost xx** and
**force_create_pg**. I find the dir 'osd-xx/current/xx.xxx_head' only
co
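For context, the commands mentioned above are usually invoked roughly as
follows (OSD and PG IDs are placeholders):

  # declare an OSD permanently lost so recovery stops waiting for it
  ceph osd lost <osd-id> --yes-i-really-mean-it
  # ask the monitors to (re)create an empty PG
  ceph pg force_create_pg <pg-id>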
Hi, Robert,
Here is my crush map.
# begin crush map
tunable choose_local_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7
On Fri, May 15, 2015 at 12:04 AM, Daniel Schneller wrote:
> Hello!
>
> I am wondering if there is a limit to the number of (Swift) users that
> should be observed when using RadosGW.
> For example, if I were to offer storage via S3 or Swift APIs with Ceph and
> RGW as the backing implementation an
All OSDs are up and in, and the CRUSH map should be okay.
ceph -s:
 health HEALTH_WARN
        9 pgs stuck inactive
        9 pgs stuck unclean
        149 requests are blocked > 32 sec
        too many PGs per OSD (4393 > max 300)
        pool .rgw.buckets has too few pgs
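Regarding the 'too many PGs per OSD' warning above: the monitor estimates this
roughly as the sum over all pools of pg_num * replica size, divided by the
number of OSDs, and compares it against mon_pg_warn_max_per_osd (default 300).
Something like the following shows the inputs to that estimate:

  # per-pool pg_num and replication size
  ceph osd dump | grep '^pool '
  # number of OSDs in the cluster
  ceph osd stat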
After I restarted all OSD daemons batch by batch, the number of blocked
requests went down, and the RGWs came back online.
The count from 'netstat -anp | grep radosgw | grep ESTABLISHED | wc -l'
eventually goes back to 25 (24 OSDs + 1 MON).
About the slow requests, the log shows:
'slow request 15360.083412 s
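If it helps to dig into those, blocked and recently slow operations can
usually be inspected on the affected OSD through its admin socket (the OSD ID
is a placeholder):

  # operations currently blocked on this OSD
  ceph daemon osd.<id> dump_ops_in_flight
  # recently completed slow operations
  ceph daemon osd.<id> dump_historic_ops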
Hey Pavel,
Could you share your C program and the process by which you were able to fix
the images?
Thanks
Tuomas
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Pavel
V. Kaygorodov
Sent: 13 May 2015 18:24
To: Jason Dillaman
Cc: ceph-users