You must have a quorum, i.e. MORE than 50% of your monitors, functioning for the
cluster to function. With one of two monitors you only have 50%, which isn't
enough, so I/O stops.
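For illustration only (the hostnames and addresses below are made up), a
three-monitor layout in ceph.conf is the usual way out of this: with 3
monitors, any 2 still form a quorum, so a single monitor can fail without
blocking I/O.

[mon.a]
    host = mon-a
    mon addr = 192.168.0.11:6789
[mon.b]
    host = mon-b
    mon addr = 192.168.0.12:6789
[mon.c]
    host = mon-c
    mon addr = 192.168.0.13:6789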
Sent from my iPad
> On Oct 11, 2013, at 11:28 PM, "飞" wrote:
>
> hello, I am a new user of ceph,
> I have built a ceph testing
Hello, I am a new user of Ceph.
I have built a Ceph testing environment for block storage.
I have 2 OSDs and 2 monitors. Apart from the failover test, all other tests are normal.
When I perform the failover test, if I stop one OSD the cluster is OK,
but if I stop one monitor the whole cluster dies.
I was wondering if something like this:
http://www.osqa.net/
might be a bit more useful than setting up a brand new forum.
There is a lot of help available between the mailing list and both of the IRC
rooms; however, there are common questions that definitely seem to come up over
and over again.
On Fri, Oct 11, 2013 at 7:46 AM, Valery Tschopp
wrote:
> Hi,
>
> Since we upgraded ceph to 0.67.4, the radosgw-admin doesn't list all the
> users anymore:
>
> root@ineri:~# radosgw-admin user info
> could not fetch user info: no user info saved
>
>
> But it still works for a single user:
>
> root@ine
Just a thought: did you try setting the noop scheduler for the SSDs?
I guess the journal is written uncached(?), so maybe sticking the SSDs
behind a BBWC might help by reducing write latency to near zero. The
wear rate on the SSDs might also be lower (if journal IO straddles
physical cells).
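For what it's worth, flipping the scheduler is just a sysfs write (replace sdX
with the actual SSD device; the current scheduler is shown in brackets, and the
change is not persistent across reboots unless set via the elevator= boot
parameter or a udev rule):

# cat /sys/block/sdX/queue/scheduler
noop deadline [cfq]
# echo noop > /sys/block/sdX/queue/scheduler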
On Thu, Oct 10, 2013 at 12:47 PM, Sergey Pimkov wrote:
> Hello!
>
> I'm testing a small Ceph pool consisting of some SSD drives (without any
> spinners). The Ceph version is 0.67.4. The write performance of this
> configuration seems not as good as it could be when I test it with a small block
> size
Hi
I've also tested 4k performance and found similar results with fio and iozone
tests as well as simple dd. I've noticed that my IO rate doesn't go above 2k-3k
in the virtual machines. I've got two servers with SSD journals but spindles
for the OSDs. I've previously tried to use nfs + zfs on th
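For reference, a typical 4k random-write fio run inside a guest looks roughly
like this (the device name and parameters are only illustrative, and note it
writes straight to the device):

# fio --name=randwrite-4k --ioengine=libaio --direct=1 --rw=randwrite \
      --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --filename=/dev/vdb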
Without more details it sounds like you're just overloading the
cluster. How are the clients generating their load — is there any
throttling?
4 gateways can probably process on the order of 15k ops/second; each
of those PUT ops is going to require 3 writes to the disks on the
backend (times whateve
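Rough numbers from the figures above: 15,000 PUTs/s x 3 replica writes =
45,000 backend writes/s, and spread over 72 OSDs that is roughly 625 writes/s
per disk before journal and filesystem overhead, which is already a lot to ask
of a single disk. Pacing the 300 clients, or capping in-flight requests per
client, should make the results far more stable.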
On Fri, Oct 11, 2013 at 2:49 AM, wrote:
> Hi
>
>
>
> I am installing Ceph using the chef cookbook recipes and I am having an
> issue with ceph-osd-all-starter
>
>
>
> Here’s a dump from client.log
>
>
>
>
>
> Error e
Hi,
Since we upgraded ceph to 0.67.4, the radosgw-admin doesn't list all the
users anymore:
root@ineri:~# radosgw-admin user info
could not fetch user info: no user info saved
But it still works for a single user:
root@ineri:~# radosgw-admin user info --uid=valery
{ "user_id": "valery",
"di
On 10/10/2013 02:47 PM, Sergey Pimkov wrote:
Hello!
I'm testing a small Ceph pool consisting of some SSD drives (without any
spinners). The Ceph version is 0.67.4. The write performance of this
configuration seems not as good as it could be when I test it with a small
block size (4k).
Pool configur
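As a sanity check outside of any VM or filesystem, a small-block rados bench
run directly against the pool can show whether the limit is in the OSDs
themselves (the pool name and parameters here are only illustrative):

# rados bench -p ssdpool 60 write -b 4096 -t 32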
Hi,
I was wondering, why did you use CephFS instead of RBD?
RBD is much more reliable and well integrated with QEMU/KVM.
Or perhaps you want to try CephFS?
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance
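In case it helps, the usual RBD + QEMU/KVM pattern is simply (pool and image
names are just examples):

# rbd create --size 10240 rbd/vm-disk-1
# qemu-system-x86_64 ... -drive format=raw,file=rbd:rbd/vm-disk-1

or the equivalent <disk type='network'> source element in libvirt.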
Hello to all,
Here is my context :
- Ceph cluster composed of 72 OSDs (i.e. 72 disks).
- 4 radosgw gateways
- Round robin DNS for load balancing across gateways
My goal is to test / bench the S3 API.
Here is my scenario, with 300 clients from 300 different hosts:
1) each client uploading abou
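A sketch of the kind of per-client upload loop meant in 1), assuming s3cmd is
configured against the gateway (the real clients may of course differ; bucket
and file names are placeholders):

# for f in obj.*; do s3cmd put "$f" s3://testbucket/"$f"; done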
Hi
I am installing Ceph using the chef cookbook recipes and I am having an issue
with ceph-osd-all-starter
Here's a dump from client.log
Error executing action `start` on resource 'service[ceph_osd]'
==
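If it helps narrow things down, on an Ubuntu/upstart install the jobs can be
poked at by hand outside of chef (job names taken from the error above):

# status ceph-osd-all-starter
# start ceph-osd-all

and the jobs' own output usually ends up under /var/log/upstart/.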
On 10/11/2013 10:25 AM, Ansgar Jazdzewski wrote:
Hi,
I updated my cluster yesterday and everything went well.
But today I got an error I have never seen before.
-
# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 2.5 is active+clean+inconsistent, acting [9,4]
1 scrub errors
-
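If the underlying disks look healthy, the usual first step for a single
inconsistent PG is to have Ceph repair it from the good replica, using the PG
id from the health output:

# ceph pg repair 2.5

then watch ceph health detail until the scrub error clears.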
The documentation page at http://ceph.com/docs/master/radosgw/config/ states:
Important Check the key output. Sometimes radosgw-admin generates a key
with an escape (\) character, and some clients do not know how to handle
escape characters. Remedies include removing the escape character (\),
encap
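Concretely, that means a generated secret like the (made-up) one on the left
has to be entered into the client as on the right:

  "secret_key": "AAAA\/BBBB\/CCCC"  ->  "secret_key": "AAAA/BBBB/CCCC"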
Hi,
I updated my cluster yesterday and everything went well.
But today I got an error I have never seen before.
-
# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 2.5 is active+clean+inconsistent, acting [9,4]
1 scrub errors
-
Any idea how to fix it?
After I did the upgrade I cr
Hi All:
I installed the gateway on my cluster, but I always get a 403 response:
for bucket in conn.get_all_buckets():
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 387,
in get_all_buckets
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseEr
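When boto gets a 403 from radosgw, the first things worth double-checking are
that the access/secret key pair in the client exactly matches what the gateway
has on record (watch out for the escaped \/ issue quoted from the docs earlier
in this digest), e.g.:

# radosgw-admin user info --uid=<your-uid>

and that the hostname boto connects to matches the gateway's configured DNS
name, since a mismatch with bucket-style requests can also show up as a 403.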
Hi,
I'd have to say in general I agree with the other responders. Not really
for reasons of preferring a ML over a forum necessarily, but just because
the ML already exists. One of the biggest challenges for anyone new coming
into an open source project such as Ceph is the availability of informati