Hello and thanks again!
Giving some help back to the community:
1. Please correct this doc
http://ceph.com/docs/master/start/quick-rgw/#create-a-gateway-configuration-file
2. I have successfully tested these clients: DragonDisk (
http://www.dragondisk.com/), CrossFTP (http://www.crossftp.com/) and
S3Browser (
Hello everybody,
I'm sending this here in case someone from the list is interested.
Ganeti [1] is a mentoring organization in this year's Google Summer
of Code, and one of the ideas proposed is:
"Better support for RADOS/Ceph in Ganeti"
Please see here:
http://code.google.com/p/ganeti/wiki/Summe
Hello,
Are there any best practices for making RadosGW highly available?
For example, is it the right approach to create two or three RadosGW instances
(keys for ceph-auth, directories and so on) and have something like this in ceph.conf:
[client.radosgw.a]
host = ceph01
...options...
[client.radosgw.b]
host = ce
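For reference, a minimal sketch of what a two-instance setup might look like; the instance names, hostnames, keyring and socket paths below are illustrative assumptions, not taken from the message above:

    [client.radosgw.a]
        host = ceph01
        keyring = /etc/ceph/keyring.radosgw.a
        rgw socket path = /var/run/ceph/radosgw.a.sock
        log file = /var/log/ceph/radosgw.a.log

    [client.radosgw.b]
        host = ceph02
        keyring = /etc/ceph/keyring.radosgw.b
        rgw socket path = /var/run/ceph/radosgw.b.sock
        log file = /var/log/ceph/radosgw.b.log

A load balancer (e.g. haproxy) in front of the two instances would then distribute requests and provide the failover.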
Hi,
I've been trying to use the block device (RBD) recently. I have a running cluster
with 2 machines and 3 OSDs.
On a client machine, let's say A, I created an rbd image using `rbd create`,
then formatted it, mounted it and wrote something to it; everything was working
fine.
However, problem occurred when I tr
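For context, the workflow described above looks roughly like this (a sketch; the image name, size and mount point are assumptions):

    rbd create test --size 1024      # 1024 MB image named "test" (assumed name/size)
    rbd map test                     # maps the image on the client, e.g. as /dev/rbd0
    mkfs.ext4 /dev/rbd0              # format the mapped block device
    mount /dev/rbd0 /mnt/rbd         # mount it and write data (assumed mount point)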
That is the expected behavior. RBD is emulating a real device, you wouldn't
expect good things to happen if you were to plug the same drive into two
different machines at once (perhaps with some soldering). There is no built-in
mechanism for two machines to access the same block device concurr
Thank you, Mike.
The reason I like the block device is that it has the best read
performance, but not being able to share it is a fatal drawback here.
Maybe I should change my strategy or tune the Ceph FS. My goal is to
use a cluster to host a lot of small images, each of which is about 17
2013/5/1 Yudong Guang :
> The reason why I like block device is that it has the best reading
> performance. But not being able to be shared is a fatal drawback here. Maybe
> I should change some strategy or tuning the ceph fs. My goal is to use a
> cluster to host a lot of small images, each of whi
Hi everyone,
I'm setting up a test ceph cluster and am having trouble getting it running
(great for testing, huh?). I went through the installation on Debian
squeeze, had to modify the mkcephfs script a bit because it calls
monmaptool with too many parameters in the $args variable (mine had "--add
Wyatt,
Please post your ceph.conf.
- mike
On 5/1/2013 12:06 PM, Wyatt Gorman wrote:
Hi everyone,
I'm setting up a test ceph cluster and am having trouble getting it
running (great for testing, huh?). I went through the installation on
Debian squeeze, had to modify the mkcephfs script a bit be
Here is my ceph.conf. I just figured out that the second host = isn't
necessary, though it is like that in the 5-minute quick start guide...
(Perhaps I'll submit the couple of fixes that I've had to implement so far.)
That fixes the "redefined host" issue, but none of the others.
[global]
# For
Why would you update the 'rgw usage max user shards' setting? I don't really
understand what it's for. Thank you.
Wyatt,
A few notes:
- Yes, the second "host = ceph" under mon.a is redundant and should be
deleted.
- "auth client required = cephx [osd]" should be simply
auth client required = cephx".
- Looks like you only have one OSD. You need at least as many (and
hopefully more) OSDs than highest rep
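A rough sketch of how the relevant parts of the conf might look after those corrections; the mon address is an assumed placeholder, since the full ceph.conf isn't reproduced here:

    [global]
        auth client required = cephx    # no trailing "[osd]" on this line

    [mon.a]
        host = ceph                     # listed only once
        mon addr = 192.168.0.10:6789    # placeholder address for illustration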
Well, those points solved the issue of the redefined host and the
unidentified protocol. The
"HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 21/42
degraded (50.000%)"
error is still an issue, though. Is this something simple like some hard
drive corruption that I can clean up with
Hi Wyatt,
You need to reduce the replication level on your existing pools to 1, or
bring up another OSD. The default configuration specifies a replication
level of 2, and the default crush rules want to place a replica on two
distinct OSDs. With one OSD, CRUSH can't determine placement for the
r
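For example, reducing the replication level on the default pools would look something like this (a sketch; the pool names are the defaults of that era and may differ on your cluster):

    ceph osd pool set data size 1
    ceph osd pool set metadata size 1
    ceph osd pool set rbd size 1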
On Wed, May 1, 2013 at 9:29 AM, Jeppesen, Nelson
wrote:
> Why would you update 'rgw usage max user shards' setting? I don’t really
> understand what it’s for. Thank you.
This parameter specifies the number of objects that the usage data is
written to. A higher number means that the load spreads across,
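For reference, a sketch of where such a setting would typically be placed in ceph.conf; the section name and value here are illustrative assumptions:

    [client.radosgw.gateway]
        rgw enable usage log = true       # usage logging has to be enabled for sharding to matter
        rgw usage max user shards = 8     # assumed example value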
Hi Wyatt,
This is almost certainly a configuration issue. If I recall, there is a
min_size setting in the CRUSH rules for each pool that defaults to two
which you may also need to reduce to one. I don't have the documentation
in front of me, so that's just off the top of my head...
Dino
On We
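If min_size is indeed what needs lowering, the command would look roughly like this (a sketch; the pool name is an assumption):

    ceph osd pool set rbd min_size 1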
On Wed, May 1, 2013 at 1:32 PM, Dino Yancey wrote:
> Hi Wyatt,
>
> This is almost certainly a configuration issue. If i recall, there is a
> min_size setting in the CRUSH rules for each pool that defaults to two which
> you may also need to reduce to one. I don't have the documentation in front
[ Please keep all discussions on the list. :) ]
Okay, so you've now got just 128 that are sad. Those are all in pool
2, which I believe is "rbd" — you'll need to set your replication
level to 1 on all pools and that should fix it. :)
Keep in mind that with 1x replication you've only got 1 copy of
I added a blueprint for extending the crush rule language. If there are
interesting or strange placement policies you'd like to do and aren't able
to currently express using CRUSH, please help us out by enumerating them
on that blueprint.
Thanks!
sage
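For readers who haven't looked at the rule language before, a typical rule currently looks roughly like the sketch below (a generic replicated rule, not something taken from the blueprint itself):

    rule replicated_rule {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }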
On Wed, May 1, 2013 at 2:44 PM, Sage Weil wrote:
> I added a blueprint for extending the crush rule language. If there are
> interesting or strange placement policies you'd like to do and aren't able
> to currently express using CRUSH, please help us out by enumerating them
> on that blueprint.
On 05/01/2013 04:51 PM, Gregory Farnum wrote:
> On Wed, May 1, 2013 at 2:44 PM, Sage Weil wrote:
>> I added a blueprint for extending the crush rule language. If there are
>> interesting or strange placement policies you'd like to do and aren't able
>> to currently express using CRUSH, please hel