I've been using ceph for nearly a year and one of the things I ran into
quite a while back was that it seems like ceph is placing copies of
objects on different OSDs but sometimes those OSDs can be on the same
host by default. Is that correct? I discovered this by taking down one
host and having so
On Fri, May 04, 2018 at 12:08:35AM PDT, Tracy Reed spake thusly:
> I've been using ceph for nearly a year and one of the things I ran into
> quite a while back was that it seems like ceph is placing copies of
> objects on different OSDs but sometimes those OSDs can be on the same
> host by default.
On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly:
> https://jcftang.github.io/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/
> How can I tell which way mine is configured? I could post the whole
> crushmap if necessary but it's a bit large.
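A quick way to check which failure domain the replicated rule uses, without posting the whole map (file paths below are just examples), is to decompile the CRUSH map and look at the chooseleaf step: "type host" means replicas are spread across hosts, "type osd" means two copies may land on OSDs in the same host.

$ ceph osd getcrushmap -o /tmp/crushmap
$ crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
$ grep chooseleaf /tmp/crushmap.txt
        step chooseleaf firstn 0 type host      # expected if replicas are host-separated
$ ceph osd crush rule dump                      # same information as JSON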
On Fri, May 4, 2018 at 7:26 AM, Tracy Reed wrote:
> Hello all,
>
> I can seemingly enable the balancer ok:
>
> $ ceph mgr module enable balancer
>
> but if I try to check its status:
>
> $ ceph balancer status
> Error EINVAL: unrecognized command
This generally indicates that something went wrong
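If it helps, the usual things to check when a mgr-module command comes back as "unrecognized" (this is only a sketch of the common causes, not a diagnosis of this cluster): confirm the module actually shows up on the active mgr, confirm the CLI and mgr are both luminous or later, and restart the active mgr so it picks the module up.

$ ceph mgr module ls        # "balancer" should appear under enabled_modules
$ ceph versions
$ systemctl restart ceph-mgr@ceph01   # on the active mgr host; instance name is an example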
On Friday, May 4, 2018 at 00:25 -0700, Tracy Reed wrote:
> On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly:
> > https://jcftang.github.io/2012/09/06/going-from-replicating-across-
> > osds-to-replicating-across-hosts-in-a-ceph-cluster/
>
>
> > How can I tell which way mine is configured?
Hi,
On 04/05/18 08:25, Tracy Reed wrote:
> On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly:
>> https://jcftang.github.io/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/
>
>> How can I tell which way mine is configured? I could post the
On Fri, May 4, 2018 at 7:21 AM, Tracy Reed wrote:
> My ceph status says:
>
>   cluster:
>     id:     b2b00aae-f00d-41b4-a29b-58859aa41375
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons, quorum ceph01,ceph03,ceph07
>     mgr: ceph01(active), standbys: ceph-ceph07, ceph03
>     osd: 7
I get this too, since I last rebooted a server (one of three).
ceph -s says:
  cluster:
    id:     a8c34694-a172-4418-a7dd-dd8a642eb545
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum box1,box2,box3
    mgr: box3(active), standbys: box1, box2
    osd: N osds: N up, N in
    rgw: 3
On Fri, May 4, 2018 at 1:22 AM, Akshita Parekh wrote:
> Steps followed during installing ceph-
> 1) Installing rpms
>
> Then the steps given in -
> http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ , apart from step
> 2 and 3
>
> Then ceph-deploy osd prepare osd1:/dev/sda1
> ceph
Hi Valery,
Did you eventually find a workaround for this? I *think* we'd also
prefer rgw to fall back to external plugins, rather than checking them
before local. But I never understood the reasoning behind the change
from jewel to luminous.
I saw that there is work towards a cache for ldap [1] an
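For anyone following along, the "external plugins" here are the rgw auth engines such as LDAP and Keystone; a jewel/luminous-era LDAP setup looks roughly like the sketch below (the client section name, hostnames and DNs are made up for illustration):

[client.rgw.gateway1]
    rgw_s3_auth_use_ldap = true
    rgw_ldap_uri = ldaps://ldap.example.com:636
    rgw_ldap_binddn = "uid=rgw,cn=users,dc=example,dc=com"
    rgw_ldap_secret = /etc/ceph/bindpass
    rgw_ldap_searchdn = "cn=users,dc=example,dc=com"
    rgw_ldap_dnattr = uid

The jewel-to-luminous change being discussed is about the order in which rgw tries these engines relative to local (RADOS-backed) users.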
Hi Dan,
We agreed in upstream RGW to make this change. Do you intend to
submit this as a PR?
regards
Matt
On Fri, May 4, 2018 at 10:57 AM, Dan van der Ster wrote:
> Hi Valery,
>
> Did you eventually find a workaround for this? I *think* we'd also
> prefer rgw to fallback to external plugins,
Most of this is over my head but the last line of the logs on both mds
servers shows something similar to:
0> 2018-05-01 15:37:46.871932 7fd10163b700 -1 *** Caught signal
(Segmentation fault) **
in thread 7fd10163b700 thread_name:mds_rank_progr
When I search for this in ceph user and devel maili
Yes, correct, but the main issue is that the OSD configuration gets lost
after every reboot.
On Fri, May 4, 2018 at 6:11 PM, Alfredo Deza wrote:
> On Fri, May 4, 2018 at 1:22 AM, Akshita Parekh
> wrote:
> > Steps followed during installing ceph-
> > 1) Installing rpms
> >
> > Then the steps given in -
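In case this is the usual ceph-disk-era symptom: an OSD prepared on a bare partition only comes back after a reboot if udev can recognize and activate it (or its mount/unit is brought up some other way). A couple of things that are often worth trying; the OSD id and device below are only examples:

$ ceph-disk activate /dev/sda1            # re-activate the prepared partition by hand
$ systemctl enable --now ceph-osd@0       # make sure the per-OSD unit starts at boot

Giving ceph-deploy the whole device (e.g. osd1:/dev/sda) instead of a pre-made partition also lets it set the GPT type codes the udev rules key on.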
On Fri, May 4, 2018 at 1:59 AM John Spray wrote:
> On Fri, May 4, 2018 at 7:21 AM, Tracy Reed wrote:
> > My ceph status says:
> >
> >   cluster:
> >     id:     b2b00aae-f00d-41b4-a29b-58859aa41375
> >     health: HEALTH_OK
> >
> >   services:
> >     mon: 3 daemons, quorum ceph01,ceph03,ceph07
Hi,
I have a big-ish cluster that, amongst other things, has a radosgw
configured to have an EC data pool (k=12, m=4). The cluster is
currently running Jewel (10.2.7).
That pool spans 244 HDDs and has 2048 PGs.
from the df detail:
.rgw.buckets.ec 26 -N/A N/A
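For context, with k=12, m=4 each object in that pool is stored as 12 data chunks plus 4 coding chunks, so every PG maps to 16 OSDs and the raw overhead factor is (k+m)/k = 16/12 ≈ 1.33x, while tolerating the loss of any 4 chunks. With 2048 PGs over 244 HDDs that works out to roughly 2048 * 16 / 244 ≈ 134 PG shards per HDD on average.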
And also make sure the OSD <-> host mapping is correct with "ceph osd tree". :)
On Fri, May 4, 2018 at 1:44 AM Matthew Vernon wrote:
> Hi,
>
> On 04/05/18 08:25, Tracy Reed wrote:
> > On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly:
> >> https://jcftang.github.io/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/
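For reference, a healthy host-based CRUSH tree has every OSD nested under the host bucket it physically lives in, roughly like this (hostnames and weights below are made up for illustration):

$ ceph osd tree
ID WEIGHT  TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 7.27488 root default
-2 3.63744     host ceph01
 0 1.81872         osd.0         up  1.00000          1.00000
 1 1.81872         osd.1         up  1.00000          1.00000
-3 3.63744     host ceph02
 2 1.81872         osd.2         up  1.00000          1.00000
 3 1.81872         osd.3         up  1.00000          1.00000

If the OSDs sit directly under the root with no host buckets, a rule with "type host" has nothing to separate replicas across.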
On Wed, May 2, 2018 at 7:19 AM, Sean Sullivan wrote:
> Forgot to reply to all:
>
> Sure thing!
>
> I couldn't install the ceph-mds-dbg packages without upgrading. I just
> finished upgrading the cluster to 12.2.5. The issue still persists in 12.2.5
>
> From here I'm not really sure how to do gener
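One fairly standard way to turn that segfault into something debuggable, once the dbg/debuginfo packages matching 12.2.5 are installed, is to pull a full backtrace out of the core file (the path below is only an example; on systemd-coredump systems "coredumpctl gdb ceph-mds" gets to the same place):

$ gdb /usr/bin/ceph-mds /path/to/core
(gdb) thread apply all bt
(gdb) bt full

That backtrace is usually what gets attached to a tracker ticket for the developers.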