Hi,
can you please advise which package(s) should be installed?
Thanks
On 06.11.2019 at 22:28, Sage Weil wrote:
> My current working theory is that the mgr is getting hung up when it tries
> to scrape the device metrics from the mon. The 'tell' mechanism used to
> send mon-targeted commands …
On Thu, Nov 7, 2019 at 5:50 AM Karsten Nielsen wrote:
>
> -Original message-
> From: Yan, Zheng
> Sent: Wed 06-11-2019 14:16
> Subject: Re: [ceph-users] mds crash loop
> To: Karsten Nielsen ;
> CC: ceph-users@ceph.io;
> > On Wed, Nov 6, 2019 at 4:42 PM Karsten Nielsen
In Nautilus (Ubuntu Cloud Archive Train Version) the osd caps profile
rbd-read-only seems broken.
It is impossible to map an RBD if the user has the following caps:
[client.yyy]
key = AQBYL8NdHDpnERAAhk8XOKgFNwhUpCo3EMaW3g==
caps mgr = "profile rbd"
caps mon = "profile rbd"
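For comparison, a read-only RBD client would normally also carry an osd cap using the rbd-read-only profile; a sketch of how such a user is usually created and mapped (client.yyy, pool "rbd", and image "myimage" are placeholders, not taken from the report above):

```shell
# Create a client restricted to read-only RBD access on pool "rbd":
ceph auth get-or-create client.yyy \
    mon 'profile rbd' \
    mgr 'profile rbd' \
    osd 'profile rbd-read-only pool=rbd'

# Map the image read-only so the kernel client does not open it for write:
rbd map rbd/myimage --id yyy --read-only
```

Whether mapping still fails with exactly these caps on Nautilus is the question the report raises.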
Today I tried enabling RGW compression on a Nautilus 14.2.4 test cluster and
found it wasn't doing any compression at all. I figure I must have missed
something in the docs, but I haven't been able to find out what that is and
could use some help.
This is the command I used to enable zlib-base
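The command itself is cut off above; on a Nautilus cluster, zone-placement compression is typically configured along these lines (the zone name "default" and placement-id "default-placement" are assumptions, adjust to your zonegroup/zone):

```shell
# Enable zlib compression on the default placement target of the zone:
radosgw-admin zone placement modify \
    --rgw-zone=default \
    --placement-id=default-placement \
    --compression=zlib

# RGW daemons must be restarted to pick up the change, and compression
# only applies to objects written afterwards; pre-existing objects
# remain uncompressed:
systemctl restart ceph-radosgw.target
```

Comparing size vs. size_utilized in `radosgw-admin bucket stats` for freshly written objects should show whether compression is actually taking effect.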
-Original message-
From: Yan, Zheng
Sent: Wed 06-11-2019 14:16
Subject: Re: [ceph-users] mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> On Wed, Nov 6, 2019 at 4:42 PM Karsten Nielsen wrote:
> >
> > -Original message-
> > From: Yan, Zheng
> >
My current working theory is that the mgr is getting hung up when it tries
to scrape the device metrics from the mon. The 'tell' mechanism used to
send mon-targeted commands is pretty kludgey/broken in nautilus and
earlier. It's been rewritten for octopus, but isn't worth backporting--it
ne
On Wed, Nov 6, 2019 at 5:57 PM Hermann Himmelbauer wrote:
>
> Dear Vitaliy, dear Paul,
>
> Changing the block size for "dd" makes a huge difference.
>
> However, still some things are not fully clear to me:
>
> As recommended, I tried writing / reading directly to the rbd and this
> is blazingly f
Dear Vitaliy, dear Paul,
Changing the block size for "dd" makes a huge difference.
However, still some things are not fully clear to me:
As recommended, I tried writing / reading directly to the rbd and this
is blazingly fast:
fio -ioengine=rbd -name=test -direct=1 -rw=read -bs=4M -iodepth=16
-
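The fio command above is truncated; a complete invocation in the same spirit might look like this (pool and image names are assumptions, and fio must be built with rbd engine support):

```shell
# Sequential 4M reads straight through librbd, queue depth 16:
fio --ioengine=rbd --name=test --direct=1 --rw=read --bs=4M --iodepth=16 \
    --pool=rbd --rbdname=testimage --runtime=30 --time_based

# The dd equivalent against a mapped device, where the large block size
# is what makes the difference discussed above:
dd if=/dev/rbd/rbd/testimage of=/dev/null bs=4M count=256 iflag=direct
```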
Well, even after restarting the MGR service the relevant log is still flooded
with these error messages:
2019-11-06 17:46:22.363 7f81ffdcc700 0 auth: could not find secret_id=3865
2019-11-06 17:46:22.363 7f81ffdcc700 0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=3865
Hi,
does anybody get these error messages in the MGR log?
2019-11-06 15:41:44.765 7f10db740700 0 auth: could not find secret_id=3863
2019-11-06 15:41:44.765 7f10db740700 0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=3863
THX
On 06.11.2019 at 10:43, Oliver wrote
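"verify_authorizer could not get service secret" messages of this kind are commonly associated with stale rotating service keys or clock skew between daemons; a first diagnostic pass might look like this (a sketch, not a definitive fix):

```shell
# Check monitor clock skew, a frequent cause of rotating-key mismatches:
ceph time-sync-status
ceph health detail | grep -i skew

# Restarting the mgr forces it to fetch fresh rotating service keys:
systemctl restart ceph-mgr.target
```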
On Wed, Nov 6, 2019 at 4:42 PM Karsten Nielsen wrote:
>
> -Original message-
> From: Yan, Zheng
> Sent: Wed 06-11-2019 08:15
> Subject: Re: [ceph-users] mds crash loop
> To: Karsten Nielsen ;
> CC: ceph-users@ceph.io;
> > On Tue, Nov 5, 2019 at 5:29 PM Karsten Nielsen
Hi Oliver,
On 11/6/19 10:43 AM, Oliver Freyermuth wrote:
[…]
> Did somebody see something similar after running for a week or more with
> Nautilus on old and slow hardware?
yes, same here: significantly more mgr failovers / compaction jobs with
nautilus than with mimic … most likely due to pgs be
Hi together,
interestingly, now that the third mon is missing for almost a week (those
planned interventions always take longer than expected...),
we get mgr failovers (but without crashes).
In the mgr log, I find:
2019-11-06 07:50:05.409 7fce8a0dc700 0 client.0 ms_handle_reset on
v2:10.160.
-Original message-
From: Yan, Zheng
Sent: Wed 06-11-2019 08:15
Subject: Re: [ceph-users] mds crash loop
To: Karsten Nielsen ;
CC: ceph-users@ceph.io;
> On Tue, Nov 5, 2019 at 5:29 PM Karsten Nielsen wrote:
> >
> > Hi,
> >
> > Last week I upgraded my ceph cluster from