Hello,
The errors are not resolved. Here is what I tried so far, without luck:
I added a sixth monitor (ceph-mon06), then deleted the first one
(ceph-mon01).
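Roughly, I followed the standard add/remove steps, something like this
(a sketch: <mon06-ip> is a placeholder, and the mon06 daemon was prepared
and started beforehand):

  ceph mon add ceph-mon06 <mon06-ip>:6789   # register the new monitor in the monmap
  ceph mon remove ceph-mon01                # drop the old one once quorum is healthy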
The mon IDs shifted down (mon02 was ID 1, now it is ID 0, and so on...)
This is the current monmap:
0: 192.168.50.21:6789/0 mon.ceph-mon02
1: 192
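I'm re-checking the ranks after each change with something like this
(assuming the admin keyring is readable on the host):

  ceph mon dump                             # epoch plus one "rank: addr mon.<name>" line per mon
  ceph quorum_status --format json-pretty   # quorum ranks, leader and monmap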
Hello,
Do you see the cause of the logged errors?
I can't find any documentation about that, so I'm stuck.
I really need some help.
Thanks, everybody
Marco
On Fri, Dec 7, 2018 at 5:30 PM Marco Aroldi wrote:
Thanks Greg,
Yes, I'm using CephFS and RGW (mainly CephFS)
The files are still accessible and users don't report any problems.
Here is the output of ceph -s:

  cluster:
    id:
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum ceph-mon01,ceph-mon02,ceph-mon03,ceph-mon04,ceph-mon05
Well, it looks like you have different data in the MDSMap across your
monitors. That's not good on its face, but maybe there are extenuating
circumstances. Do you actually use CephFS, or just RBD/RGW? What's the
full output of "ceph -s"?
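One quick way to compare (a sketch; the addresses below are placeholders
for your mons' real ones) is to ask each monitor for the map it is
serving and checksum the dumps:

  # substitute the addresses from your monmap
  for m in 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 10.0.0.5; do
      ceph -m "$m:6789" fs dump > "fsmap.$m.txt"   # FSMap as served by that mon
  done
  md5sum fsmap.*.txt   # a differing checksum points at the out-of-sync mon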
-Greg
On Thu, Dec 6, 2018 at 1:39 PM Marco Aroldi wrote:
Sorry about this, I hate to "bump" a thread, but...
Has anyone faced this situation?
Is there a procedure to follow?
Thanks
Marco
On Thu, Nov 8, 2018 at 10:54 AM Marco Aroldi wrote:
Hello,
Since upgrading from Jewel to Luminous 12.2.8, some errors related to
"scrub mismatch" are reported in the logs, every day at the same time.
I have 5 mons (from mon.0 to mon.4) and I need help to identify and
recover from this problem.
This is the log:
2018-11-07 15:13:53.808128 [ERR] mon.4
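I'm pulling these entries out of the cluster log with a plain grep (a
sketch, assuming the default cluster name and log path):

  # run on a monitor host; adjust the path if your cluster log lives elsewhere
  grep 'scrub mismatch' /var/log/ceph/ceph.log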