Hey Joao,
thanks for the pointer! Do you have a timeline for the release of
v12.2.2?
Best,
Nico
--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
On 10/18/2017 10:38 AM, Nico Schottelius wrote:
> Hello Joao,
> thanks for coming back!
> I copied the log of the crashing monitor to
> http://www.nico.schottelius.org/cephmonlog-2017-10-08-v2.xz

The monitor is crashing as part of bug #21300
http://tracker.ceph.com/issues/21300
And the fix is c
On 10/18/2017 10:38 AM, Nico Schottelius wrote:
> Hello Joao,
> thanks for coming back!
> I copied the log of the crashing monitor to
> http://www.nico.schottelius.org/cephmonlog-2017-10-08-v2.xz
> Can I somehow get access to the logs of the other monitors, without
> restarting them?

If you mean incre
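The reply above is cut off; presumably it refers to increasing the log
verbosity of the running monitors rather than restarting them. As a rough
sketch, assuming standard Luminous tooling and nothing stated in this
thread, the debug level of a live monitor can be raised either through its
local admin socket or via injectargs; the mon name "server5" is only a
placeholder:

  # on the host running the monitor, via its admin socket
  ceph daemon mon.server5 config set debug_mon 10/10

  # or, for a monitor that is still reachable and in quorum
  ceph tell mon.server5 injectargs '--debug-mon 10/10'

Setting the value back to the default afterwards works the same way.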
Hello Joao,
thanks for coming back!
I copied the log of the crashing monitor to
http://www.nico.schottelius.org/cephmonlog-2017-10-08-v2.xz
Can I somehow get access to the logs of the other monitors, without
restarting them?
I would like to not stop them, as currently we are running with 2/3
Hi Nico,
I'm sorry I forgot about your issue. Crazy few weeks.
I checked the log you initially sent to the list, but it only contains
the log from one of the monitors, and it's from the one synchronizing.
This monitor is not stuck however - synchronizing is progressing, albeit
slowly.
Can y
Hello everyone,
is there any solution in sight for this problem? Currently our cluster
is stuck with a 2 monitor configuration, as every time we restart the one
on server2, it crashes after a few minutes (and in between the cluster is stuck).
Should we consider downgrading to kraken to fix that problem?
Hey guys,
Does this mean we have to take additional steps when upgrading from Jewel 10.2.10 to
luminous 12.2.1?
- Mehmet
On 9 October 2017 at 04:02:14 CEST, kefu chai wrote:
>On Mon, Oct 9, 2017 at 8:07 AM, Joao Eduardo Luis wrote:
>> This looks a lot like a bug I fixed a week or so ago, but for which
Good morning Joao,

thanks for your feedback! We do actually have three managers running:

  cluster:
    id:     26c0c5a8-d7ce-49ac-b5a7-bfd9d0ba81ab
    health: HEALTH_WARN
            1/3 mons down, quorum server5,server3

  services:
    mon: 3 daemons, quorum server5,server3, out of quorum: server2
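The status paste is cut off before the manager entry; on a Luminous cluster
the services section of "ceph -s" would normally also contain a line such as
the following, which is only an illustration mirroring the monitor names
above, not output taken from this thread:

    mgr: server5(active), standbys: server3, server2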
On Mon, Oct 9, 2017 at 8:07 AM, Joao Eduardo Luis wrote:
> This looks a lot like a bug I fixed a week or so ago, but for which I
> currently don't recall the ticket off the top of my head. It was basically a
http://tracker.ceph.com/issues/21300
> crash each time a "ceph osd df" was called, if a
This looks a lot like a bug I fixed a week or so ago, but for which I
currently don't recall the ticket off the top of my head. It was basically a
crash each time a "ceph osd df" was called, if a mgr was not available after
having set the luminous osd require flag. I will check the log in the morning.
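For context, a rough sketch of the trigger sequence described above on a
cluster freshly upgraded to Luminous; this is only an illustration of the
reported conditions, not a verified reproducer, and stopping the mgr service
simply stands in for "no mgr available":

  # set the luminous osd require flag after the upgrade
  ceph osd require-osd-release luminous

  # make sure no ceph-mgr daemon is reachable
  systemctl stop ceph-mgr.target

  # each call like this was then reported to crash the monitor
  # (http://tracker.ceph.com/issues/21300)
  ceph osd df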