On Thu, 13 Jan 2022 at 08:38, Szabo, Istvan (Agoda) wrote:
> Hi,
> I can see a lot of messages regarding the rotating key, but I am not sure this is
> the root cause.
> 2022-01-13 03:21:57.156 7fe7e085e700 -1 monclient: _check_auth_rotating
> possible clock skew, rotating keys expired way too early (
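Before chasing the rotating-key message, it is worth confirming whether there is real clock drift between the monitors. A hedged sketch of how to check this (the hostnames mon1, mon2, mon3 are placeholders, and chrony is assumed as the time daemon):

```shell
# Ask the monitors themselves how far their clocks diverge:
ceph time-sync-status

# Check NTP synchronization state on each mon host (chrony assumed):
for host in mon1 mon2 mon3; do
    ssh "$host" chronyc tracking | grep -E 'System time|Leap status'
done
```

If `time-sync-status` shows only sub-millisecond skew, the log message points at the key-rotation logic rather than the clocks.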
On 13.01.22 at 09:07, Szabo, Istvan (Agoda) wrote:
> Yes, it's enabled, just died again, this is in the log now:
We suspect that it has something to do with this backport, which got merged in 14.2.22:
https://tracker.ceph.com/issues/48713
It was intended to fix the clock skew issue, but in fact we
On 13.01.22 at 09:19, Szabo, Istvan (Agoda) wrote:
> But in your case the election to the other mgr is successful, am I correct?
> So the dashboard is always up for you? Not sure why it is not for me, maybe I need to
> disable it really :/
I can't tell you why it's not failing over. But if you don't track th
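One way to see whether the standby mgr actually takes over (and whether the dashboard follows it) is to force a failover by hand. A hedged sketch:

```shell
# Show the active mgr and any standbys:
ceph mgr stat

# Force the active mgr to step down so a standby takes over
# (replace "x" with the actual active mgr name from the output above):
ceph mgr fail x

# Afterwards, confirm which daemon now serves the dashboard:
ceph mgr services
```

If `ceph mgr stat` shows no standbys at all, the dashboard has nowhere to fail over to, which would explain it staying down.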
I am also using this repository with Nautilus 14.2.22.
> Asking another way -- if we stop building nfs-ganesha and distributing
> them on download.ceph.com -- what would break?
But I did not see updates there for a long time. Why would you stop building
them? You are using them also in the cont
Hello to all.
I have a Proxmox cluster with 7 nodes.
Storage for VM disks and other pool data is on ceph version 15.2.15
(4b7a17f73998a0b4d9bd233cda1db482107e5908) octopus (stable).
On pve-7 I have 10 OSDs, and as a test I want to remove 2 OSDs from this node.
I wrote down the step-by-step commands for how I remove
> I have a Proxmox cluster with 7 nodes.
>
> Storage for VM disks and other pool data is on ceph version 15.2.15
> (4b7a17f73998a0b4d9bd233cda1db482107e5908) octopus (stable)
>
> On pve-7 I have 10 OSDs, and as a test I want to remove 2 OSDs from this node.
>
> I wrote down the step-by-step commands for how I remove this
Thank you,
No, my installation is not very old; it is a fresh installation, less than six
months old.
So I will follow my written steps to remove the OSDs; after the test I can send the output.
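For reference, a common sequence for removing a single OSD from a node looks roughly like this. A hedged sketch, assuming the OSD ID 10 as a placeholder and stock Octopus tooling (Proxmox's `pveceph osd destroy` wraps similar steps):

```shell
# Mark the OSD out and let data migrate off it:
ceph osd out 10
ceph -s          # repeat until the cluster reports HEALTH_OK again

# Stop the daemon on the host that carries the OSD:
systemctl stop ceph-osd@10

# Remove it from the CRUSH map, auth database and OSD map in one go:
ceph osd purge 10 --yes-i-really-mean-it
```

Waiting for rebalancing to finish between "out" and "purge" is the important part; purging while PGs are still degraded risks data availability.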
On 13.01.2022 at 12:54, Janne Johansson wrote:
I have a Proxmox cluster with 7 nodes.
Storage for VM disk and others pool data is o
Hi everyone,
This month's Ceph User + Dev Monthly meetup is next Thursday, January
20, 2022, 15:00-16:00 UTC. This time we would like to hear what users
have to say about four themes of Ceph: Quality, Usability, Performance
and Ecosystem. Any kind of feedback is welcome! Please feel free to
add mo
https://github.com/ceph/ceph/pull/44228
I don't think this has landed in a Pacific backport yet, but it probably will
soon!
On Tue, Jan 11, 2022 at 6:29 PM Bryan Stillwell wrote:
> I recently had a server (named aladdin) that was part of my home cluster
> die. It held 6 out of 32 OSDs, so to p