Hi,
I've been struggling with Ceph, kerberized NFSv4 and HA for a while, but now I seem to have a working solution with Ceph 18.2.4 ... right now in the stress-testing phase.
Manually set up ganesha servers - no cephadm, no containers. HA is done with
keepalived.
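For what it's worth, the keepalived side is nothing fancy - roughly one VRRP instance like this per ganesha node (interface name, VIP and the health check below are placeholder examples, not our exact config):

  vrrp_script chk_ganesha {
      script "/usr/bin/pgrep ganesha.nfsd"   # crude liveness check for the ganesha daemon
      interval 2
  }

  vrrp_instance nfs_vip {
      state BACKUP
      interface eth0                         # placeholder: NIC carrying the NFS traffic
      virtual_router_id 51
      priority 100                           # raise on one node if you want a preferred master
      advert_int 1
      virtual_ipaddress {
          192.0.2.10/24                      # placeholder: the VIP the NFS clients mount
      }
      track_script {
          chk_ganesha
      }
  }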
The current solution works with o
as
From: "Stephan Hohn"
To: "Tobias Tempel"
Cc: "Adam King" , "ceph-users"
Sent: Thursday, 9 January, 2025 12:31:43
Subject: [ceph-users] Re: ceph orch upgrade tries to pull latest?
Hi Tobias,
have you tried to set your private registry before starting the upgrade?
-base:latest-master-devel -e NODE_NAME=monitor0x -e CEPH_USE_RANDOM_NONCE=1 docker.io/ceph/daemon-base:latest-master-devel -c %u %g /var/lib/ceph:
Trying to pull docker.io/ceph/daemon-base:latest-master-devel...
Error: initializing source docker://ceph/daemon-base:latest-master-devel: pinging contai
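Something along these lines is what I mean - the registry URL, credentials and the monitoring image paths below are placeholders, adjust them to however you mirrored things in Harbor:

  # store the private registry credentials so cephadm uses them on every host
  ceph cephadm registry-login harborregistry <user> <password>

  # point the base and monitoring images at the mirror as well, otherwise
  # cephadm falls back to the docker.io/quay.io defaults for those
  ceph config set mgr mgr/cephadm/container_image_base harborregistry/quay.io/ceph/ceph
  ceph config set mgr mgr/cephadm/container_image_prometheus harborregistry/quay.io/prometheus/prometheus
  ceph config set mgr mgr/cephadm/container_image_node_exporter harborregistry/quay.io/prometheus/node-exporter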
Dear all,
I'm trying a cephadm upgrade in an airgapped environment from 18.2.2 to 18.2.4 ... yet to no avail.
The local image registry is a Harbor instance. I start the upgrade process with
ceph orch upgrade start --image harborregistry/quay.io/ceph/ceph:v18.2.4
and the status looks good:
ceph orch upgrade status
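In case it helps, these are the stock commands I look at while it runs (the image path is of course the Harbor one from above):

  # does every host manage to pull the target image from the mirror?
  ceph orch upgrade check --image harborregistry/quay.io/ceph/ceph:v18.2.4

  # which image the cluster currently thinks it should run
  ceph config get mgr container_image

  # follow the cephadm log to see what each host actually tries to pull
  ceph -W cephadm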
hi,
in case I care to keep an OSD id, this is what I do (up to now):
...
ceph osd destroy <osd-id> --yes-i-really-mean-it
... replace disk ...
[ ceph-volume lvm zap --destroy /dev/<device> ]
ceph-volume lvm prepare --bluestore --osd-id <osd-id> --data /dev/<device> [ --block.db /dev/<db-device> ] [ --block.wal /dev/<wal-device> ]
... retrieve <osd-fsid> i.e. with cep
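With concrete values that looks e.g. like this - osd id 17 and /dev/sdx are just example values, and the fsid of the freshly prepared OSD can be read from ceph-volume:

  ceph osd destroy 17 --yes-i-really-mean-it
  # ... physically replace the disk ...
  ceph-volume lvm zap --destroy /dev/sdx
  ceph-volume lvm prepare --bluestore --osd-id 17 --data /dev/sdx
  ceph-volume lvm list /dev/sdx          # shows the fsid of the new OSD
  ceph-volume lvm activate 17 <new-osd-fsid>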
- I'll do as you suggest.
Thanks again,
cheers, toBias
From: "Boris"
To: "Tobias Tempel"
Cc: "ceph-users"
Sent: Thursday, 4 April, 2024 11:02:09
Subject: Re: [ceph-users] purging already destroyed OSD leads to degraded and
misplaced objects?
Hi Tobias,
eal problem, as recovery works fine ... but
perhaps it indicates some other issue.
All of that takes place on Pacific 16.2.14 / AlmaLinux 8.9 - yes, I know, there's some work to do.
Perhaps you can give me a hint where to look for an explanation?
Thank you
cheers, toBias
--
Tobias Tempel