[ceph-users] Re: Help with HA NFS

2025-04-23 Thread tobias tempel
Hej, I've been struggling with Ceph, Kerberized NFSv4, and HA for a while, but now seem to have a working solution with Ceph 18.2.4 ... right now in the stress-testing phase. Manually set up Ganesha servers - no cephadm, no containers. HA is done with keepalived. The current solution works with o
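The keepalived failover described above could look roughly like the following minimal VRRP sketch. The interface name, router id, and virtual IP are hypothetical placeholders, not values from the post; a real deployment would also want a health-check script for the Ganesha service:

```
# Minimal keepalived VRRP sketch for a Ganesha VIP (hypothetical values)
vrrp_instance VI_NFS {
    state BACKUP            # both nodes start BACKUP; priority decides MASTER
    interface eth0          # placeholder interface
    virtual_router_id 51    # must match on all peers
    priority 100            # higher wins the election
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24       # placeholder VIP that NFS clients mount
    }
}
```

Clients mount the VIP, so a Ganesha node failure moves the address rather than forcing a remount.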

[ceph-users] Re: ceph orch upgrade tries to pull latest?

2025-01-10 Thread tobias tempel
as From: "Stephan Hohn" To: "Tobias Tempel" Cc: "Adam King", "ceph-users" Sent: Thursday, 9 January 2025 12:31:43 Subject: [ceph-users] Re: ceph orch upgrade tries to pull latest? Hi Tobias, have you tried to set your private registry before startin
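Pointing cephadm at the private registry before starting the upgrade, as suggested here, might look like the sketch below. The registry name and credentials are hypothetical, and the commands are echoed rather than executed so the sketch has no side effects on a live cluster:

```shell
#!/bin/sh
# Hypothetical registry and credentials -- substitute your own.
REGISTRY="harborregistry"
USER="cephadm"
PASS="secret"

# Echo the commands instead of running them against a cluster.
echo "ceph cephadm registry-login ${REGISTRY} ${USER} ${PASS}"
echo "ceph config set mgr mgr/cephadm/container_image_base ${REGISTRY}/quay.io/ceph/ceph"
```

With the base image pointed at the mirror, cephadm should resolve daemon images from the local registry instead of falling back to upstream defaults.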

[ceph-users] Re: ceph orch upgrade tries to pull latest?

2025-01-09 Thread tobias tempel
-base:latest-master-devel -e NODE_NAME=monitor0x -e CEPH_USE_RANDOM_NONCE=1 docker.io/ceph/daemon-base:latest-master-devel -c %u %g /var/lib/ceph: Trying to pull docker.io/ceph/daemon-base:latest-master-devel... Error: initializing source docker://ceph/daemon-base:latest-master-devel: pinging contai

[ceph-users] ceph orch upgrade tries to pull latest?

2025-01-08 Thread tobias tempel
Dear all, I'm trying a cephadm upgrade in an air-gapped environment from 18.2.2 to 18.2.4 ... yet to no avail. The local image registry is a Harbor instance; I start the upgrade process with ceph orch upgrade start --image harborregistry/quay.io/ceph/ceph:v18.2.4 and status looks good ceph orc
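The upgrade invocation quoted above can be sketched as follows. The Harbor mirror layout is taken from the post; the script only composes and echoes the commands, so it is safe to run anywhere:

```shell
#!/bin/sh
# Mirror layout as quoted in the post: <registry>/<mirrored upstream path>:<tag>
REGISTRY="harborregistry"
IMAGE_PATH="quay.io/ceph/ceph"
VERSION="v18.2.4"
IMAGE="${REGISTRY}/${IMAGE_PATH}:${VERSION}"

# Echo rather than execute, so nothing touches a live cluster.
echo "ceph orch upgrade start --image ${IMAGE}"
echo "ceph orch upgrade status"    # follow progress afterwards
```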

[ceph-users] Re: Correct way to replace working OSD disk keeping the same OSD ID

2024-12-11 Thread tobias tempel
Hi, in case I care to keep an OSD id, (up to now) I do ... ceph osd destroy --yes-i-really-mean-it ... replace disk ... [ ceph-volume lvm zap --destroy /dev/ ] ceph-volume lvm prepare --bluestore --osd-id --data /dev/ [ --block.db /dev/ ] [ --block.wal /dev/ ] ... retrieve i.e. with cep
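The sequence above can be sketched with the elided placeholders filled in by hypothetical values (OSD id 12, device /dev/sdk are illustrations, not from the post). The commands are echoed rather than executed so the sketch stays side-effect-free:

```shell
#!/bin/sh
# Hypothetical OSD id and device -- placeholders, not values from the post.
OSD_ID=12
DEV="/dev/sdk"

# Echo the replacement sequence instead of executing it.
echo "ceph osd destroy ${OSD_ID} --yes-i-really-mean-it"
# ... physically replace the disk here ...
echo "ceph-volume lvm zap --destroy ${DEV}"    # optional: wipe the replacement disk
echo "ceph-volume lvm prepare --bluestore --osd-id ${OSD_ID} --data ${DEV}"
```

Because destroy (unlike purge) keeps the id in the CRUSH map, the prepared replacement can reclaim the same OSD id.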

[ceph-users] Re: purging already destroyed OSD leads to degraded and misplaced objects?

2024-04-04 Thread tobias tempel
- I'll do as you suggest. Thanks again, cheers, toBias From: "Boris" To: "Tobias Tempel" Cc: "ceph-users" Sent: Thursday, 4 April 2024 11:02:09 Subject: Re: [ceph-users] purging already destroyed OSD leads to degraded and misplaced objects? Hi To

[ceph-users] purging already destroyed OSD leads to degraded and misplaced objects?

2024-04-04 Thread tobias tempel
eal problem, as recovery works fine ... but perhaps it indicates some other issue. All of that takes place on Pacific 16.2.14 / AlmaLinux 8.9 - yes, I know, there's some work to do. Perhaps you can give me a hint where to look for an explanation? Thank you, cheers, toBias -- Tobias Tempe