On 06.11.2024 at 17:58, Tim Holloway wrote:
2. Check your SELinux audit log to make sure nothing's being blocked.
I disabled AppArmor completely and
ceph orch daemon add osd ceph-x:/dev/sdy
worked. Perhaps it is worth investigating more closely which AppArmor
rule is blocking the 'add osd'.
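A minimal sketch of how one could start that investigation, assuming
the denials land in the kernel log (the profile name below is only an
example, not taken from my system):

    # Show which AppArmor profiles are loaded and in enforce mode
    sudo aa-status

    # AppArmor denials are logged by the kernel; look for DENIED entries
    sudo journalctl -k | grep -i 'apparmor.*denied'

    # Put a suspect profile into complain mode instead of disabling
    # AppArmor entirely (profile path is an example)
    sudo aa-complain /etc/apparmor.d/usr.sbin.example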
Hi,
I am trying to add OSDs to my new cluster (Ubuntu 24.04 + Podman). Four
devices are listed as available:
root@ceph-1:~# ceph-volume inventory
Device Path    Size      Device nodes   rotates   available   Model name
/dev/nvme0n1   1.82 TB   nvme0n1        False     True
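The command I run for each device, with the host and device names taken
from the inventory above (adjust to your own layout):

    # Add a single device on one host as an OSD
    ceph orch daemon add osd ceph-1:/dev/nvme0n1

    # Or let the orchestrator consume every available device
    ceph orch apply osd --all-available-devices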
On 12.07.2024 at 10:57, Robert Sander wrote:
... I would suggest using Ubuntu 22.04 LTS as the base operating
system. You can use cephadm on top of that without issues.
Yes, that's right. But I already upgraded my systems to 24.04, maybe
too early, my fault. Currently, it's all testing and
Hi,
thanks for your response.
On 12.07.2024 at 10:24, Stefan Kooman wrote:
... Note: just to be sure, you do _NOT_ want to use Ceph from the
Ubuntu 24.04 repositories. The 19.2.x release is not out yet (still RC)
and this is an Ubuntu-released version (why they ship this in an Ubuntu
LTS version in
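For reference, a minimal sketch of switching to the upstream packages
instead, assuming a working cephadm binary ("reef" is just an example
release name):

    # Tell cephadm to configure the upstream download.ceph.com repo
    sudo cephadm add-repo --release reef

    # Install the matching client packages from that repo
    sudo cephadm install ceph-common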
Hi,
On 11.07.2024 at 14:20, John Mulligan wrote:
...
As far as I know, we still have an issue
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2063456
with Ceph on 24.04. I tried the offered fix, but was still unable to
establish a running cluster (maybe my fault, I'm still a newbie to
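For what it's worth, a quick way to check which build actually ends up
running, in case the Ubuntu-shipped package sneaks in:

    # Version of the cephadm binary on the host
    cephadm version

    # Versions of all running daemons in the cluster
    ceph versions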
Hi,
On 30.05.2024 at 08:58, Robert Sander wrote:
... Please show the output of ceph ...
Sorry for the PM. Short update: I started from scratch with a new
cluster, Reef instead of Quincy, and this time I used an RBD instead of
a filesystem for the first test, and the rebalancing took place as expected
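For context, the kind of minimal RBD test I mean (the pool and image
names are just examples):

    # Create a pool for RBD and initialize it
    ceph osd pool create testpool
    rbd pool init testpool

    # Create a small test image
    rbd create --size 10G testpool/testimg

    # Watch the cluster state and OSD utilization while it rebalances
    ceph -s
    ceph osd df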
Hi,
I have been curious about Ceph for a long time and have now started
experimenting to find out how it works. The idea I like most is that
Ceph can provide growing storage without the need to move from storage
x to storage y on the consumer side.
I started with a 3-node cluster where each node g
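For anyone wanting to follow along, a minimal sketch of how such a
three-node bootstrap typically starts (hostnames and IPs are examples;
the cluster's SSH key must be installed on each host first):

    # Bootstrap the first node
    cephadm bootstrap --mon-ip 192.168.1.11

    # Add the other two nodes to the cluster
    ceph orch host add ceph-2 192.168.1.12
    ceph orch host add ceph-3 192.168.1.13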