Sent: Wednesday, 16 October 2024 17:24
To: Dominique Ramaekers
CC: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Ubuntu 24.02 LTS Ceph status warning
Is apparmor configured differently on those hosts? Or is it running only on
the misbehaving host?
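One quick way to compare the AppArmor state between the hosts could be the
following (assuming the stock AppArmor userspace tools are present; aa-status
needs root):

  # is AppArmor enabled on this host at all?
  sudo aa-enabled
  # summary of loaded profiles and their enforce/complain modes
  sudo aa-status
  # compare the shipped profiles across hosts
  ls /etc/apparmor.d/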
Quoting Dominique Ramaekers:
> 'ceph config get mgr container_image' [...]
[...] a bug report on Ubuntu.
@Eugen and @David, thanks for the input!
> -----Original Message-----
> From: Eugen Block
> Sent: Wednesday, 16 October 2024 17:24
> To: Dominique Ramaekers
> CC: ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: Ubuntu 24.02 LTS Ceph status warning
https://bugs.launchpad.net/ubuntu/+source/libpod/+bug/2040483
https://bugs.launchpad.net/ubuntu/+source/containerd-app/+bug/2065423
I wonder if you're running into fallout from the above bug. I believe a fix
should be rolling out soon, according to those bugs. We ran into a multitude of
seemingly [...]
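If you want to check whether a host still carries the affected package
versions, something like this should show what is installed versus what the
fixed releases in those bug reports require (the package names below are the
stock Ubuntu ones and are an assumption on my side):

  # installed vs. candidate versions of the container runtime packages
  apt-cache policy containerd podman
  # or per package
  dpkg -s containerd | grep -i '^version'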
It seems my docker images aren't automatically managed by ceph?
Can I fix this, or do I have to pull the correct images and remove
the wrong ones myself?
-----Original Message-----
From: Eugen Block
Sent: Friday, 11 October 2024 13:03
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Ubuntu 24.02 LTS Ceph status warning
> [...]                              v17      9cea3956c04b   18 months ago   1.16GB
> quay.io/prometheus/node-exporter   v1.5.0   0da6a335fe13   22 months ago   22.5MB
> quay.io/prometheus/node-exporter   v1.3.1   1dbe0e931976   2 years ago     20.9MB
>
> I pulled the v19 tagged image on hvs004 and my 'cephadm shell ceph -v' gave
> the correct version.
> It seems my docker images aren't automatically managed by ceph?
> Can I fix this, or do I have to pull the correct images and remove the wrong
> ones myself?
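To illustrate that manual route (the quay.io/ceph/ceph repository name is my
assumption, the v17/v19 tags are taken from the listing above, and removing an
image is only safe once no daemon still runs from it):

  # pull the image cephadm expects, on the affected host
  docker pull quay.io/ceph/ceph:v19
  # check which version cephadm resolves inside its shell
  cephadm shell ceph -v
  # remove a stale image only after confirming nothing uses it
  docker ps --filter ancestor=quay.io/ceph/ceph:v17
  docker rmi quay.io/ceph/ceph:v17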
> -----Original Message-----
>
I don't think the warning is related to a specific ceph version. The
orchestrator uses the default image anyway; you can get it via:
ceph config get mgr container_image
'ceph health detail' should reveal which host or daemon is misbehaving. I
would then look into cephadm.log on that host to find out more.
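A condensed version of that checklist, with cephadm's usual log location
assumed (/var/log/ceph/cephadm.log on the affected host):

  # which image does the orchestrator treat as the default?
  ceph config get mgr container_image
  # which host/daemon triggers the warning?
  ceph health detail
  # on that host, inspect cephadm's log (default path assumed)
  less /var/log/ceph/cephadm.log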