Hi Laimis,
Thanks for reporting. Can you please raise a tracker ticket and attach the
mgr and mon logs? Can you bump up the logging level in the balancer module
with `ceph config set mgr mgr/balancer/ debug` and the mon logs with `ceph
config set mon.* debug_mon 20`?
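For reference, the full commands we'd usually use are along these lines (the
log_level module option name is from memory, please double-check it on your
version):

  # raise the balancer module's own log level on the mgr
  ceph config set mgr mgr/balancer/log_level debug
  # raise mon debug logging
  ceph config set mon debug_mon 20
  # and revert both once the logs are captured
  ceph config rm mgr mgr/balancer/log_level
  ceph config rm mon debug_mon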
Thanks,
Laura
On Fri, Oct 18
Hello community,
We are facing an issue after migrating from Reef 18.2.4 to Squid 19.2.0 with
the Ceph manager daemon and were wondering if anyone has already faced this or
could guide us where to look further. When turning on the balancer (upmap
mode) it hangs our mgr completely most of the time a
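(For anyone trying to reproduce this, the commands involved are just the
standard ones, roughly:

  ceph balancer mode upmap
  ceph balancer on        # shortly after this the mgr stops responding for us
  # disabling the balancer and failing the mgr over is one way to recover:
  ceph balancer off
  ceph mgr fail

though exact behaviour may of course differ per cluster.)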
My knowledge of AppArmor and Ubuntu is too limited to give a qualified
answer, so I'll leave it for others to respond.
Quoting Dominique Ramaekers:
AppArmor profiles for Ceph are apparently very limited on Ubuntu.
On hvs001 (not misbehaving host) with services osd's, mgr, prometheus, mon
- /
Hi Malte,
So my only suggestion would be to bring up a new MGR, issue a failover to
that MGR, and see if the orchestrator works again.
It should suffice to change the container_image in the unit.run file
(/var/lib/ceph/{FSID}/mgr.{MGR}/unit.run):
CONTAINER_IMAGE={NEWER IMAGE}
So stop one
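A rough sketch of what I mean (untested; adjust FSID, mgr name and image):

  # stop the mgr's systemd unit
  systemctl stop ceph-{FSID}@mgr.{MGR}.service
  # edit unit.run and replace the old image reference with the newer one,
  # i.e. CONTAINER_IMAGE={NEWER IMAGE}
  # (the image name may appear more than once in that file)
  vi /var/lib/ceph/{FSID}/mgr.{MGR}/unit.run
  # start it again, then fail the active mgr so a standby takes over
  systemctl start ceph-{FSID}@mgr.{MGR}.service
  ceph mgr fail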
Hi Frank,
> Does this setting affect PG removal only or is it affecting other operations
> as well? Essentially: can I leave it at its current value or should I reset
> it to default?
Only PG removal, which is why we set it high enough that it
effectively disables that process.
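If you do decide to put it back later, something like this should be enough
(substitute the actual option name you set, and the section you set it in;
osd is just an assumption here):

  # check what a running OSD currently uses, e.g. osd.0
  ceph config show osd.0 <the_option>
  # drop the override so the built-in default applies again
  ceph config rm osd <the_option>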
Josh
What release are you running where ceph-deploy still works?
I get what you're saying, but really you should get used to OSD IDs being
arbitrary.
- ``ceph osd ls-tree `` will output a list of OSD ids under
the given CRUSH name (like a host or rack name). This is useful
for applyi
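For example, something like this (myhost01 is just a placeholder):

  # list the OSD ids under a CRUSH bucket (host, rack, ...)
  ceph osd ls-tree myhost01
  # and act on all of them in one go, e.g. mark them out
  for id in $(ceph osd ls-tree myhost01); do ceph osd out "$id"; done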
Hi Joshua,
thanks for this reply. It is CephFS with comparatively large spinners and a
significant percentage of small files. Thanks for pointing out the config
option. There were still a few PGs left on the disks and I had time to try a
few settings. Not sure if the results are really representa
I’m on a mobile phone right now, so I can’t go into much detail.
But I don’t think it’s necessary to rebuild an entire node, just the
mgr. Otherwise you risk cluster integrity if you also redeploy a MON
with a newer image. I’ll respond later in more detail.
Quoting Malte Stroem:
Well, thank you, Eugen. That is what I planned to do.
Rebuild the broken node and start a MON and a MGR there with the latest
images. Then I will stop the other MGRs and see whether it works.
But I would like to know if I could replace the cephadm on one running
node, stop the MGR and
AppArmor profiles for Ceph are apparently very limited on Ubuntu.
On hvs001 (not misbehaving host) with services osd's, mgr, prometheus, mon
- /bin/prometheus (93021) docker-default
- /usr/bin/ceph-mgr (3574755) docker-default
- /bin/alertmanager (3578797) docker-default
- /usr/bin/ceph-mds (40513
Okay, then I misinterpreted your earlier statement:
I think there are entries of the OSDs from the broken node we removed.
So the stack trace in the log points to the osd_remove_queue, but I
don't understand why it's empty. Is there still some OSD removal going
on or something? Did you pas
Hello Eugen,
thanks a lot. However:
ceph config-key get mgr/cephadm/osd_remove_queue
is empty!
Damn.
So should I get a new cephadm with the diff included?
Best,
Malte
On 17.10.24 23:48, Eugen Block wrote:
Save the current output to a file:
ceph config-key get mgr/cephadm/osd_remove_queue
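For example (the file name is just an example):

  # keep a backup of the key before changing anything
  ceph config-key get mgr/cephadm/osd_remove_queue > osd_remove_queue.json
  # it can be restored later with
  ceph config-key set mgr/cephadm/osd_remove_queue -i osd_remove_queue.json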
I don't think the root cause has been found. I disabled versioning, as I
have to manually remove expired objects using an S3 client.
On Thu, 17 Oct 2024 at 17:50, Reid Guyett wrote:
> Hello,
>
> I am experiencing an issue where it seems all lifecycles are showing either
> PROCESSING or UNINITIAL.