[ceph-users] Re: Squid Manager Daemon: balancer crashing orchestrator and dashboard

2024-10-24 Thread John Mulligan
On Thursday, October 24, 2024 11:01:32 AM EDT Alexander Closs wrote: > Just chiming in to say this also affected our cluster, same symptoms and a > temporary fix of disabling the balancer. Happy to add my cluster's logs to > the issue, though I suspect they'll look the same as Laimis' cluster. Ple
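The temporary workaround described in this thread (disabling the balancer) maps onto the standard Ceph CLI. A minimal sketch, assuming an admin keyring is available on the host and that these commands run against a live cluster:

```shell
# Check whether the balancer module is on and which mode it uses
ceph balancer status

# Temporarily disable automatic balancing (the workaround from this thread)
ceph balancer off

# Re-enable later, once the mgr crash is resolved:
# ceph balancer on
```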

[ceph-users] Re: centos9 or el9/rocky9

2024-10-24 Thread Frédéric Nass
Hi Marc, Make sure you have a look at CrowdSec [1] for distributed protection. It's well worth the time. Regards, Frédéric. [1] https://github.com/crowdsecurity/crowdsec From: Marc Sent: Thursday, October 24, 2024 22:52 To: Ken Dreyer Cc: ceph-users Subject: [cep

[ceph-users] IO500 SC24 List Call for Submissions

2024-10-24 Thread IO500 Committee
Call for Submission Submission Deadline: Nov 10th, 2024 AoE The IO500 is now accepting and encouraging submissions for the upcoming 15th semi-annual IO500 Production and Research lists, in conjunction with SC24. We are also accepting submissions to both the Production and Research 10 Client No

[ceph-users] Re: Squid Manager Daemon: balancer crashing orchestrator and dashboard

2024-10-24 Thread Tyler Stachecki
On Thu, Oct 24, 2024, 11:44 AM Alexander Closs wrote: > Will do! > > > On Oct 24, 2024, at 11:41 AM, John Mulligan < > phlogistonj...@asynchrono.us> wrote: > > > > On Thursday, October 24, 2024 11:01:32 AM EDT Alexander Closs wrote: > >> Just chiming in to say this also affected our cluster, same

[ceph-users] Re: centos9 or el9/rocky9

2024-10-24 Thread Anthony D'Atri
Is this moot if the Ceph daemon nodes are numbered in RFC1918 space or otherwise not reachable from the internet at large? > >> >> Sorry for posting off topic, a bit too lazy to create yet another >> account somewhere. I still need to make this upgrade to a different os. I >> have now some v

[ceph-users] Re: centos9 or el9/rocky9

2024-10-24 Thread Marc
> > Sorry for posting off topic, a bit too lazy to create yet another > account somewhere. I still need to make this upgrade to a different os. I > have now some vms on centos9 stream. What annoys me a lot is that tcp > wrapper support is not added to ssh by default. (I am using auto fed dns > bla

[ceph-users] Re: centos9 or el9/rocky9

2024-10-24 Thread Ken Dreyer
On Wed, Oct 23, 2024 at 5:12 AM Marc wrote: > > Sorry for posting off topic, a bit too lazy to create yet another account > somewhere. I still need to make this upgrade to a different os. I have now > some vms on centos9 stream. What annoys me a lot is that tcp wrapper > support is not added to ssh by default

[ceph-users] Re: Squid Manager Daemon: balancer crashing orchestrator and dashboard

2024-10-24 Thread Alexander Closs
Will do! > On Oct 24, 2024, at 11:41 AM, John Mulligan > wrote: > > On Thursday, October 24, 2024 11:01:32 AM EDT Alexander Closs wrote: >> Just chiming in to say this also affected our cluster, same symptoms and a >> temporary fix of disabling the balancer. Happy to add my cluster's logs to >>

[ceph-users] Re: pgs not deep-scrubbed in time and pgs not scrubbed in time

2024-10-24 Thread Peter Grandi
> Most are from not scrubbed since end of August … That is lucky! On an inherited Ceph instance I found most of them unscrubbed for 1-2 years. :-) The usual reasons for delays are insufficient IOPS (both scrub types) and insufficient bandwidth (deep scrubbing). Scrubbing like balancing and
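To see how far behind scrubbing actually is, the built-in health and PG reports are usually enough. A hedged sketch against a live cluster (the tuning line is an example, not a recommendation; raising scrub concurrency consumes the same IOPS the thread says are scarce):

```shell
# Which PGs are flagged as not (deep-)scrubbed in time
ceph health detail | grep -i scrub

# Per-PG last scrub / deep-scrub timestamps
ceph pg dump pgs --format json-pretty | less

# If there is IOPS headroom, concurrency can be raised cautiously, e.g.:
# ceph config set osd osd_max_scrubs 2
```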

[ceph-users] Re: Ceph orchestrator not refreshing device list

2024-10-24 Thread Eugen Block
Hi, there are a couple of ways to get your OSDs into "managed" state. You can't remove the "unmanaged" service because it's unmanaged. ;-) Just an example from a test cluster where I adopted three OSDs; now they're unmanaged as expected: soc9-ceph:~ # ceph orch ls osd NAME PORTS RUNNIN
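One common way to move adopted OSDs under orchestrator management is to apply an OSD service spec. A minimal sketch, assuming a cephadm cluster; the spec file name, service_id, and the catch-all placement/device filters below are illustrative and should be narrowed for a real deployment:

```shell
# Inspect current OSD services; adopted OSDs typically show up as unmanaged
ceph orch ls osd

# A hypothetical spec (osd-spec.yaml): let the orchestrator manage OSDs
# on all available devices of all hosts -- adjust placement and filters
cat > osd-spec.yaml <<'EOF'
service_type: osd
service_id: default
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
EOF

# Apply the spec so matching devices become orchestrator-managed
ceph orch apply -i osd-spec.yaml
```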

[ceph-users] Re: failed to load OSD map for epoch 2898146, got 0 bytes

2024-10-24 Thread Alex Walender
Hey all, I had a very similar issue years back. OSDs would take a long time starting when they were out for a while (like a few weeks). The counter was starting over and over again since the OSD service would restart itself after a while. In my case, the issue was that there was a new OSD epo

[ceph-users] Re: Ceph orchestrator not refreshing device list

2024-10-24 Thread Tobias Fischer
Hi Bob, have you tried restarting the active mgr? (Sometimes the mgr gets stuck and prevents the orchestrator from working correctly.) Regarding the orchestrator device scan: have a look at the ceph-volume.log on the corresponding host. You will find it under /var/log/ceph/CLUSTER-ID/ceph-volume.lo
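The two suggestions above can be sketched as follows; assumes a cephadm cluster, and CLUSTER-ID stands in for the cluster fsid as in the path above:

```shell
# Fail over to a standby mgr, which restarts the currently active one
ceph mgr fail

# Ask the orchestrator to rescan devices instead of waiting for the cache
ceph orch device ls --refresh

# Watch ceph-volume activity on the affected host while the scan runs
tail -f /var/log/ceph/CLUSTER-ID/ceph-volume.log
```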