On Thursday, October 24, 2024 11:01:32 AM EDT Alexander Closs wrote:
> Just chiming in to say this also affected our cluster, same symptoms and a
> temporary fix of disabling the balancer. Happy to add my cluster's logs to
> the issue, though I suspect they'll look the same as Laimis' cluster.
Ple
Hi Marc,
Make sure you have a look at CrowdSec [1] for distributed protection. It's well
worth the time.
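For ssh brute-force protection specifically, a minimal setup might look
like the sketch below (Debian-family packaging assumed; adjust for your
distro; cscli syntax as in the upstream docs):

  # add the CrowdSec repo and install the agent plus a firewall bouncer
  curl -s https://install.crowdsec.net | sudo sh
  sudo apt install crowdsec crowdsec-firewall-bouncer-iptables
  # pull in the sshd parsers/scenarios and check active bans
  sudo cscli collections install crowdsecurity/sshd
  sudo cscli decisions list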
Regards,
Frédéric.
[1] https://github.com/crowdsecurity/crowdsec
From: Marc
Sent: Thursday, October 24, 2024 22:52
To: Ken Dreyer
Cc: ceph-users
Subject: [cep
Call for Submission
Submission Deadline: Nov 10th, 2024 AoE
The IO500 is now accepting and encouraging submissions for the upcoming
15th semi-annual IO500 Production and Research lists, in conjunction
with SC24. We are also accepting submissions to both the Production and
Research 10 Client No
On Thu, Oct 24, 2024, 11:44 AM Alexander Closs wrote:
> Will do!
>
> > On Oct 24, 2024, at 11:41 AM, John Mulligan <
> phlogistonj...@asynchrono.us> wrote:
> >
> > On Thursday, October 24, 2024 11:01:32 AM EDT Alexander Closs wrote:
> >> Just chiming in to say this also affected our cluster, same
Is this moot if the Ceph daemon nodes are numbered in RFC1918 space or
otherwise not reachable from the internet at large?
>
> Sorry for posting off topic, a bit too lazy to create yet another
> account somewhere. I still need to make this upgrade to a different OS. I
> have now some VMs on CentOS 9 Stream. What annoys me a lot is that TCP
> wrapper support is not added to ssh by default. (I am using auto fed dns
> bla
On Wed, Oct 23, 2024 at 5:12 AM Marc wrote:
>
> Sorry for posting off topic, a bit too lazy to create yet another account
> somewhere. I still need to make this upgrade to a different OS. I have now
> some VMs on CentOS 9 Stream. What annoys me a lot is that TCP wrapper
> support is not added to ssh by default
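For what it's worth, the usual substitute for tcp_wrappers on EL9 is
sshd's own matching; a minimal sketch (the CIDR below is a placeholder,
not from the thread):

  # /etc/ssh/sshd_config.d/50-restrict.conf
  # allow ssh logins only from the management network (placeholder range)
  AllowUsers *@192.0.2.0/24
  # reload to apply: systemctl reload sshd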
Will do!
> On Oct 24, 2024, at 11:41 AM, John Mulligan
> wrote:
>
> On Thursday, October 24, 2024 11:01:32 AM EDT Alexander Closs wrote:
>> Just chiming in to say this also affected our cluster, same symptoms and a
>> temporary fix of disabling the balancer. Happy to add my cluster's logs to
>>
> Most are from not being scrubbed since the end of August …
That is lucky! On an inherited Ceph instance I found most of
them unscrubbed for 1-2 years. :-)
The usual reason for delays in scrubbing is insufficient IOPS
(for both kinds of scrub) and, for deep scrubbing, even
insufficient bandwidth.
Scrubbing, like balancing and
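A quick way to see which PGs are furthest behind, and to give scrubbing
more headroom (a sketch; the JSON field names match recent releases but
can vary a bit between versions):

  # PGs sorted by oldest deep scrub first
  ceph pg dump pgs -f json 2>/dev/null \
    | jq -r '.pg_stats[] | [.pgid, .last_deep_scrub_stamp] | @tsv' \
    | sort -k2 | head
  # allow more parallel scrubs per OSD, if the disks can take it
  ceph config set osd osd_max_scrubs 2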
Hi,
there are a couple of ways to get your OSDs into "managed" state. You
can't remove the "unmanaged" service because it's unmanaged. ;-)
Just an example from a test cluster where I adopted three OSDs; now
they're unmanaged as expected:
soc9-ceph:~ # ceph orch ls osd
NAME PORTS RUNNIN
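To take adopted OSDs under management, you apply a spec that matches
them; a sketch (the service_id and device filter are examples, not taken
from the cluster above):

  # osd-spec.yaml
  service_type: osd
  service_id: default
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      all: true

  # matching OSDs then move from <unmanaged> to the new service
  ceph orch apply -i osd-spec.yaml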
Hey all,
I had a very similar issue years back.
OSDs would take a long time starting when they were out for a while
(like a few weeks).
The counter kept starting over because the OSD service would
restart itself after a while.
In my case, the issue was that there was a new OSD epo
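To check how far behind a booting OSD's osdmap is, a sketch via the
admin socket (run on the OSD's host; osd.12 is an example id):

  # current cluster epoch
  ceph osd stat
  # oldest_map/newest_map in this output show the OSD's catch-up range
  ceph daemon osd.12 status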
Hi Bob,
have you tried to restart the active mgr? (Sometimes the mgr gets stuck
and prevents the orchestrator from working correctly.)
Regarding the orchestrator device scan: have a look into the
ceph-volume.log on the corresponding host. You will find it under
/var/log/ceph/CLUSTER-ID/ceph-volume.lo
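Concretely, a sketch (on older releases 'ceph mgr fail' needs the active
mgr's name as an argument):

  # fail over to a standby mgr; the orchestrator runs in the active one
  ceph mgr fail
  # trigger a fresh device scan, then watch ceph-volume on the host
  ceph orch device ls --refresh
  tail -f /var/log/ceph/CLUSTER-ID/ceph-volume.log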