There is a certain virtue in using a firewall appliance for front-line
protection. I think fail2ban could add IPs to its block list.
An advantage of this is that you don't have to remember what all the
internal servers are to firewall them individually.
Certainly one could update firewall-cmd via
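If the appliance is a Linux box running firewalld, a minimal sketch of that idea,
assuming fail2ban's stock firewallcmd-rich-rules ban action and a hypothetical sshd jail:

  # /etc/fail2ban/jail.local (excerpt)
  [sshd]
  enabled   = true
  banaction = firewallcmd-rich-rules

The roughly equivalent manual block for a single address, for comparison:

  firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.7" reject'
  firewall-cmd --reload

That way the ban lands on the front-line host and covers every internal server behind it at once.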
I think I've asked this before, but has anyone attempted to use a cephadm-type
install with Ubuntu Noble (24.04) running on arm64? I have tried both Reef and Squid,
and neither gets very far. Do I need to file a request for it?
myhost-01:~/ceph$ uname -a
Linux cube-man-01 6.8.0-1010-raspi #11-Ubuntu SMP
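For reference, this is roughly what I'm attempting (the image tag is the standard quay
one for Squid; whether an aarch64 manifest is actually published for it is part of the
question):

  uname -m                              # should report aarch64
  podman pull quay.io/ceph/ceph:v19     # Squid; v18 for Reef
  cephadm bootstrap --mon-ip <MON_IP>   # only if the pull succeeds

If the image pull itself fails for aarch64, that would explain cephadm not getting very far.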
That is super useful. Thank you so much for sharing! :)
Kevin
From: Frank Schilder
Sent: Friday, October 25, 2024 8:03 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: pgs not deep-scrubbed in time and pgs not scrubbed in
time
Check twice before you
- On 25 Oct 24, at 18:21, Frédéric Nass frederic.n...@univ-lorraine.fr
wrote:
> - On 25 Oct 24, at 16:31, Bob Gibson r...@oicr.on.ca wrote:
>
>> Hi Frédéric,
>>
>>> I think this message shows up because this very specific post-adoption 'osd'
>>> service has already been marked as
On Fri, Oct 25, 2024 at 11:03 AM Friedrich Weber wrote:
>
> Hi,
>
> Some of our Proxmox VE users have noticed that a large fstrim inside a
> QEMU/KVM guest does not free up as much space as expected on the backing
> RBD image -- if the image is mapped on the host via KRBD and passed to
> QEMU as a
- On 25 Oct 24, at 16:31, Bob Gibson r...@oicr.on.ca wrote:
> Hi Frédéric,
>
>> I think this message shows up because this very specific post-adoption 'osd'
>> service has already been marked as 'deleted'. Maybe when you ran the command
>> for the first time.
>> The only reason it still sh
We're experiencing the same problem.
Disabling the balancer helps.
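For anyone else hitting this, the workaround is just the standard balancer commands;
a quick sketch:

  ceph balancer status     # check current mode and whether it is active
  ceph balancer off        # temporary workaround
  ceph balancer on         # re-enable once the underlying issue is resolved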
Hi, you might want to take a look here:
https://github.com/frans42/ceph-goodies/blob/main/doc/TuningScrub.md
Don't set max_scrubs > 1 on HDD OSDs; you will almost certainly regret it like
I did.
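To check and pin the value, assuming a release recent enough to use the central
config database (older clusters have to inject it per daemon instead), something like:

  ceph config get osd osd_max_scrubs      # show the current default for OSDs
  ceph config set osd osd_max_scrubs 1    # keep HDDs at one concurrent scrub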
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Hello,
ceph version is 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351)
nautilus (stable)
and the rbd info command is also slow, sometimes taking 6 seconds. rbd snap
create takes 17 seconds. Another cluster with the same configuration takes
less than 1 second.
cr
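To narrow down where the time goes, one thing worth trying on both clusters is
simply timing the calls (pool/image names below are placeholders):

  time rbd info mypool/myimage
  time rbd snap create mypool/myimage@timing-test
  time rbd snap rm mypool/myimage@timing-test    # clean up the test snapshot
  ceph -s                                        # any slow ops or laggy mons/OSDs reported?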
Hi Tobi,
Thanks for your response. While I hadn’t tried restarting the active mgr, I did
effectively accomplish the same result by failing it out with `ceph mgr fail`,
thereby starting a new mgr process in another container. I’ve since tried
restarting the active mgr, but it didn’t make any dif
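For completeness, the sequence was roughly the following (on recent releases
`ceph mgr fail` with no argument fails the currently active daemon; older releases
want the daemon name):

  ceph mgr stat    # note which mgr is active
  ceph mgr fail    # force a standby to take over
  ceph mgr stat    # confirm a different daemon is now active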
Hi,
This is my cluster:
  cluster:
    id:     c300532c-51fa-11ec-9a41-0050569c3b55
    health: HEALTH_WARN
            Degraded data redundancy: 2062374/1331064781 objects degraded (0.155%), 278 pgs degraded, 40 pgs undersized
            2497 pgs not deep-scrubbed in time
            2497 pgs
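To see which PGs are behind (rather than just the counts), something like the
following should work on a reasonably recent release:

  ceph health detail | grep 'not deep-scrubbed in time' | head -20
  ceph pg ls | head    # recent releases include SCRUB_STAMP / DEEP_SCRUB_STAMP columns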
Thanks Eugen. Now that you mention it, it was rather silly of me to attempt to
use the orchestrator to remove an unmanaged resource :-)
Your example for managing devices is very similar to what I’m trying to do, and
what has been working for us on other clusters. I’m using a separate osd spec
p
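For illustration, this is the shape of spec I mean, with hypothetical service id,
host pattern, and device filter:

  # osd-spec.yaml (hypothetical values) -- apply with: ceph orch apply -i osd-spec.yaml --dry-run
  service_type: osd
  service_id: hdd-osds
  placement:
    host_pattern: 'osd-*'
  spec:
    data_devices:
      rotational: 1     # spinning disks only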
Hi Frédéric,
> I think this message shows up because this very specific post-adoption 'osd'
> service has already been marked as 'deleted'. Maybe when you ran the command
> for the first time.
> The only reason it still shows up on 'ceph orch ls' is that 95 OSDs are still
> referencing this service
to unsubscribe 退订 (unsubscribe)
At 2024-10-25 15:57:03, "Friedrich Weber" wrote:
>Hi,
>
>Some of our Proxmox VE users have noticed that a large fstrim inside a
>QEMU/KVM guest does not free up as much space as expected on the backing
>RBD image -- if the image is mapped on the host via KRBD and passed to
>QEMU
Hi,
Some of our Proxmox VE users have noticed that a large fstrim inside a
QEMU/KVM guest does not free up as much space as expected on the backing
RBD image -- if the image is mapped on the host via KRBD and passed to
QEMU as a block device (checked via `rbd du --exact`). If the image is
attached
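A rough way to measure this by hand from the host side, assuming placeholder
pool/image names, the rbd kernel module, and a filesystem with discard support
(note this maps the image on the host rather than going through QEMU, so it only
approximates the guest scenario):

  rbd du --exact mypool/myimage        # usage before trimming
  DEV=$(rbd map mypool/myimage)        # map via KRBD; prints e.g. /dev/rbd0
  mount "$DEV" /mnt/test
  fstrim -v /mnt/test                  # issue discards
  umount /mnt/test && rbd unmap "$DEV"
  rbd du --exact mypool/myimage        # usage after trimming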
- On 23 Oct 24, at 20:14, Bob Gibson r...@oicr.on.ca wrote:
> Sorry to resurrect this thread, but while I was able to get the cluster
> healthy
> again by manually creating the osd, I'm still unable to manage osds using the
> orchestrator.
>
> The orchestrator is generally working, but I
Just chiming in to say this also affected our cluster, same symptoms and a
temporary fix of disabling the balancer. Happy to add my cluster's logs to the
issue, though I suspect they'll look the same as Laimis' cluster.
-Alex
MIT CSAIL