[ceph-users] Re: centos9 or el9/rocky9

2024-10-25 Thread Tim Holloway
There is a certain virtue in using a firewall appliance for front-line protection. I think fail2ban could add IPs to its block list. An advantage of this is that you don't have to remember what all the internal servers are to firewall them individually. Certainly one could update firewall-cmd via
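For illustration (not from the original message), blocking a single offending address centrally with firewalld looks roughly like the lines below; the source address is a placeholder, and fail2ban can drive the same mechanism via its firewalld ban actions.

# reject one bad actor at the front-line firewall instead of touching every internal server
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.10" reject'
firewall-cmd --reload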

[ceph-users] Install on Debian Nobel on Arm64?

2024-10-25 Thread Daniel Brown
Think I’ve asked this before but — has anyone attempted to use a cephadm type install with Debian Nobel running on Arm64? Have tried both Reef and Squid, neither gets very far. Do I need to file a request for it? myhost-01:~/ceph$ uname -a Linux cube-man-01 6.8.0-1010-raspi #11-Ubuntu SMP
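For context, a minimal cephadm bootstrap on such a host would look roughly like the sketch below (the monitor IP is a placeholder, and it assumes cephadm itself is already installed from distro packages or download.ceph.com; whether the arm64 container images behave on this platform is exactly the open question in the message).

sudo cephadm bootstrap --mon-ip 192.0.2.1
sudo cephadm shell -- ceph orch host ls   # verify the orchestrator comes up at all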

[ceph-users] Re: pgs not deep-scrubbed in time and pgs not scrubbed in time

2024-10-25 Thread Fox, Kevin M
That is super useful. Thank you so much for sharing! :) Kevin From: Frank Schilder Sent: Friday, October 25, 2024 8:03 AM To: ceph-users@ceph.io Subject: [ceph-users] Re: pgs not deep-scrubbed in time and pgs not scrubbed in time Check twice before you

[ceph-users] Re: Ceph orchestrator not refreshing device list

2024-10-25 Thread Frédéric Nass
- On 25 Oct 24, at 18:21, Frédéric Nass frederic.n...@univ-lorraine.fr wrote: > - On 25 Oct 24, at 16:31, Bob Gibson r...@oicr.on.ca wrote: > >> Hi Frédéric, >> >>> I think this message shows up because this very specific post-adoption 'osd' >>> service >>> has already been marked as

[ceph-users] Re: KRBD: downside of setting alloc_size=4M for discard alignment?

2024-10-25 Thread Ilya Dryomov
On Fri, Oct 25, 2024 at 11:03 AM Friedrich Weber wrote: > > Hi, > > Some of our Proxmox VE users have noticed that a large fstrim inside a > QEMU/KVM guest does not free up as much space as expected on the backing > RBD image -- if the image is mapped on the host via KRBD and passed to > QEMU as a

[ceph-users] Re: Ceph orchestrator not refreshing device list

2024-10-25 Thread Frédéric Nass
- On 25 Oct 24, at 16:31, Bob Gibson r...@oicr.on.ca wrote: > Hi Frédéric, > >> I think this message shows up because this very specific post-adoption 'osd' >> service >> has already been marked as 'deleted'. Maybe when you ran the command for the >> first time. >> The only reason it still sh

[ceph-users] Re: Squid Manager Daemon: balancer crashing orchestrator and dashboard

2024-10-25 Thread Kristaps Čudars
Experiencing the same problem. Disabling balancer helps.
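For anyone hitting the same crash, the workaround mentioned above is a single stock command (verify the module state afterwards):

ceph balancer off
ceph balancer status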

[ceph-users] Re: pgs not deep-scrubbed in time and pgs not scrubbed in time

2024-10-25 Thread Frank Schilder
Hi, you might want to take a look here: https://github.com/frans42/ceph-goodies/blob/main/doc/TuningScrub.md Don't set max_scrubs > 1 on HDD OSDs; you will almost certainly regret it like I did. Best regards, Frank Schilder, AIT Risø Campus, Bygning 109, rum S14
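As a sketch of that warning (not taken from the linked document, and assuming the config-mask syntax of recent releases), checking the current value and pinning it to 1 for HDD-class OSDs would look like:

ceph config get osd osd_max_scrubs
ceph config set osd/class:hdd osd_max_scrubs 1   # device-class mask leaves SSD OSDs unaffected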

[ceph-users] The ceph monitor crashes every few days

2024-10-25 Thread 李明
Hello, ceph version is 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable), and the rbd info command is also slow; sometimes it needs 6 seconds. The rbd snap create command takes 17 seconds. There is another cluster with the same configuration that takes less than 1 second. cr
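Not from the original report, but a hedged way to quantify the slowness and peek at in-flight monitor operations (pool, image and snapshot names are placeholders; run the last command on a monitor host):

time rbd info mypool/myimage
time rbd snap create mypool/myimage@probe
ceph daemon mon.$(hostname -s) ops   # lists operations currently in flight on that monitor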

[ceph-users] Re: Ceph orchestrator not refreshing device list

2024-10-25 Thread Bob Gibson
Hi Tobi, Thanks for your response. While I hadn’t tried restarting the active mgr, I did effectively accomplish the same result by failing it out with `ceph mgr fail`, thereby starting a new mgr process in another container. I’ve since tried restarting the active mgr, but it didn’t make any dif
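For reference, a minimal sketch of the two operations being compared there (stock ceph CLI, assuming a recent release where `ceph mgr fail` without an argument fails over the active mgr):

ceph mgr fail                   # hand the active role over to a standby mgr
ceph orch device ls --refresh   # ask the orchestrator to rescan host devices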

[ceph-users] no recovery running

2024-10-25 Thread Joffrey
Hi, this is my cluster: cluster: id: c300532c-51fa-11ec-9a41-0050569c3b55 health: HEALTH_WARN Degraded data redundancy: 2062374/1331064781 objects degraded (0.155%), 278 pgs degraded, 40 pgs undersized 2497 pgs not deep-scrubbed in time 2497 pgs
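Not part of the original post, but the usual next checks when recovery appears stalled look roughly like this (a sketch):

ceph health detail | head -n 40
ceph osd dump | grep flags              # look for norecover/nobackfill/norebalance being set
ceph config get osd osd_max_backfills   # confirm backfill throttling isn't unexpectedly low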

[ceph-users] Re: Ceph orchestrator not refreshing device list

2024-10-25 Thread Bob Gibson
Thanks Eugen. Now that you mention it, it was rather silly of me to attempt to use the orchestrator to remove an unmanaged resource :-) Your example for managing devices is very similar to what I’m trying to do, and what has been working for us on other clusters. I’m using a separate osd spec p
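As an illustration of the kind of per-cluster OSD spec being described (the service_id, host_pattern and device filters below are hypothetical, not the poster's actual spec):

ceph orch apply -i - <<'EOF'
service_type: osd
service_id: example_hdd_with_ssd_db
placement:
  host_pattern: 'osd-*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF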

[ceph-users] Re: Ceph orchestrator not refreshing device list

2024-10-25 Thread Bob Gibson
Hi Frédéric, > I think this message shows up because this very specific post-adoption 'osd' > service has already been marked as 'deleted'. Maybe when you ran the command > for the first time. > The only reason it still shows up on 'ceph orch ls' is that 95 OSDs are still > referencing this service
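A hedged way to see which spec the existing OSD daemons still reference (field names as emitted by current cephadm; treat it as a sketch):

ceph orch ls osd --export
ceph orch ps --daemon-type osd --format yaml | grep service_name | sort | uniq -c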

[ceph-users] Re: KRBD: downside of setting alloc_size=4M for discard alignment?

2024-10-25 Thread 韩云林
to unsubscribe (退订, "unsubscribe") At 2024-10-25 15:57:03, "Friedrich Weber" wrote: >Hi, > >Some of our Proxmox VE users have noticed that a large fstrim inside a >QEMU/KVM guest does not free up as much space as expected on the backing >RBD image -- if the image is mapped on the host via KRBD and passed to >QEMU

[ceph-users] KRBD: downside of setting alloc_size=4M for discard alignment?

2024-10-25 Thread Friedrich Weber
Hi, Some of our Proxmox VE users have noticed that a large fstrim inside a QEMU/KVM guest does not free up as much space as expected on the backing RBD image -- if the image is mapped on the host via KRBD and passed to QEMU as a block device (checked via `rbd du --exact`). If the image is attached
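For readers wanting to reproduce the comparison, a sketch with placeholder pool/image names (4194304 bytes = 4 MiB; the krbd alloc_size map option and the exact sysfs path may vary by kernel version):

rbd device map --options alloc_size=4194304 mypool/myimage
cat /sys/block/rbd0/queue/discard_granularity   # what the block layer advertises for discards
rbd du --exact mypool/myimage                   # actual allocated size, e.g. before/after fstrim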

[ceph-users] Re: Ceph orchestrator not refreshing device list

2024-10-25 Thread Frédéric Nass
- On 23 Oct 24, at 20:14, Bob Gibson r...@oicr.on.ca wrote: > Sorry to resurrect this thread, but while I was able to get the cluster > healthy > again by manually creating the osd, I'm still unable to manage osds using the > orchestrator. > > The orchestrator is generally working, but I

[ceph-users] Re: Squid Manager Daemon: balancer crashing orchestrator and dashboard

2024-10-25 Thread Alexander Closs
Just chiming in to say this also affected our cluster, same symptoms and a temporary fix of disabling the balancer. Happy to add my cluster's logs to the issue, though I suspect they'll look the same as Laimis' cluster. -Alex MIT CSAIL