[ceph-users] Re: Test Cluster / Performance Degradation After Adding Private Network

2025-07-18 Thread David Rivera
This sounds like a network configuration issue to me. The fact that you mention ssh'ing into the nodes or running apt-get is slow sounds like DNS timeouts. Make sure that you only have an IP address and subnet configured on your cluster network interface (no gateway or DNS). On Fri, Jul 18, 2025, 16:
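As a rough sketch (assuming NetworkManager with a connection named cluster-net for the cluster interface; the name and addresses are made up), the cluster-facing connection would carry only an address and prefix, with gateway and DNS left empty:

    nmcli con mod cluster-net ipv4.method manual ipv4.addresses 192.168.10.11/24
    nmcli con mod cluster-net ipv4.gateway "" ipv4.dns "" ipv4.never-default yes
    nmcli con up cluster-net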

[ceph-users] Re: EC Profiles & DR

2023-12-05 Thread David Rivera
The first problem here is that you are using crush-failure-domain=osd when you should use crush-failure-domain=host. With three hosts, you should use k=2, m=1; this is not recommended in a production environment. On Mon, Dec 4, 2023, 23:26 duluxoz wrote: > Hi All, > > Looking for some help/explanation aro
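For reference, a host-level failure-domain profile along those lines might look like this (the profile and pool names are just examples):

    ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
    ceph osd pool create ecpool erasure ec-2-1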

[ceph-users] Re: Newer linux kernel cephfs clients is more trouble?

2022-05-11 Thread David Rivera
Hi, My experience is similar; I was also using elrepo kernels on CentOS 8. Kernels 5.14+ were causing problems, so I had to go back to 5.11. I did not test 5.12-5.13. I did not have enough time to narrow down the system instability to Ceph. Currently, I'm using the included Rocky Linux 8 kernels (4.1

[ceph-users] Re: zap an osd and it appears again

2022-04-26 Thread David Rivera
Hi, We currently remove drives without --zap if we do not want them to be automatically re-added. After full removal from the cluster, or when adding new drives, we set `ceph orch pause` to be able to work on the drives without Ceph interfering. To add the drives we resume the background orche
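Roughly, the workflow looks like this (the OSD id 12 is just a placeholder):

    ceph orch pause            # stop the orchestrator from redeploying OSDs
    ceph orch osd rm 12        # remove without --zap so the drive is not picked up again
    ceph orch osd rm status    # watch removal progress
    # ...swap or service the drives...
    ceph orch resume           # let the orchestrator take over again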

[ceph-users] Re: Cephfs + inotify

2021-10-08 Thread David Rivera
I see. This is true; I did monitor for changes on all clients involved. On Fri, Oct 8, 2021, 12:27 Daniel Poelzleithner wrote: > On 08/10/2021 21:19, David Rivera wrote: > > > I've used inotify against a kernel mount a few months back. Worked fine > for > > me if I re

[ceph-users] Re: Cephfs + inotify

2021-10-08 Thread David Rivera
I've used inotify against a kernel mount a few months back. Worked fine for me if I recall correctly. On Fri, Oct 8, 2021, 08:20 Sean wrote: > I don’t think this is possible, since CephFS is a network mounted > filesystem. The inotify feature requires the kernel to be aware of file > system cha
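For what it's worth, a minimal way to try this on a kernel mount (the path is hypothetical; requires inotify-tools) is something like:

    inotifywait -m -r -e create,modify,delete /mnt/cephfs/shared

with the usual caveat from this thread that only changes made through the local mount are guaranteed to generate events.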

[ceph-users] Re: ceph fs authorization changed?

2021-09-01 Thread David Rivera
Hi Alexander, I have a Ceph 16.2.5 cluster and it is working correctly with the cephfs tag. Is your CephFS filesystem named "filesystem"? You can grab the name using `ceph fs ls`. -David On Wed, Sep 1, 2021, 00:32 Kiotsoukis, Alexander wrote: > Sorry, correction. > > I now have to use > > ceph a
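Assuming the filesystem really is named "filesystem" and using a made-up client name, the authorization would be along the lines of:

    ceph fs ls                                     # confirm the filesystem name
    ceph fs authorize filesystem client.alex / rw  # generated caps use the cephfs tag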

[ceph-users] Re: fixing future rctimes

2021-05-07 Thread David Rivera
Hi guys, Did anyone ever figure out how to fix rctime? I had a directory that was robocopied from a Windows host and contained files with modified times in the future. Now the directory tree up to the root will not update rctime. Thanks, David
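For anyone looking into this, a sketch of how to inspect rctime and reset the future mtimes (the path is hypothetical; this resets mtimes but is not a confirmed fix for an rctime that has already jumped ahead):

    getfattr -n ceph.dir.rctime /mnt/cephfs/data           # show the recursive ctime
    find /mnt/cephfs/data -newermt now -exec touch {} +    # reset future mtimes to now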

[ceph-users] Re: Issues upgrading Ceph from 15.2.8 to 15.2.10

2021-03-24 Thread David Rivera
Hi Julian, You are most likely running into this same issue: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/QNR2XRZPEYKANMUJLI4KYQGWAQEDJNSX/ It is podman 2.2 related. I ran into this using CentOS 8.3 and decided to move to CentOS Stream to be able to upgrade the cluster. David
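A quick way to check whether a node is on the affected release (per the linked thread) is simply:

    podman --version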