[ceph-users] Re: Setting up Hashicorp Vault for Encryption with Ceph

2024-04-16 Thread Stefan Kooman
On 15-04-2024 16:43, Michael Worsham wrote: Is there a how-to document available on how to setup Hashicorp's Vault for Ceph, preferably in a HA state? See [1] on how to do this on kubernetes. AFAIK there is no documentation / integration on using Vault with Cephadm / packages. Due to som
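For readers looking for the Ceph side once Vault itself is up, a minimal sketch of pointing RGW at a Vault KV engine for SSE-KMS (the Vault address and token file path below are placeholders, not values from this thread):

  ceph config set client.rgw rgw_crypt_s3_kms_backend vault
  ceph config set client.rgw rgw_crypt_vault_auth token
  ceph config set client.rgw rgw_crypt_vault_token_file /etc/ceph/vault.token
  ceph config set client.rgw rgw_crypt_vault_addr https://vault.example.com:8200
  ceph config set client.rgw rgw_crypt_vault_secret_engine kv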

[ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker

2024-04-16 Thread Eugen Block
Hi, I believe I can confirm your suspicion, I have a test cluster on Reef 18.2.1 and deployed nfs without HAProxy but with keepalived [1]. Stopping the active NFS daemon doesn't trigger anything, the MGR notices that it's stopped at some point, but nothing else seems to happen. I didn't wai
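For reference, the keepalived-only deployment being tested here can be created roughly like this (cluster name, placement and virtual IP are placeholders; see the NFS ingress documentation for the exact flags):

  ceph nfs cluster create mynfs "host1,host2" --ingress --virtual_ip 10.0.0.100/24 --ingress-mode keepalive-only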

[ceph-users] Re: [EXTERN] cephFS on CentOS7

2024-04-16 Thread Dietmar Rieder
Hello, we run a CentOS 7.9 client to access CephFS on a Ceph Reef (18.2.2) cluster and it works just fine using the kernel client that comes with CentOS 7.9 + updates. Best Dietmar On 4/15/24 16:17, Dario Graña wrote: Hello everyone! We deployed a platform with Ceph Quincy and now we nee
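As a rough illustration of the kernel-client mount referred to above (monitor address, client name and secret file are placeholders; the old CentOS 7 kernel uses the legacy mount syntax):

  mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=myclient,secretfile=/etc/ceph/myclient.secret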

[ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker

2024-04-16 Thread Robert Sander
Hi, On 16.04.24 10:49, Eugen Block wrote: I believe I can confirm your suspicion, I have a test cluster on Reef 18.2.1 and deployed nfs without HAProxy but with keepalived [1]. Stopping the active NFS daemon doesn't trigger anything, the MGR notices that it's stopped at some point, but nothin

[ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker

2024-04-16 Thread Eugen Block
Hm, no, I can't confirm it yet. I missed something in the config, the failover happens and a new nfs daemon is deployed on a different node. But I still see client interruptions so I'm gonna look into that first. Zitat von Eugen Block : Hi, I believe I can confirm your suspicion, I have a

[ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker

2024-04-16 Thread Eugen Block
Ah, okay, thanks for the hint. In that case what I see is expected. Zitat von Robert Sander : Hi, On 16.04.24 10:49, Eugen Block wrote: I believe I can confirm your suspicion, I have a test cluster on Reef 18.2.1 and deployed nfs without HAProxy but with keepalived [1]. Stopping the active

[ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker

2024-04-16 Thread Frank Schilder
Question about HA here: I understood the documentation of the fuse NFS client such that the connection state of all NFS clients is stored on ceph in rados objects and, if using a floating IP, the NFS clients should just recover from a short network timeout. Not sure if this is what should happe
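For anyone wanting to inspect that state: assuming a cephadm-deployed NFS cluster, the Ganesha configuration and recovery objects normally live in the .nfs pool under a namespace named after the cluster, so something like the following should list them (pool and namespace here are the cephadm defaults, not confirmed for this particular setup):

  rados -p .nfs --namespace mynfs ls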

[ceph-users] Announcing go-ceph v0.27.0

2024-04-16 Thread John Mulligan
We are happy to announce another release of the go-ceph API library. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.27.0 The library includes bindings that aim to play a similar role to the "pybind" python bindings in the
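For anyone wanting to try the new release, pulling the tag into an existing Go module is the usual (this assumes a Go toolchain plus the librados/librbd development headers the bindings build against):

  go get github.com/ceph/go-ceph@v0.27.0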

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-16 Thread Yuri Weinstein
And approval is needed for: fs - Venky approved? powercycle - seems fs related, Venky, Brad PTL On Mon, Apr 15, 2024 at 5:55 PM Yuri Weinstein wrote: > > Still waiting for approvals: > > rados - Radek, Laura approved? Travis? Nizamudeen? > > ceph-volume issue was fixed by https://github.com/cep

[ceph-users] Re: [EXTERN] Re: Ceph 16.2.x mon compactions, disk writes

2024-04-16 Thread Dietmar Rieder
Hi Zakhar, hello List, I just wanted to follow up on this and ask a few questions: Did you notice any downsides with your compression settings so far? Do you have all mons now on compression? Did release updates go through without issues? Do you know if this also works with Reef (we see massiv

[ceph-users] Re: [EXTERN] Re: Ceph 16.2.x mon compactions, disk writes

2024-04-16 Thread Zakhar Kirpichenko
Hi, >Did you notice any downsides with your compression settings so far? None, at least on our systems. Except the part that I haven't found a way to make the settings persist. >Do you have all mons now on compression? I have 3 out of 5 monitors with compression and 2 without it. The 2 monitor
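For context, the option being tuned in this thread is the mons' RocksDB option string; a sketch of what the runtime change looks like (the option value here is illustrative, not Zakhar's exact settings):

  ceph config set mon mon_rocksdb_options "write_buffer_size=33554432,compression=kLZ4Compression"

As discussed below, making this survive mon redeployments under cephadm is the tricky part.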

[ceph-users] Re: [EXTERN] Re: Ceph 16.2.x mon compactions, disk writes

2024-04-16 Thread Eugen Block
You can use the extra container arguments I pointed out a few months ago. Those work in my test clusters, although I haven’t enabled that in production yet. But it shouldn’t make a difference if it’s a test cluster or not. 😉 Zitat von Zakhar Kirpichenko : Hi, Did you notice any downsid

[ceph-users] Re: [EXTERN] Re: Ceph 16.2.x mon compactions, disk writes

2024-04-16 Thread Eugen Block
Sorry, I meant extra-entrypoint-arguments: https://www.spinics.net/lists/ceph-users/msg79251.html Zitat von Eugen Block : You can use the extra container arguments I pointed out a few months ago. Those work in my test clusters, although I haven’t enabled that in production yet. But it shoul
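The linked message describes passing such an option through the mon service spec; a rough sketch of what that might look like (hostnames and the RocksDB option string are illustrative):

  service_type: mon
  placement:
    hosts:
      - mon1
      - mon2
      - mon3
  extra_entrypoint_args:
    - "--mon-rocksdb-options=write_buffer_size=33554432,compression=kLZ4Compression"

applied with "ceph orch apply -i mon.yaml", ideally one mon at a time as suggested further down the thread.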

[ceph-users] Ceph Community Management Update

2024-04-16 Thread Josh Durgin
Hi everyone, I’d like to extend a warm thank you to Mike Perez for his years of service as community manager for Ceph. He is now changing focus to engineering. The Ceph Foundation board decided to use services from the Linux Foundation to fulfill some community management responsibilities, rath

[ceph-users] Re: [EXTERN] Re: Ceph 16.2.x mon compactions, disk writes

2024-04-16 Thread Zakhar Kirpichenko
I remember that I found the part which said "if something goes wrong, monitors will fail" rather discouraging :-) /Z On Tue, 16 Apr 2024 at 18:59, Eugen Block wrote: > Sorry, I meant extra-entrypoint-arguments: > > https://www.spinics.net/lists/ceph-users/msg79251.html > > Zitat von Eugen Block

[ceph-users] Re: [EXTERN] Re: Ceph 16.2.x mon compactions, disk writes

2024-04-16 Thread Eugen Block
I understand, I just wanted to point out that you should be careful (as you always should be with critical services like MONs). If you apply it to one mon first, you will see whether it works or not. Then apply it to the other ones. Zitat von Zakhar Kirpichenko : I remember that I found the part which said

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-16 Thread Laura Flores
On behalf of @Radoslaw Zarzynski , rados approved. Below is the summary of the rados suite failures, divided by component. @Adam King @Venky Shankar PTAL at the orch and cephfs failures to see if they are blockers. Failures, unrelated: RADOS: 1. https://tracker.ceph.com/issues/65183 - Over

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-16 Thread Venky Shankar
On Tue, Apr 16, 2024 at 7:52 PM Yuri Weinstein wrote: > > And approval is needed for: > > fs - Venky approved? Could not get to this today. Will be done tomorrow. > powercycle - seems fs related, Venky, Brad PTL > > On Mon, Apr 15, 2024 at 5:55 PM Yuri Weinstein wrote: > > > > Still waiting for

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-16 Thread Adam King
> > Orchestrator: > 1. https://tracker.ceph.com/issues/64208 - test_cephadm.sh: Container > version mismatch causes job to fail. > Not a blocker issue. Just a problem with the test itself that will be fixed by https://github.com/ceph/ceph/pull/56714 O

[ceph-users] Re: Ceph User Dev Meeting next week: Ceph Users Feedback Survey Results

2024-04-16 Thread Neha Ojha

[ceph-users] Re: Ceph Leadership Team Meeting, 2024-04-08

2024-04-16 Thread Satoru Takeuchi
On Tue, Apr 9, 2024 at 8:06 Laura Flores wrote: > I've added them! > I have a question about the role of GitHub milestones. The two PRs in the v18.2.3 milestone are essential for Debian package users, as I described before. Are these PRs release blockers for the v18.2.3 release? Or is the merging work done as best-e

[ceph-users] How to make config changes stick for MDS?

2024-04-16 Thread Erich Weiler
Hi All, I'm having a crazy time getting config items to stick on my MDS daemons. I'm running Reef 18.2.1 on RHEL 9 and the daemons are running in podman, I used cephadm to deploy the daemons. I can adjust the config items in runtime, like so: ceph tell mds.slugfs.pr-md-01.xdtppo config set
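For anyone hitting the same thing: "ceph tell" only changes the running daemon, while values stored in the centralized config database persist across restarts. A minimal sketch, with an illustrative option and value:

  ceph config set mds mds_cache_memory_limit 17179869184
  ceph config get mds mds_cache_memory_limit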

[ceph-users] Re: How to make config changes stick for MDS?

2024-04-16 Thread Stefan Kooman
On 17-04-2024 05:23, Erich Weiler wrote: Hi All, I'm having a crazy time getting config items to stick on my MDS daemons.  I'm running Reef 18.2.1 on RHEL 9 and the daemons are running in podman, I used cephadm to deploy the daemons. I can adjust the config items in runtime, like so: ceph

[ceph-users] Re: [EXTERN] Re: Ceph 16.2.x mon compactions, disk writes

2024-04-16 Thread Dietmar Rieder
Hi Zakhar, thanks so much for the information. Best D.Rieder On 4/16/24 17:40, Zakhar Kirpichenko wrote: Hi, >Did you noticed any downsides with your compression settings so far? None, at least on our systems. Except the part that I haven't found a way to make the settings persist. >D