[ceph-users] Re: ceph quincy repo update to debian bookworm...?

2023-07-21 Thread Luke Hall
Ditto this query. I can't recall if there's a separate list for Debian packaging of Ceph or not. On 22/06/2023 15:25, Christian Peters wrote: Hi ceph users/maintainers, I installed ceph quincy on debian bullseye as a ceph client and now want to update to bookworm. I see that there is at the
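A minimal sketch of the repo switch being asked about, assuming download.ceph.com publishes a bookworm suite for quincy (which is exactly what the thread is trying to confirm; the package name below is only an example):

    # /etc/apt/sources.list.d/ceph.list -- hypothetical bookworm entry
    deb https://download.ceph.com/debian-quincy/ bookworm main

    # refresh and upgrade the client packages
    apt update && apt install --only-upgrade ceph-common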

[ceph-users] July Ceph Science Virtual User Group

2023-07-21 Thread Kevin Hrpcek
Hey all, We will be having a Ceph science/research/big cluster call on Wednesday July 26th. If anyone wants to discuss something specific they can add it to the pad linked below. If you have questions or comments you can contact me. This is an informal open call of community members mostly f

[ceph-users] Re: cephfs - unable to create new subvolume

2023-07-21 Thread Patrick Donnelly
Hello karon, On Fri, Jun 23, 2023 at 4:55 AM karon karon wrote: > > Hello, > > I have recently been using cephfs version 17.2.6. > I have a pool named "data" and a fs "kube". > It was working fine until a few days ago; now I can no longer create a new > subvolume, it gives me the following error: > >
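For reference, creating a subvolume on the fs named in the preview normally looks like this (the subvolume and group names below are placeholders, not taken from the thread):

    # create a subvolume on the fs "kube"; names are illustrative
    ceph fs subvolume create kube mysubvol --group_name mygroup
    # list existing subvolumes
    ceph fs subvolume ls kube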

[ceph-users] Re: what is the point of listing "auth: unable to find a keyring on /etc/ceph/ceph.client nfs-ganesha

2023-07-21 Thread Dhairya Parmar
Okay, then I'd suggest adding a keyring entry to the client section in ceph.conf; it is as simple as keyring = /keyring. I hope the client (that the logs complain about) is in the keyring file. Do let me know if that works for you; if not, some logs would be good to have to diagnose further. On Fri, Jul 21, 2023
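A sketch of the ceph.conf addition being suggested, assuming the client is client.nfs-ganesha (taken from the thread subject) and the keyring lives under /etc/ceph, neither of which is confirmed here:

    [client.nfs-ganesha]
        keyring = /etc/ceph/ceph.client.nfs-ganesha.keyring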

[ceph-users] Re: OSD tries (and fails) to scrub the same PGs over and over

2023-07-21 Thread Vladimir Brik
> what's the cluster status? Is there recovery or backfilling > going on? No. Everything is good except this PG is not getting scrubbed. Vlad On 7/21/23 01:41, Eugen Block wrote: Hi, what's the cluster status? Is there recovery or backfilling going on? Quoting Vladimir Brik: I have a P
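A few commands commonly used to inspect a PG that refuses to scrub; <pgid> and <id> are placeholders:

    # check last (deep-)scrub timestamps and the acting set
    ceph pg <pgid> query | grep -E 'last_(deep_)?scrub_stamp|acting'
    # request a scrub / deep scrub explicitly
    ceph pg scrub <pgid>
    ceph pg deep-scrub <pgid>
    # check scrub-related settings on the primary OSD
    ceph config show osd.<id> | grep scrub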

[ceph-users] Re: what is the point of listing "auth: unable to find a keyring on /etc/ceph/ceph.client nfs-ganesha

2023-07-21 Thread Marc
Hi Dhairya, Yes, I have the following in ceph.conf (only copied the lines below; there are more in these sections). I do not have a keyring path setting in ceph.conf:

    public network = a.b.c.111/24
    [mon]
    mon host = a.b.c.111,a.b.c.112,a.b.c.113
    [mon.a]
    mon addr = a.b.c.111
    [mon.b]
    mon addr = a.b.c.112
    [mon.
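For context, when no keyring path is set a client falls back to the stock search list (shown for a client named client.admin on a cluster named "ceph"; this is the option's default, not something from Marc's config):

    /etc/ceph/ceph.client.admin.keyring
    /etc/ceph/ceph.keyring
    /etc/ceph/keyring
    /etc/ceph/keyring.bin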

[ceph-users] Re: MDS cache is too large and crashes

2023-07-21 Thread Patrick Donnelly
Hello Sake, On Fri, Jul 21, 2023 at 3:43 AM Sake Ceph wrote: > > At 01:27 this morning I received the first email about "MDS cache is too large" > (alert mails are sent every 15 minutes while something is wrong). Looking into it, it > was again a standby-replay host which stopped working. > > At 01:00 a few
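The warning in this thread is tied to the MDS cache memory target; a hedged example of inspecting and raising it (the 8 GiB value is purely illustrative):

    # current MDS cache memory target (default is 4 GiB)
    ceph config get mds mds_cache_memory_limit
    # raise it to 8 GiB -- value chosen only as an example
    ceph config set mds mds_cache_memory_limit 8589934592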

[ceph-users] Re: what is the point of listing "auth: unable to find a keyring on /etc/ceph/ceph.client nfs-ganesha

2023-07-21 Thread Dhairya Parmar
Hi Marc, Can you confirm that the mon IP in ceph.conf is correct and public, and that the keyring path is specified correctly? Dhairya Parmar, Associate Software Engineer, CephFS, Red Hat Inc. dpar...@redhat.com On Thu, Jul 20, 2023 at 9:40 P
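One way to double-check both points, assuming the client in question is client.nfs-ganesha (taken from the thread subject):

    # re-export the client's key from the cluster to the path a client would expect
    ceph auth get client.nfs-ganesha -o /etc/ceph/ceph.client.nfs-ganesha.keyring
    # verify the mon address in ceph.conf is reachable with that identity
    ceph -s --name client.nfs-ganesha --keyring /etc/ceph/ceph.client.nfs-ganesha.keyring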

[ceph-users] Re: MDS stuck in rejoin

2023-07-21 Thread Xiubo Li
On 7/20/23 22:09, Frank Schilder wrote: Hi all, we had a client with the warning "[WRN] MDS_CLIENT_OLDEST_TID: 1 clients failing to advance oldest client/flush tid". I looked at the client and there was nothing going on, so I rebooted it. After the client was back, the message was still there
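For anyone hitting the same warning, the session behind it can usually be identified as follows; the mds name and session id are placeholders, and eviction is only a possible last resort, not necessarily what this thread settled on:

    # show which client session the health warning points at
    ceph health detail
    # list client sessions on the active MDS to find the offender
    ceph tell mds.<name> session ls
    # possible last resort: evict the stuck session by id
    ceph tell mds.<name> client evict id=<session id>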

[ceph-users] Re: MDS cache is too large and crashes

2023-07-21 Thread Marc
> > At 01:27 this morning I received the first email about "MDS cache is too > large" (alert mails are sent every 15 minutes while something is wrong). Looking > into it, it was again a standby-replay host which stopped working. > > At 01:00 a few rsync processes start in parallel on a client machine. > This

[ceph-users] MDS cache is too large and crashes

2023-07-21 Thread Sake Ceph
At 01:27 this morning I received the first email about "MDS cache is too large" (alert mails are sent every 15 minutes while something is wrong). Looking into it, it was again a standby-replay host which stopped working. At 01:00 a few rsync processes start in parallel on a client machine. This copies data
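Since the failing daemon here is a standby-replay MDS, one related knob is the filesystem's standby-replay setting; shown only as an illustration, with <fs_name> as a placeholder:

    # see which daemon is active and which is in standby-replay
    ceph fs status
    # disable or re-enable standby-replay for a filesystem
    ceph fs set <fs_name> allow_standby_replay false
    ceph fs set <fs_name> allow_standby_replay true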