[ceph-users] Re: Question about PR merge

2024-04-17 Thread Xiubo Li
Hi Nigel, The logs you provided show a totally different issue: a deadlock between two MDSs over a rename request. I will continue working on it today and tomorrow. Erich's, meanwhile, looks most like the lock order issue I mentioned in previous mails, but I'm still waiting for the debug logs to confirm

[ceph-users] Re: Question about PR merge

2024-04-17 Thread Nigel Williams
Hi Xiubo, Is the issue we provided logs for the same as Erich's, or is that a third distinct locking issue? thanks, nigel. On Thu, 18 Apr 2024 at 12:29, Xiubo Li wrote: > > On 4/18/24 08:57, Erich Weiler wrote: > >> Have you already shared information about this issue? Please do if not. > > > > I

[ceph-users] Re: crushmap history

2024-04-17 Thread Blair Bethwaite
...shut down a MON in production right now to > compare if there are more committed versions or something. And > obviously, the result is not what I would usually expect from a > crushmap. I also injected a modified monmap to provoke a new version: > > # ceph osd setcrushmap -i 2024041

[ceph-users] Re: Client kernel crashes on cephfs access

2024-04-17 Thread Konstantin Shalygin
Hi Xiubo, It seems the patch has already landed in kernel 6.8.7, thanks! k Sent from my iPhone > On 18 Apr 2024, at 05:31, Xiubo Li wrote: > > Hi Konstantin, > > We have fixed it, please see > https://patchwork.kernel.org/project/ceph-devel/list/?series=842682&archive=both. > > - Xiubo

[ceph-users] Re: Client kernel crashes on cephfs access

2024-04-17 Thread Xiubo Li
Hi Konstantin, We have fixed it; please see https://patchwork.kernel.org/project/ceph-devel/list/?series=842682&archive=both. - Xiubo On 4/18/24 00:05, Konstantin Shalygin wrote: Hi, On 9 Apr 2024, at 04:07, Xiubo Li wrote: Thanks for reporting this, I generated one patch to fix it. Will

[ceph-users] Re: Question about PR merge

2024-04-17 Thread Xiubo Li
On 4/18/24 08:57, Erich Weiler wrote: Have you already shared information about this issue? Please do if not. I am working with Xiubo Li and providing debugging information - in progress! From the blocked ops output it looks very similar to the lock order issue Patrick fixed before. I
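For reference, that blocked ops output can be captured via the MDS ops tracker. A sketch, where mds.a is a placeholder for the actual daemon name:

# ceph daemon mds.a dump_blocked_ops    (on the MDS host, via the admin socket)
# ceph tell mds.a dump_blocked_ops      (remotely, on recent releases)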

[ceph-users] Re: Question about PR merge

2024-04-17 Thread Erich Weiler
Have you already shared information about this issue? Please do if not. I am working with Xiubo Li and providing debugging information - in progress! I was wondering if it would be included in 18.2.3, which I *think* should be released soon? Is there any way of knowing if that is true? Thi

[ceph-users] Re: Question about PR merge

2024-04-17 Thread Patrick Donnelly
On Wed, Apr 17, 2024 at 11:36 AM Erich Weiler wrote: > > Hello, > > We are tracking PR #56805: > > https://github.com/ceph/ceph/pull/56805 > > And the resolution of this item would potentially fix a pervasive and > ongoing issue that needs daily attention in our cephfs cluster. Have you already shared information about this issue? Please do if not.

[ceph-users] Status of Seastore and Crimson

2024-04-17 Thread R A
Hello, is there any estimate of when Seastore and Crimson will become production-ready? BR Reza

[ceph-users] Re: Client kernel crashes on cephfs access

2024-04-17 Thread Konstantin Shalygin
Hi, > On 9 Apr 2024, at 04:07, Xiubo Li wrote: > > Thanks for reporting this, I generated one patch to fix it. Will send it out > after testing is done. A trace from our users, but from a mainline kernel. Looks like the trace above: kernel: [ cut here ] kernel: list_add corr

[ceph-users] Question about PR merge

2024-04-17 Thread Erich Weiler
Hello, We are tracking PR #56805: https://github.com/ceph/ceph/pull/56805 And the resolution of this item would potentially fix a pervasive and ongoing issue that needs daily attention in our cephfs cluster. I was wondering if it would be included in 18.2.3, which I *think* should be released soon?

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-17 Thread Laura Flores
Hey all, I wanted to point out a message on the user list: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/KJXKJ2WR5PRQMWXIL33BKCSXJT7Q6VUA/ There are two PRs that were added later to the 18.2.3 milestone concerning Debian packaging: https://github.com/ceph/ceph/pulls?q=is%3Apr+i

[ceph-users] Re: cephadm custom jinja2 service templates

2024-04-17 Thread Frédéric Nass
Hello Felix, You can download haproxy.cfg.j2 and keepalived.conf.j2 from here [1], tweak them to your needs and set them via:

# ceph config-key set mgr/cephadm/services/ingress/haproxy.cfg -i haproxy.cfg.j2
# ceph config-key set mgr/cephadm/services/ingress/keepalived.conf -i keepalived.conf.j2
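For the ingress daemons to pick up the new templates, a redeploy is typically needed, and the keys can be removed to fall back to the built-in templates. A sketch, assuming an ingress service named ingress.rgw (a placeholder for whatever ceph orch ls reports):

# ceph orch redeploy ingress.rgw
# ceph config-key rm mgr/cephadm/services/ingress/haproxy.cfg     (revert to default)
# ceph config-key rm mgr/cephadm/services/ingress/keepalived.conf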

[ceph-users] cephadm custom jinja2 service templates

2024-04-17 Thread Stolte, Felix
Hi folks, I would like to use a custom jinja2 template for an ingress service for rendering the keepalived and haproxy configs. Can someone tell me how to override the default templates? Best regards Felix

[ceph-users] (deep-)scrubs blocked by backfill

2024-04-17 Thread Frank Schilder
Hi all, I have a technical question about scrub scheduling. I replaced a disk and it is backfilling slowly. We have set osd_scrub_during_recovery = true and still observe that scrub times continuously increase (the number of PGs not scrubbed in time is continuously increasing). Investigating the si
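For anyone investigating the same symptom, a sketch of how to inspect the relevant state (the grep pattern is approximate):

# ceph config get osd osd_scrub_during_recovery    (confirm the option is active)
# ceph health detail | grep -i scrub               (PGs flagged as not scrubbed in time)
# ceph pg dump pgs                                 (per-PG LAST_SCRUB / LAST_DEEP_SCRUB stamps)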

[ceph-users] Prevent users to create buckets

2024-04-17 Thread sinan
Hello, I am using Ceph RGW for S3. Is it possible to create (sub)users that cannot create/delete buckets and are limited to specific buckets? At the end, I want to create 3 separate users and for each user I want to create a bucket. The users should only have access to their own bucket and s
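Not an authoritative answer, but one common approach is to create each user with bucket creation disabled, create the bucket under an admin account with a normal S3 client, and then transfer ownership. A sketch, where the uid and bucket name are placeholders and the --max-buckets=0 semantics (0 = cannot create buckets) should be verified for your Ceph version:

# radosgw-admin user create --uid=user1 --display-name="User One" --max-buckets=0
(create bucket1 with any S3 client under an admin account, then:)
# radosgw-admin bucket link --bucket=bucket1 --uid=user1

A user that owns exactly one bucket and cannot create more is then confined to that bucket by default.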

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-17 Thread Venky Shankar
On Sat, Apr 13, 2024 at 12:08 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/65393#note-1 > Release Notes - TBD > LRC upgrade - TBD > > Seeking approvals/reviews for: > > smoke - infra issues, still trying, Laura PTL > > rados - Radek,

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-17 Thread Venky Shankar
Hi Yuri, On Tue, Apr 16, 2024 at 7:52 PM Yuri Weinstein wrote: > > And approval is needed for: > > fs - Venky approved? fs approved. Failures are: https://tracker.ceph.com/projects/cephfs/wiki/Reef#2024-04-17 > powercycle - seems fs related, Venky, Brad PTL > > On Mon, Apr 15, 2024 at 5:55 PM Y

[ceph-users] Re: Performance of volume size, not a block size

2024-04-17 Thread Mitsumasa KONDO
Hi Johansson-san, Thank you very much for your detailed explanation. I read some documents from the Ceph community, so I now have a general understanding. Thank you very much for all the useful advice. The characteristics of distributed storage seem to be quite complex, so I will investigate various things when I h

[ceph-users] Re: feature_map differs across mon_status

2024-04-17 Thread Eugen Block
Hi, without looking too deep into it, I would just assume that the daemons and clients are connected to different MONs. Or am I misunderstanding your question? Quoting Joel Davidow: Just curious why the feature_map portions differ in the return of mon_status across a cluster. Below i
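One quick way to check is to compare the cluster-wide aggregate with each MON's own view. A sketch, where mon.a is a placeholder and jq is optional:

# ceph features                                   (features/releases of all connected daemons and clients)
# ceph tell mon.a mon_status | jq .feature_map    (sessions connected to this particular MON)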

[ceph-users] Re: [EXTERN] cephFS on CentOS7

2024-04-17 Thread Dario Graña
Hi Dietmar, Thank you for your answer! I will test this approach. Regards! On Tue, Apr 16, 2024 at 11:12 AM Dietmar Rieder wrote: > Hello, > > we run a CentOS 7.9 client to access cephfs on a Ceph Reef (18.2.2) > cluster and it works just fine using the kernel client that comes with > CentOS 7
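For completeness, a minimal mount sketch using the legacy kernel-client syntax that the CentOS 7 kernel understands (monitor address, client name, and paths are placeholders; a client this old may lack newer CephFS features):

# mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=myclient,secretfile=/etc/ceph/myclient.secret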

[ceph-users] Re: crushmap history

2024-04-17 Thread Eugen Block
I don't have the option to shut down a MON in production right now to compare if there are more committed versions or something. And obviously, the result is not what I would usually expect from a crushmap. I also injected a modified monmap to provoke a new version: # ceph osd setcrushmap -i 2024041
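For anyone retracing this, a sketch of the surrounding crushmap workflow (file names are placeholders):

# ceph osd getcrushmap -o crush.bin       (dump the current crushmap)
# crushtool -d crush.bin -o crush.txt     (decompile for inspection or editing)
# crushtool -c crush.txt -o crush.new     (recompile after edits)
# ceph osd setcrushmap -i crush.new       (inject; this bumps the osdmap epoch)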