[ceph-users] Re: squid 19.1.0 RC QE validation status

2024-07-02 Thread Ilya Dryomov
On Tue, Jul 2, 2024 at 9:13 PM Laura Flores wrote: > The rados suite, upgrade suite, and powercycle are approved by RADOS. > > Failures are summarized here: > https://tracker.ceph.com/projects/rados/wiki/SQUID#Squid-1910 > > @Ilya Dryomov, please see the upgrade/reef-x suite, > which had this RB

[ceph-users] Re: squid 19.1.0 RC QE validation status

2024-07-02 Thread Laura Flores
The rados suite, upgrade suite, and powercycle are approved by RADOS. Failures are summarized here: https://tracker.ceph.com/projects/rados/wiki/SQUID#Squid-1910 @Ilya Dryomov, please see the upgrade/reef-x suite, which had this RBD failure: - https://tracker.ceph.com/issues/63131 - TestMigr

[ceph-users] Re: squid 19.1.0 RC QE validation status

2024-07-02 Thread Ilya Dryomov
On Mon, Jul 1, 2024 at 8:41 PM Ilya Dryomov wrote: > > On Mon, Jul 1, 2024 at 4:24 PM Yuri Weinstein wrote: > > > > Details of this release are summarized here: > > > > https://tracker.ceph.com/issues/66756#note-1 > > > > Release Notes - TBD > > LRC upgrade - TBD > > > > (Reruns were not done yet

[ceph-users] Ceph 16.2 vs 18.2 use case Docker/Swarm LXC

2024-07-02 Thread filip Mutterer
Could I be missing any significant features by using Ceph 16.2 instead of 18.2 with Docker/Swarm or LXC? I am asking because I am struggling to set it up in version 18.2 due to confusion with the Debian 12 packages, as some still stay at version 16.2 even after adding the sources for 1
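A minimal sketch, not taken from the thread itself: Debian 12 "bookworm" ships Ceph 16.2 in its own archive, so 18.2 packages normally come from the upstream download.ceph.com repository instead (assuming upstream publishes bookworm builds for the release you want). An apt source entry for that would look roughly like:

    # /etc/apt/sources.list.d/ceph.list -- upstream Reef (18.2) packages for Debian 12 "bookworm"
    deb https://download.ceph.com/debian-reef/ bookworm main

with the Ceph release key imported from https://download.ceph.com/keys/release.asc before running apt update.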

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2024-07-02 Thread Anthony D'Atri
This was common in the NFS days, and some Linux distributions deliberately slewed the execution time. find over an NFS mount was a sure-fire way to horque the server. (e.g. Convex C1) IMHO, since the tool relies on a static index it isn't very useful, and I routinely remove any variant from my sys

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2024-07-02 Thread Olli Rajala
Hi - mostly as a note to future me and to anyone else looking into the same issue... I finally solved this a couple of months ago. No idea what is wrong with Ceph, but the root cause that was triggering this MDS issue was that I had several workstations and a couple of servers where the updatedb of "lo
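A minimal sketch of the kind of fix implied here, assuming the mlocate/plocate updatedb.conf format and an example mount point of /mnt/cephfs (the exact filesystem type strings can be checked in the output of mount):

    # /etc/updatedb.conf (excerpt) -- keep updatedb/locate indexing off CephFS
    # "ceph" covers kernel-client mounts, "fuse.ceph-fuse" covers ceph-fuse mounts
    PRUNEFS = "NFS nfs nfs4 ceph fuse.ceph-fuse"
    # or prune the mount path itself (this path is only an example)
    PRUNEPATHS = "/tmp /var/spool /media /mnt/cephfs"

Pruning the CephFS mounts this way keeps every client from crawling the whole tree on updatedb's schedule, which is what was generating the constant write I/O against the MDS and its metadata pool.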

[ceph-users] Re: reef 18.2.3 QE validation status

2024-07-02 Thread Yuri Weinstein
After fixing the issues identified below, we cherry-picked all PRs from this list for 18.2.3: https://pad.ceph.com/p/release-cherry-pick-coordination. The question to the dev leads: do you think we can proceed with the release without rerunning suites, as they were already approved? Please reply wi

[ceph-users] Re: [EXTERN] Urgent help with degraded filesystem needed

2024-07-02 Thread Stefan Kooman
Hi Venky, On 02-07-2024 09:45, Venky Shankar wrote: Hi Stefan, On Mon, Jul 1, 2024 at 2:30 PM Stefan Kooman wrote: Hi Dietmar, On 29-06-2024 10:50, Dietmar Rieder wrote: Hi all, finally we were able to repair the filesystem and it seems that we did not lose any data. Thanks for all sugges

[ceph-users] Re: squid 19.1.0 RC QE validation status

2024-07-02 Thread Matan Breizman
crimson-rados approved. Failure fixes were backported to `squid` branch. Thanks, Matan On Mon, Jul 1, 2024 at 5:23 PM Yuri Weinstein wrote: > Details of this release are summarized here: > > https://tracker.ceph.com/issues/66756#note-1 > > Release Notes - TBD > LRC upgrade - TBD > > (Reruns wer

[ceph-users] Ceph Leadership Team Meeting, 2024-07-01

2024-07-02 Thread Ernesto Puerta
Hi Cephers, These are the topics that we discussed in our meeting: - [cbodley] rgw tech lead transitioning to Eric Ivancich and Adam Emerson - You may send your congrats to them! (offline) - [Zac] - last week's unfinished business - https://github.com/ceph/ceph/pull/58092 - [Zac

[ceph-users] Re: [EXTERN] Urgent help with degraded filesystem needed

2024-07-02 Thread Venky Shankar
Hi Stefan, On Mon, Jul 1, 2024 at 2:30 PM Stefan Kooman wrote: > > Hi Dietmar, > > On 29-06-2024 10:50, Dietmar Rieder wrote: > > Hi all, > > > > finally we were able to repair the filesystem and it seems that we did > > not lose any data. Thanks for all suggestions and comments. > > > > Here is