[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-20 Thread Venky Shankar
Hi Yuri, On Fri, Oct 20, 2023 at 9:44 AM Venky Shankar wrote: > > Hi Yuri, > > On Thu, Oct 19, 2023 at 10:48 PM Venky Shankar wrote: > > > > Hi Yuri, > > > > On Thu, Oct 19, 2023 at 9:32 PM Yuri Weinstein wrote: > > > > > > We are still finishing off: > > > > > > - revert PR https://github.com/

[ceph-users] Re: ATTN: DOCS rgw bucket pubsub notification.

2023-10-20 Thread Zac Dover
Artem, Thanks for this, but I don't quite understand what should be changed. Your suggested change is present in the documentation as of e25a7e86e9ca5acc11171d536e386277a783760d. Zac Dover Upstream Docs Ceph Foundation --- Original Message --- On Saturday, October 21st, 2023 at 1:13

[ceph-users] ATTN: DOCS rgw bucket pubsub notification.

2023-10-20 Thread Artem Torubarov
Hi, The latest documentation says that "pulling and acking of events stored in Ceph (as an internal destination)" is a valid source for S3 notifications. If I got it right, pull-based subscription

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-20 Thread Patrick Begou
Hi all, git bisect has just finished and shows:

4fc6bc394dffaf3ad375ff29cbb0a3eb9e4dbefc is the first bad commit
commit 4fc6bc394dffaf3ad375ff29cbb0a3eb9e4dbefc
Author: Zack Cerza
Date:   Tue May 17 11:29:02 2022 -0600

    ceph-volume: Optionally consume loop devices

    A similar proposal was

[ceph-users] Re: Ceph 16.2.14: OSDs randomly crash in bstore_kv_sync

2023-10-20 Thread Zakhar Kirpichenko
Thank you, Igor. I was just reading the detailed list of changes for 16.2.14, as I suspected that we might not be able to go back to the previous minor release :-) Thanks again for the suggestions, we'll consider our options. /Z On Fri, 20 Oct 2023 at 16:08, Igor Fedotov wrote: > Zakhar, > > my

[ceph-users] Re: Ceph 16.2.14: OSDs randomly crash in bstore_kv_sync

2023-10-20 Thread Igor Fedotov
Zakhar, my general concern about downgrading to previous versions is that this procedure is generally neither assumed nor tested by the dev team, although it is possible most of the time. But in this specific case it is not doable due to (at least) https://github.com/ceph/ceph/pull/52212, which enable

[ceph-users] Re: Ceph 16.2.14: OSDs randomly crash in bstore_kv_sync

2023-10-20 Thread Tyler Stachecki
On Fri, Oct 20, 2023, 8:51 AM Zakhar Kirpichenko wrote: > > We would consider upgrading, but unfortunately our OpenStack Wallaby is > holding us back as its Cinder doesn't support Ceph 17.x, so we're stuck > with having to find a solution for Ceph 16.x. > Wallaby is also quite old at this time..

[ceph-users] Re: Ceph 16.2.14: OSDs randomly crash in bstore_kv_sync

2023-10-20 Thread Zakhar Kirpichenko
Thanks, Tyler. I appreciate what you're saying, though I can't fully agree: 16.2.13 didn't have crashing OSDs, so the crashes in 16.2.14 seem like a regression - please correct me if I'm wrong. If it is indeed a regression, then I'm not sure that suggesting an upgrade is the right thing to do in th

[ceph-users] Re: Ceph 16.2.14: OSDs randomly crash in bstore_kv_sync

2023-10-20 Thread Tyler Stachecki
On Fri, Oct 20, 2023, 8:11 AM Zakhar Kirpichenko wrote: > Thank you, Igor. > > It is somewhat disappointing that fixing this bug in Pacific has such a low > priority, considering its impact on existing clusters. > Unfortunately, the hard truth here is that Pacific (stable) was released over 30 m

[ceph-users] Re: Ceph 16.2.14: OSDs randomly crash in bstore_kv_sync

2023-10-20 Thread Zakhar Kirpichenko
Thank you, Igor. It is somewhat disappointing that fixing this bug in Pacific has such a low priority, considering its impact on existing clusters. The document attached to the PR explicitly says about `level_compaction_dynamic_level_bytes` that "enabling it on an existing DB requires special cau
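For operators trying to gauge their exposure, a rough sketch like the one below could help check whether level_compaction_dynamic_level_bytes shows up in the cluster's RocksDB option string. It assumes the option is surfaced through bluestore_rocksdb_options, which may not hold for every release or deployment, so treat it as a starting point rather than a definitive check.

# Hypothetical helper: look for level_compaction_dynamic_level_bytes in the
# cluster-wide bluestore_rocksdb_options string. Assumes the ceph CLI is
# available and that the option is exposed there (on some releases it may be
# applied through a different mechanism).
import subprocess

def rocksdb_options() -> str:
    out = subprocess.run(
        ["ceph", "config", "get", "osd", "bluestore_rocksdb_options"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

opts = rocksdb_options()
print(opts)
print("dynamic level bytes enabled:",
      "level_compaction_dynamic_level_bytes=true" in opts)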

[ceph-users] Re: Ceph 16.2.14: OSDs randomly crash in bstore_kv_sync

2023-10-20 Thread Igor Fedotov
Hi Zakhar, We definitely expect one more (and apparently the last) Pacific minor release. There is no specific date yet though - the plans are to release Quincy and Reef minor releases prior to it. Hopefully that will be done before Christmas/New Year. Meanwhile you might want to work around the

[ceph-users] Re: fixing future rctime

2023-10-20 Thread David C.
(Re) Hi Arnaud, Work by Mouratidis Theofilos (closed/not merged): https://github.com/ceph/ceph/pull/37938 Maybe ask him if he found a trick. Best regards, *David CASIER* On Fri.

[ceph-users] Re: fixing future rctime

2023-10-20 Thread David C.
Someone correct me if I'm saying something stupid, but from what I see in the code, there is a check each time to make sure rctime doesn't go back, which seems logical to me because otherwise you would have to go through all the children to determine the correct ctime. I don't have the impression t
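To illustrate the point, a minimal sketch of that invariant (just the idea, not the actual MDS code) could look like this:

# Minimal sketch of the "rctime never goes back" idea, not the real MDS logic:
# rctime is the most recent ctime anywhere in the subtree, so propagation
# only ever replaces it with a larger value.

def propagate_rctime(current_rctime: float, child_ctime: float) -> float:
    return max(current_rctime, child_ctime)

rctime = 1697760000.0                            # ~2023-10-20
rctime = propagate_rctime(rctime, 2212164000.0)  # a file stamped ~2040-02-06
rctime = propagate_rctime(rctime, 1697800000.0)  # later, sane ctimes...
print(rctime)  # ...still the 2040 value: the future timestamp "sticks"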

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-20 Thread 544463199
Hi Johan, The OS that I use is CentOS 8.3. The output of the ceph-volume inventory command in Ceph 17.2.5 is empty, but ceph orch daemon add osd can still add OSDs. Hope this helps.

[ceph-users] fixing future rctime

2023-10-20 Thread MARTEL Arnaud
Hi all, I am having some trouble with my backup script because there are a few files, in a deep sub-directory, with a creation/modification date in the future (for example: 2040-02-06 18:00:00). As my script uses the ceph.dir.rctime extended attribute to identify the files and directories to backup,
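For reference, a minimal sketch of that kind of rctime check (assuming a CephFS mount at /mnt/cephfs and that whole-second precision is enough; paths and timestamps are illustrative) could look like this. A future timestamp anywhere in the subtree keeps pushing rctime past any recent backup time, which is exactly the problem described above.

# Minimal sketch: decide whether to descend into a directory based on
# ceph.dir.rctime. The mount point and previous-run timestamp are illustrative.
import os

def rctime_seconds(path: str) -> int:
    # ceph.dir.rctime is exposed as a "seconds.nanoseconds"-style string;
    # the integer seconds are enough for a coarse comparison here.
    raw = os.getxattr(path, "ceph.dir.rctime").decode()
    return int(raw.split(".")[0])

last_backup = 1697760000  # timestamp of the previous run (~2023-10-20)
path = "/mnt/cephfs/projects"
if rctime_seconds(path) > last_backup:
    print(f"{path}: subtree changed since last backup (or holds a future timestamp)")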