[ceph-users] Re: RGW Data Loss Bug in Octopus 15.2.0 through 15.2.6

2024-03-27 Thread xu chenhui
Hi, Eric Ivancich, I have a similar problem in ceph version 16.2.5. Has this problem been completely resolved in the Pacific release? Our bucket has no lifecycle rules and no copy operations. This is a very serious data loss issue for us, and it happens occasionally in our environment. Detail desc

[ceph-users] Re: RGW Data Loss Bug in Octopus 15.2.0 through 15.2.6

2024-04-02 Thread xu chenhui
Jonas Nemeiksis wrote: > Hello, > > Maybe your issue is related to this: https://tracker.ceph.com/issues/63642 > > > > On Wed, Mar 27, 2024 at 7:31 PM xu chenhui <xuchenhuig(a)gmail.com> > wrote: > > > Hi, Eric Ivancich > >I have similar pro

[ceph-users] Re: A couple OSDs not starting after host reboot

2024-04-04 Thread xu chenhui
Hi, has there been any progress on this issue? Is there a quick recovery method? I have the same problem: the first 4 KiB block of the OSD metadata is invalid. Recreating the OSD would come at a heavy cost. Thanks.

[ceph-users] Re: A couple OSDs not starting after host reboot

2024-04-05 Thread xu chenhui
Hi, Igor, thank you for providing the repair procedure. I will try it when I am back at my workstation. Can you suggest any possible causes of this problem? ceph version: v16.2.5. error info: systemd[1]: Started Ceph osd.307 for 02eac9e0-d147-11ee-95de-f0b2b90ee048. bash[39068]: Running comman

[ceph-users] Re: A couple OSDs not starting after host reboot

2024-04-11 Thread xu chenhui
Igor Fedotov wrote: > Hi chenhui, > > there is still work in progress to support multiple labels to avoid > the issue (https://github.com/ceph/ceph/pull/55374). But this is of > little help for your current case. > > If your disk is fine (meaning it's able to read/write the block at offset 0) >
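
A minimal sketch (not from the thread) of the kind of raw check Igor describes, i.e. confirming that the first 4 KiB of the OSD's main block device is readable and still starts with something that looks like a BlueStore label. The device path and the magic string below are assumptions, not details from the thread; ceph-bluestore-tool show-label --dev <path> remains the proper way to inspect the label.

#!/usr/bin/env python3
# Hedged sketch: read the first 4 KiB of an OSD's main block device and check
# whether it still begins with the assumed BlueStore label magic.
import sys

dev = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/ceph/osd/ceph-307/block"  # example path
label_magic = b"bluestore block device"  # assumed label prefix at offset 0

with open(dev, "rb") as f:
    head = f.read(4096)  # the BlueStore label is kept in the first 4 KiB

if head.startswith(label_magic):
    print("label magic present at offset 0; the label may still be parseable")
else:
    print("first 4 KiB does not look like a BlueStore label; it may have been overwritten")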

[ceph-users] rgw crash on thread notif-worker0

2025-05-16 Thread xu chenhui
Hi all, RGW crashes on thread notif-worker0 when using the RGW persistent notification function in ceph version v16.2.13. I haven't found the root cause. Backtrace: 0> 2025-05-13T13:36:45.435+ 7f0813c64700 -1 *** Caught signal (Aborted) ** in thread 7f0813c64700 thread_name:notif-wor
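
For context, a persistent notification topic is normally created through RGW's SNS-compatible API with the persistent attribute set, roughly as in the sketch below (boto3; the endpoint URL, credentials, topic name and push-endpoint are placeholders, not values from this report).

# Hedged sketch: creating a persistent bucket-notification topic on RGW via
# its SNS-compatible API. All endpoints, credentials and names are placeholders.
import boto3

sns = boto3.client(
    "sns",
    endpoint_url="http://rgw.example.com:8080",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    region_name="default",
)

resp = sns.create_topic(
    Name="example-topic",
    Attributes={
        "push-endpoint": "http://receiver.example.com:8080/notify",  # placeholder receiver
        "persistent": "true",  # queue events and deliver them asynchronously
    },
)
print("TopicArn:", resp["TopicArn"])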

[ceph-users] Re: rgw crash on thread notif-worker0

2025-05-18 Thread xu chenhui
xu chenhui wrote: > Hi all, > > RGW crashes on thread notif-worker0 when using the RGW persistent notification > function in > ceph version v16.2.13. I haven't found the root cause. > > Backtrace: > > 0> 2025-05-13T13:36:45.435+ 7f0813c64700 -1

[ceph-users] rgw crash in RGWDeleteMultiObj::handle_individual_object for bucket notification reserve

2025-05-23 Thread xu chenhui
Hi, with RGW bucket notification enabled for an http push_endpoint, radosgw crashes when executing a delete-multiple-objects request. ceph version: v16.2.13 Thread 483 "radosgw" received signal SIGABRT, Aborted. [Switching to Thread 0x7f94c0ac1700 (LWP 500)] 0x7f95b86f2acf in raise () from /lib64/libc.so.6 (gdb)
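
To illustrate the reported scenario (a bucket notification bound to an http push-endpoint topic, followed by a multi-object delete), a minimal boto3 sketch could look like the following; the bucket name, object keys, topic ARN and endpoint are placeholders, not values from this report.

# Hedged sketch of the reported reproduction path: bind ObjectRemoved events
# to an existing http push-endpoint topic, then issue a multi-object delete.
# All names, keys and the topic ARN are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "Id": "delete-events",
            "TopicArn": "arn:aws:sns:default::example-topic",  # placeholder ARN
            "Events": ["s3:ObjectRemoved:*"],
        }]
    },
)

# The delete-multiple-objects request that the report says triggers the abort.
s3.delete_objects(
    Bucket="example-bucket",
    Delete={"Objects": [{"Key": "obj-1"}, {"Key": "obj-2"}]},
)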