[ceph-users] Re: mds damaged with preallocated inodes that are inconsistent with inotable

2024-08-07 Thread Venky Shankar
On Thu, Aug 8, 2024 at 12:41 AM zxcs wrote: > Hi experts, we are running CephFS v16.2.* with multiple active MDS. Currently we are hitting an “fs cephfs mds.* is damaged” error, and this MDS always complains > “client *** loaded with preallocated inodes that are inconsistent wi

[ceph-users] Re: Please guide us in identifying the cause of the data miss in EC pool

2024-08-07 Thread Frédéric Nass
Hi Chulin, Are you 100% sure that 494, 1169 and 1057 (the OSDs that did not restart) were in the acting set at the exact moment the power outage occurred? I'm asking because min_size 6 would have allowed the data to be written to as few as 6 OSDs, possibly all of them among those that crashed. Best, Frédéric.
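
For reference, one way to check a PG's current up/acting set and its recent peering history would be something like this (the pg id 2.1f is just a placeholder):

  # current up/acting OSDs for the placement group
  ceph pg map 2.1f
  # full PG state, including the recovery/peering history around the outage
  ceph pg 2.1f query | less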

[ceph-users] Re: RGW sync gets stuck every day

2024-08-07 Thread Eugen Block
Hi, Redeploying stuff seems like much too big a hammer to get things going again; surely there must be something more reasonable? Wouldn't a restart suffice? Do you see anything in 'radosgw-admin sync error list'? Maybe an error prevents the sync from continuing? Quoting Olaf Seibe
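
Roughly, the checks and the lighter-weight restart being suggested (the rgw service name is a placeholder):

  # look for recorded sync failures and the overall replication state
  radosgw-admin sync error list
  radosgw-admin sync status
  # restart the RGW daemons instead of redeploying them, on a cephadm-managed cluster
  ceph orch restart rgw.myzone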

[ceph-users] Re: Can you return orphaned objects to a bucket?

2024-08-07 Thread Frédéric Nass
Hi, You're right. The object reindex subcommand backport was rejected for Pacific and is still pending for Quincy and Reef. [1] Use the rgw-restore-bucket-index script instead. Regards, Frédéric. [1] https://tracker.ceph.com/issues/61405
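
A rough sketch of using that script (exact flags vary between releases, and the bucket/pool names below are placeholders, so check --help on your version first):

  rgw-restore-bucket-index --help
  # rebuild the bucket index entries from the objects found in the data pool
  rgw-restore-bucket-index --proceed mybucket default.rgw.buckets.data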

[ceph-users] Any way to put the rate limit on rbd flatten operation?

2024-08-07 Thread Henry lol
Hello, AFAIK, massive rx/tx occurs on the client side for the flatten operation, so I want to control the network rate limit or predict the network bandwidth it will consume. Is there any way to do that?
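
One knob that may indirectly throttle a flatten (an assumption, not a verified rate limit) is lowering the number of parallel management operations the client runs, for example:

  # fewer concurrent object copies during flatten/remove/resize (the value 2 is just an example)
  ceph config set client rbd_concurrent_management_ops 2
  # pool/image names are placeholders
  rbd flatten mypool/myclone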

[ceph-users] mds damaged with preallocated inodes that are inconsistent with inotable

2024-08-07 Thread zxcs
Hi experts, we are running CephFS v16.2.* with multiple active MDS. Currently we are hitting an “fs cephfs mds.* is damaged” error, and this MDS always complains “client *** loaded with preallocated inodes that are inconsistent with inotable”, and the MDS always suicides during replay
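
Before any repair attempt, the usual first diagnostics for a damaged rank look something like this (daemon and filesystem names are placeholders):

  ceph health detail
  ceph fs status cephfs
  # if an MDS daemon for the rank is still running, list its recorded damage
  ceph tell mds.ceph-node1 damage ls
  # only once the underlying problem is resolved can the rank be marked repaired:
  # ceph mds repaired cephfs:0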

[ceph-users] Re: Cephadm: unable to copy ceph.conf.new

2024-08-07 Thread Adam King
It might be worth trying to manually upgrade one of the mgr daemons: go to the host with a mgr and edit /var/lib/ceph/<fsid>/<mgr daemon>/unit.run so that the image specified in the long podman/docker run command in there is the 17.2.7 image, then just restart its systemd unit (don't tell the orchestrator
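
Roughly, on the mgr host, that manual step would look like this (fsid, daemon name and image tag are placeholders):

  # point the long podman/docker run line at the new image, e.g. quay.io/ceph/ceph:v17.2.7
  vi /var/lib/ceph/<fsid>/mgr.<daemon-name>/unit.run
  # restart only this daemon's systemd unit, bypassing the orchestrator
  systemctl restart ceph-<fsid>@mgr.<daemon-name>.service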

[ceph-users] Multi-Site sync error with multipart objects: Resource deadlock avoided

2024-08-07 Thread Tino Lehnig
Hi, We've been trying to set up multi-site sync on two test VMs before rolling things out on actual production hardware. Both are running Ceph 18.2.4 deployed via cephadm. Host OS is Debian 12, container runtime is podman (switched from Debian 11 and docker.io, same error there). There is only
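
For context, the sync state being discussed comes from commands like these (the bucket name is a placeholder):

  # overall replication state between the zones
  radosgw-admin sync status
  # per-bucket detail for a bucket holding the affected multipart objects
  radosgw-admin bucket sync status --bucket=mybucket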

[ceph-users] Re: Pull failed on cluster upgrade

2024-08-07 Thread Nicola Mori
Thank you Konstantin, as was foreseeable this problem didn't hit just me. So I hope the build of images based on CentOS Stream 8 will be resumed. Otherwise I'll try to build one myself. Nicola
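
In case it helps while official images are unavailable: a bare-bones, package-based image built on Rocky Linux 8 should be usable by cephadm, since cephadm launches daemons with an explicit --entrypoint rather than relying on the image's own entrypoint scripts. The following is only a sketch full of assumptions; the Ceph release (18.2.4 here), the repo URL, and the need for EPEL/PowerTools should all be adjusted to the running cluster:

  # Containerfile (sketch)
  FROM rockylinux:8
  RUN printf '[ceph]\nname=Ceph\nbaseurl=https://download.ceph.com/rpm-18.2.4/el8/x86_64/\nenabled=1\ngpgcheck=1\ngpgkey=https://download.ceph.com/keys/release.asc\n' > /etc/yum.repos.d/ceph.repo \
   && dnf install -y epel-release dnf-plugins-core \
   && dnf config-manager --set-enabled powertools \
   && dnf install -y ceph \
   && dnf clean all

Built with podman build, pushed to a registry the cluster can reach, it could then be used via ceph orch upgrade start --image <that image>.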

[ceph-users] Re: Cephadm: unable to copy ceph.conf.new

2024-08-07 Thread Eugen Block
And are any of the hosts shown as offline in the 'ceph orch host ls' output? Is this the first upgrade you're attempting, or did previous upgrades work with the current config? Quoting Magnus Larsen: Hi, sorry! Fixed. The configuration is as follows: root@management-node1 # cat /etc/sudo
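
For reference, those checks (the hostname is a placeholder):

  # hosts flagged as offline show up here
  ceph orch host ls
  # verify that cephadm can reach and use a given host
  ceph cephadm check-host management-node1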

[ceph-users] Re: Cephadm: unable to copy ceph.conf.new

2024-08-07 Thread Magnus Larsen
Hi, Sorry! Fixed. The configuration is as follows: root@management-node1 # cat /etc/sudoers.d/ceph ceph ALL=(ALL) NOPASSWD: ALL So... no restrictions :^)

[ceph-users] Re: Cephadm: unable to copy ceph.conf.new

2024-08-07 Thread Eugen Block
Hi, please don't drop the ML from your response. Is this the first upgrade you're attempting or did previous upgrades work with the current config? I wonder if you can generate a new ssh configuration for the root user, and then use that to upgrade to the fixed version. The permissions will th
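
A sketch of regenerating the cephadm SSH identity for the root user (the key distribution step will differ per site):

  ceph cephadm generate-key
  ceph cephadm get-pub-key > ~/cephadm.pub
  # append ~/cephadm.pub to /root/.ssh/authorized_keys on every cluster host, e.g.:
  # ssh-copy-id -f -i ~/cephadm.pub root@<host>
  ceph cephadm set-user root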

[ceph-users] Re: [EXTERN] Re: Pull failed on cluster upgrade

2024-08-07 Thread Dietmar Rieder
On 8/7/24 09:40, Konstantin Shalygin wrote: Hi, On 7 Aug 2024, at 10:31, Nicola Mori wrote: Unfortunately I'm on bare metal, with very old hardware so I cannot do much. I'd try to build a Ceph image based on Rocky Linux 8 if I could get the Dockerfile of the current image to start with, but

[ceph-users] Re: Pull failed on cluster upgrade

2024-08-07 Thread Konstantin Shalygin
Hi, > On 7 Aug 2024, at 10:31, Nicola Mori wrote: > > Unfortunately I'm on bare metal, with very old hardware so I cannot do much. > I'd try to build a Ceph image based on Rocky Linux 8 if I could get the > Dockerfile of the current image to start with, but I've not been able to find > it. Ca

[ceph-users] Re: Pull failed on cluster upgrade

2024-08-07 Thread Nicola Mori
Unfortunately I'm on bare metal, with very old hardware, so I cannot do much. I'd try to build a Ceph image based on Rocky Linux 8 if I could get the Dockerfile of the current image to start with, but I've not been able to find it. Can you please help me with this? Cheers, Nicola

[ceph-users] Re: Cephadm: unable to copy ceph.conf.new

2024-08-07 Thread Eugen Block
Hi, I commented on a similar issue a couple of months ago: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/IQX2VXA6QQQPEZQ7GU3QY2WPHAIVPIUN/ Can you check if that applies to your cluster? Quoting Magnus Larsen: Hi Ceph-users! Ceph version: ceph version 17.2.6 (d7ff0d10654