On Thu, Aug 8, 2024 at 12:41 AM zxcs wrote:
>
> Hi, experts,
>
> We are running a CephFS cluster on v16.2.* with multiple active MDS daemons. Currently we
> are hitting an "fs cephfs mds.* is damaged" error, and this MDS always complains
>
>
> “client *** loaded with preallocated inodes that are inconsistent wi
Hi Chulin,
Are you 100% sure that 494, 1169 and 1057 (the OSDs that did not restart) were in the
acting set at the exact moment the power outage occurred?
I'm asking because min_size 6 would have allowed the data to be written to
only the 6 OSDs that eventually crashed.
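For reference, a quick way to double-check those values (the pool name and PG id below
are placeholders, not taken from the thread):

    # replication/EC requirements of the affected pool
    ceph osd pool get <pool-name> size
    ceph osd pool get <pool-name> min_size

    # current mapping and full state (including recent intervals) of a PG
    ceph pg map <pg-id>
    ceph pg <pg-id> query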
Bests,
Frédéric.
__
Hi,
Redeploying stuff seems like much too big a hammer to get things
going again. Surely there must be something more reasonable?
Wouldn't a restart suffice?
Do you see anything in the 'radosgw-admin sync error list' output? Maybe an
error is preventing the sync from continuing?
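For reference, a minimal check sequence (standard subcommands, no extra options
assumed):

    # overall multisite sync state as seen from this zone
    radosgw-admin sync status

    # errors recorded by the sync process, as mentioned above
    radosgw-admin sync error list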
Quoting Olaf Seibe
Hi,
You're right. The object reindex subcommand backport was rejected for Pacific and is
still pending for Quincy and Reef. [1]
Use the rgw-restore-bucket-index script instead.
Regards,
Frédéric.
[1] https://tracker.ceph.com/issues/61405
From: vuphun...@gmail.com
Sent: Wednesday
Hello,
AFAIK, massive rx/tx occurs on the client side during the flatten operation,
so I want to limit the network rate or predict the network
bandwidth it will consume.
Is there any way to do that?
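Not a true rate limit, but one knob that may be relevant (assuming it also applies to
flatten, which I believe it does) is the client-side option
rbd_concurrent_management_ops, which caps how many management operations run in
parallel and therefore indirectly throttles the traffic:

    # example only: reduce parallelism of rbd management operations for clients
    ceph config set client rbd_concurrent_management_ops 2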
___
Hi, experts,
We are running a CephFS cluster on v16.2.* with multiple active MDS daemons. Currently we
are hitting an "fs cephfs mds.* is damaged" error, and this MDS always complains:
“client *** loaded with preallocated inodes that are inconsistent with
inotable”
and the MDS always suicides during replay.
It might be worth trying to manually upgrade one of the mgr daemons. Go to
the host with a mgr and edit /var/lib/ceph/<fsid>/<mgr-daemon>/unit.run so
that the image specified in the long podman/docker run command in there is
the 17.2.7 image, then just restart its systemd unit (don't tell the orchestrator
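A rough sketch of that procedure (the fsid, daemon name and image tags below are
placeholders, not taken from the thread):

    # on the host running the mgr; substitute your cluster fsid and mgr daemon name
    FSID=11111111-2222-3333-4444-555555555555
    DAEMON=mgr.host1.abcdef

    # point the container image referenced in unit.run at the 17.2.7 build
    sed -i 's|ceph/ceph:v17.2.6|ceph/ceph:v17.2.7|' /var/lib/ceph/$FSID/$DAEMON/unit.run

    # restart the daemon directly via systemd, bypassing the orchestrator
    systemctl restart ceph-$FSID@$DAEMON.service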
Hi,
We've been trying to set up multi-site sync on two test VMs before rolling
things out on actual production hardware. Both are running Ceph 18.2.4 deployed
via cephadm. Host OS is Debian 12, container runtime is podman (switched from
Debian 11 and docker.io, same error there). There is only
Thank you Konstantin. As was foreseeable, this problem didn't hit just
me, so I hope the build of images based on CentOS Stream 8 will be
resumed. Otherwise I'll try to build one myself.
Nicola
___
And are any of the hosts shown as offline in the 'ceph orch host ls' output?
Is this the first upgrade you're attempting or did previous upgrades
work with the current config?
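For reference, a few orchestrator-side checks (the hostname is a placeholder):

    # hosts and their status as seen by the orchestrator; offline hosts show up here
    ceph orch host ls

    # per-host connectivity/prerequisite check performed by cephadm
    ceph cephadm check-host <hostname>

    # current state of the upgrade itself
    ceph orch upgrade status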
Quoting Magnus Larsen:
Hi,
Sorry! Fixed.
The configuration is as follows:
root@management-node1 # cat /etc/sudo
Hi,
Sorry! Fixed.
The configuration is as follows:
root@management-node1 # cat /etc/sudoers.d/ceph
ceph ALL=(ALL) NOPASSWD: ALL
So.. no restrictions :^)
From: Eugen Block
Sent: 7 August 2024 10:38
To: Magnus Larsen
Cc: ceph-users@ceph.io
Subject: Re: Re: [
Hi,
please don't drop the ML from your response.
Is this the first upgrade you're attempting or did previous upgrades
work with the current config?
I wonder if you can generate a new SSH configuration for the root user
and then use that to upgrade to the fixed version.
The permissions will th
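A minimal sketch of how that could look (assuming the default cephadm SSH setup; the
hostname is a placeholder):

    # generate a fresh key pair for cephadm
    ceph cephadm generate-key

    # export the new public key and push it to root on every host
    ceph cephadm get-pub-key > ceph.pub
    ssh-copy-id -f -i ceph.pub root@<host>

    # make the orchestrator connect as root
    ceph cephadm set-user root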
On 8/7/24 09:40, Konstantin Shalygin wrote:
Hi,
On 7 Aug 2024, at 10:31, Nicola Mori wrote:
Unfortunately I'm on bare metal, with very old hardware so I cannot do much.
I'd try to build a Ceph image based on Rocky Linux 8 if I could get the
Dockerfile of the current image to start with, but
Hi,
> On 7 Aug 2024, at 10:31, Nicola Mori wrote:
>
> Unfortunately I'm on bare metal, with very old hardware so I cannot do much.
> I'd try to build a Ceph image based on Rocky Linux 8 if I could get the
> Dockerfile of the current image to start with, but I've not been able to find
> it. Ca
Unfortunately I'm on bare metal with very old hardware, so I cannot do
much. I'd try to build a Ceph image based on Rocky Linux 8 if I could
get the Dockerfile of the current image to start with, but I haven't been
able to find it. Can you please help me with this?
Cheers,
Nicola
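A minimal sketch of what such a Dockerfile could look like, assuming the upstream el8
packages from download.ceph.com install cleanly on Rocky Linux 8 (the Ceph version and
repo settings below are examples only, and gpgcheck is disabled just to keep the
sketch short):

    FROM rockylinux:8

    # example: enable the upstream Ceph el8 repository and install the packages
    RUN dnf install -y epel-release && \
        printf '[ceph]\nname=Ceph\nbaseurl=https://download.ceph.com/rpm-18.2.4/el8/$basearch\nenabled=1\ngpgcheck=0\n' \
            > /etc/yum.repos.d/ceph.repo && \
        dnf install -y ceph && \
        dnf clean all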
Hi,
I commented on a similar issue a couple of months ago:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/IQX2VXA6QQQPEZQ7GU3QY2WPHAIVPIUN/
Can you check if that applies to your cluster?
Quoting Magnus Larsen:
Hi Ceph-users!
Ceph version: ceph version 17.2.6
(d7ff0d10654