Hi,
it sounds like the mds container_image is not configured properly. You
can set it via:
ceph config set mds container_image quay.io/ceph/ceph:v18.2.2
or just set it globally for all ceph daemons:
ceph config set global container_image quay.io/ceph/ceph:v18.2.2
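To double-check the effective value afterwards, something like this should work:
ceph config get mds container_image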
If you bootstrap a fresh c
Hi,
if you assigned the SSD to be used for block.db, it won't be available
as a data device from the orchestrator's point of view. What you could try
is to manually create a partition or LV on the remaining SSD space and
then point the service spec to that partition/LV via path spec. I
haven't
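For reference, a minimal (untested) sketch of such a spec; service id,
hostname and LV path are just placeholders, adjust to your environment:
service_type: osd
service_id: osd-on-ssd-lv
placement:
  hosts:
    - host1
spec:
  data_devices:
    paths:
      - /dev/vg_ssd/lv_osd
and then apply it with
ceph orch apply -i osd-spec.yaml
I'd try it on a single host first before rolling it out everywhere.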
Hi,
can you verify whether all images are readable? Maybe there's a corrupt
journal for one of the images and reading it fails? Just a wild guess, I
can't really interpret the stack trace. Or are there some images
without journaling enabled or something? Are there some logs
available, maybe even deb
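To check whether journaling is enabled on a given image (pool/image names
are placeholders), something like
rbd info <pool>/<image>
should list the enabled features, and for journaled images
rbd journal info --pool <pool> --image <image>
might give a hint whether the journal itself looks healthy.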
Hi,
have you tried updating the label and the fsid in the osd's data directory?
ceph-bluestore-tool set-label-key --path /var/lib/ceph/osd/ceph-0 -k
ceph_fsid -v
And then you'll also need to change /var/lib/ceph/osd/ceph-0/ceph_fsid
to reflect the desired fsid. It's been a while since I h
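To inspect the current label before changing anything (assuming the usual
block symlink inside the OSD data directory), something like this should show
the stored ceph_fsid:
ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block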
First, I would restart the active mgr; the current status might be
outdated, which I've seen many times. If the pg is still in remapped
state, you'll need to provide a lot more information about your
cluster, the current osd tree, ceph status, the applied crush rule
etc. One possible root
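For a start, something along these lines (generic commands, nothing specific
to your setup assumed):
ceph mgr fail
ceph -s
ceph osd tree
ceph pg <pgid> query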
Hi,
When upgrading a cephadm-deployed Quincy cluster to Reef, no
ceph-exporter service will be launched.
Being new in reef (from release notes: ceph-exporter: Now the
performance metrics for Ceph daemons are exported by ceph-exporter,
which deploys on each daemon rather than using prom
Hi,
is your cluster managed by cephadm? You refer to the manual
procedure in the docs, but that probably describes the pre-cephadm
times when you had to use ceph-volume directly. If your cluster is
managed by cephadm I wouldn't intervene manually when the orchestrator
can help
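Without knowing exactly which procedure you're following, the orchestrator
equivalents are usually along these lines (host and device names are just
examples):
ceph orch device ls
ceph orch daemon add osd host1:/dev/sdb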
Hi Justin,
You should be able to delete inodes from the lost+found dirs simply by running
`sudo rm -rf lost+found/`
What do you get when you try to delete? Do you get `EROFS`?
On Fri, Aug 2, 2024 at 8:42 AM Justin Lee wrote:
> After we updated our ceph cluster from 17.2.7 to 18.2.0 the MDS kept bein
Hi,
I haven't seen that one yet. Can you show the output from these commands?
ceph orch client-keyring ls
ceph orch client-keyring set client.admin label:_admin
Is there anything helpful in the mgr log?
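To watch what cephadm is doing live while you retry, this can be handy:
ceph -W cephadm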
Quoting "Alex Hussein-Kershaw (HE/HIM)":
Hi,
I'm hitting an issue doing an offline in
Hello,
Not sure this exactly matches your case, but you could try to reindex those
orphan objects with 'radosgw-admin object reindex --bucket {bucket_name}'. See
[1] for command arguments, like realm, zonegroup, zone, etc.
This command scans the data pool for objects that belong to a given bucket
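A full invocation might look roughly like this (bucket and multisite names
are placeholders; see [1] for the exact set of supported arguments):
radosgw-admin object reindex --bucket mybucket --rgw-realm myrealm --rgw-zone myzone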
So the mount hung? Can you see anything suspicious in the logs?
On Fri, Aug 2, 2024 at 7:17 PM Justin Lee wrote:
> Hi Dhairya,
>
> Thanks for the response! We tried removing it as you suggested with `rm
> -rf` but the command just hangs indefinitely with no output. We are also
> unable to `ls lo
You might want to try my "bringing up an OSD really, really fast"
package (https://gogs.mousetech.com/mtsinc7/instant_osd).
It's actually for spinning up a VM with an OSD in it, although you can
skip the VM setup script if you're on a bare OS and just run the
Ansible part.
Apologies for anyone
The thing that stands out to me from that output is that the image has no
repo_digests. It's possible cephadm is expecting there to be digests and is
crashing out trying to grab them for this image. I think it's worth a try
to set mgr/cephadm/use_repo_digest to false, and then restart the mgr. FWI
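That would be along the lines of:
ceph config set mgr mgr/cephadm/use_repo_digest false
ceph mgr fail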
ceph-exporter should get deployed by default with new installations on
recent versions, but as a general principle we've avoided adding/removing
services from the cluster during an upgrade. There is perhaps a case for
this service in particular if the user also has the rest of the monitoring
stack
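If you do want ceph-exporter on an upgraded cluster, it can usually be added
explicitly afterwards (worth double-checking against the Reef cephadm docs):
ceph orch apply ceph-exporter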