Hi everyone.
A little while ago I added a new node with some HDDs to my cluster.
The cluster is currently doing the remapping and backfill.
Now I am getting a warning about
HEALTH_WARN 1 pgs not deep-scrubbed in time
So I checked and found something a little weird.
root@cthulhu1:~# ceph config get osd osd_deep_sc
I assume that the OSD still had some backfill going on and was therefore
waiting for the scheduled deep-scrub to start. There are config options
which would allow deep-scrubs during recovery, I believe, but if it's not
a real issue, you can leave it as is.
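If you do want to allow scrubs while recovery is still running, something like this should work (just a sketch, not tested on your cluster; remember to revert it once backfill is done):
ceph config set osd osd_scrub_during_recovery true
# revert after backfill finishes:
ceph config set osd osd_scrub_during_recovery false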
Quoting Albert Shih:
On 20/09/2024 at 11:01:20+0200, Albert Shih wrote:
Hi,
>
> >
> > > Is there any way to find which PG ceph status is talking about?
> >
> > 'ceph health detail' will show you which PG it's warning about.
>
> Too easy for me ;-) ;-)...Thanks ;-)
>
> >
> > > Is there any way to see
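For the archives, this is roughly what that looks like (the PG id below is just a placeholder):
ceph health detail | grep -i 'not deep-scrubbed'
# optionally trigger the deep-scrub manually for that PG:
ceph pg deep-scrub 2.1f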
On 20/09/2024 at 08:00:16, Eugen Block wrote:
Hi,
>
> there's some ratio involved when deep-scrubs are checked:
>
> (mon_warn_pg_not_deep_scrubbed_ratio * deep_scrub_interval) +
> deep_scrub_interval
>
> So based on the defaults, ceph would only warn if the last deep-scrub
> timestamp is
Stefan, Anthony,
Anthony's sequence of commands to reclassify the root failed with errors,
so I have tried to look a little deeper.
I can now see the extra root via 'ceph osd crush tree --show-shadow'.
Looking at the decompiled crush tree, I can also see the extra root:
root default {
id
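For reference, this is roughly how I pulled the decompiled map (a sketch; the file names are just examples):
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# then inspect crushmap.txt for the extra root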
Hello all
I tried configuring CephFS mirroring between clusters as per the doc
https://docs.ceph.com/en/reef/dev/cephfs-mirroring/ using bootstrap,
but my replication is not working.
I am using two mirror daemons on each cluster. [Enabled the mirroring
module / snapshot mirroring / added the directory / allow_new_snaps]
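In case it helps to compare, these are the steps I ran on the source cluster as far as I remember (a sketch; the fs name "cephfs" and the directory path are placeholders):
ceph mgr module enable mirroring
ceph fs snapshot mirror enable cephfs
# import the bootstrap token created on the target cluster:
ceph fs snapshot mirror peer_bootstrap import cephfs <token-from-target>
ceph fs snapshot mirror add cephfs /some/dir
ceph fs set cephfs allow_new_snaps true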
Hi,
We are using 17.2.7 currently.
FYI I tried the --fix command from a newer version and it crashes instantly.
> podman run -it --rm -v /etc/ceph:/etc/ceph:ro quay.io/ceph/ceph:v18.2.4
> /bin/bash
> [root@7f786047ee20 /]# radosgw-admin bucket check --check-objects
> --bucket mimir-prod --fix
>
Well, it was pasted from a local cluster, meant as a guide, not to be run
literally.
> On Sep 20, 2024, at 12:48 PM, Dave Hall wrote:
>
> Stefan, Anthony,
>
> Anthony's sequence of commands to reclassify the root failed with errors, so
> I have tried to look a little deeper.
>
> I can now see
Hi,
We have a multisite Ceph configuration with http (not https) sync
endpoints. Is all sync traffic in plain text?
We have concerns about the metadata. For example, when syncing a newly
created user and its access key and secret key from the master zone to a
secondary zone, are the keys also in plain text?
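For context, this is how we inspect the configured endpoints (a sketch; the zone name and URL are just examples):
radosgw-admin zonegroup get
# switching a zone to https endpoints would look roughly like this:
radosgw-admin zone modify --rgw-zone=secondary --endpoints=https://rgw.example.com:443
radosgw-admin period update --commit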
Oddly, the Nautilus cluster that I'm gradually decommissioning seems to
have the same shadow root pattern in its crush map. I don't know if that
really means anything, but at least I know it's not something I did
differently when I set up the new Reef cluster.
-Dave
--
Dave Hall
Binghamton University
Hi,
there's some ratio involved when deep-scrubs are checked:
(mon_warn_pg_not_deep_scrubbed_ratio * deep_scrub_interval) +
deep_scrub_interval
So based on the defaults, ceph would only warn if the last deep-scrub
timestamp is older than:
(0.75 * 7 days) + 7 days = 12.25 days
Note that
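If you want to check the values involved on your cluster, these are the two options I mean (a quick sketch):
ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio
ceph config get osd osd_deep_scrub_interval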
Hi Reid,
Only the metadata / index side. "invalid_multipart_entries" relates to
multipart index entries that no longer have a corresponding .meta index
entry, i.e. the entry listing all parts of a multipart upload.
The --fix should have removed these multipart index entries from the bucket
index.
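As far as I know, running the check without --fix only reports and doesn't change anything, e.g. (same bucket as above):
radosgw-admin bucket check --bucket mimir-prod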