Hi,
> On 30 Jul 2024, at 00:51, Christopher Durham wrote:
>
> I see that 18.2.4 is out, in rpm for el9 at:
> http://download.ceph.com/rpm-18.2.4/ Are there any plans for an '8' version?
> One of my clusters is not yet ready to update to Rocky 9. We will update to 9
moving forward but this time around it would be good to have a Rocky 8 version.
I'm very grateful for your detailed guide. I followed your commands and also
uploaded some objects to hn1 before creating hn2. After verifying that the
realm pull on both endpoints outputs the same result, I needed to upload one
more file to hn1 to trigger replication of both the old and the new objects.
This is a b
Yes, it only syncs newly-uploaded objects
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi,
I see that 18.2.4 is out, in rpm for el9 at:
http://download.ceph.com/rpm-18.2.4/ Are there any plans for an '8' version?
One of my clusters is not yet ready to update to Rocky 9. We will update to 9
moving forward but this time around it would be good to have a Rocky 8 version.
Thanks!
c
Hi Huy,
The sync result you posted earlier appears to be from the master zone. Have you
checked the secondary zone with 'radosgw-admin sync status --rgw-zone=hn2'?
Can you check that:
- sync user exists in the realm with 'radosgw-admin user list
--rgw-realm=multi-region'
- sync user's access_key an
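For anyone hitting this later via the archive: on the secondary zone the two
"caught up" lines are what to look for in the sync status output. The sample
below is mocked up (a real run needs the cluster; only the field wording is
meant to match real output), but the same grep checks apply to a live run:

```shell
# Mocked-up excerpt of 'radosgw-admin sync status --rgw-zone=hn2' output;
# zone names hn1/hn2 come from this thread, the values are illustrative.
sample='  metadata sync syncing
                metadata is caught up with master
      data sync source: hn1
                        data is caught up with source'
# On a healthy secondary zone both phrases must be present:
echo "$sample" | grep -q 'metadata is caught up with master' && echo metadata-ok
echo "$sample" | grep -q 'data is caught up with source' && echo data-ok
```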
This is a note meant to tag this issue for evaluation and likely inclusion in
the documentation in the near future (in August of 2024).
Zac Dover
Head of Documentation
Ceph Foundation
On Tuesday, June 11th, 2024 at 11:58 PM, Frank Schilder wrote:
>
>
> There is a tiny bit more to it. The
I think it used to be in the MONs before Octopus or even Nautilus, not
sure. At least that's an easy fix. ;-) Currently 90% of my advice is
to restart the MGR. :-D
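For the archive: the "restart" is just a failover of the active mgr. The JSON
sample below is illustrative (field names match 'ceph mgr stat', values are
made up); the script only echoes the command it would run:

```shell
# Illustrative: 'ceph mgr stat' prints JSON like this sample; failing the
# active mgr by name forces a standby to take over and re-derive the
# health state, which clears stale messages like the stuck slow-ops one.
sample='{"epoch": 642, "available": true, "active_name": "ceph01", "num_standby": 2}'
active=$(echo "$sample" | python3 -c 'import json,sys; print(json.load(sys.stdin)["active_name"])')
echo "would run: ceph mgr fail $active"
```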
Zitat von Frank Schilder :
Hi, would a mgr restart fix that?
It did! The one thing we didn't try last time. We thought the
> Hi, would a mgr restart fix that?
It did! The one thing we didn't try last time. We thought the message was stuck
in the MONs.
Thanks!
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Eugen Block
Sent: Monday, July 29, 2024
Hi, would a mgr restart fix that?
Zitat von Frank Schilder :
Very funny, it was actually me who made this case some time ago:
https://www.mail-archive.com/ceph-users@ceph.io/msg10095.html
I will look into what we did last time.
Best regards,
=
Frank Schilder
AIT Risø Campus
The data that was uploaded before the multisite setup is not replicated.
Do you mean that you uploaded the data while there was no replication
zone? Does it sync anything if you upload new data?
Zitat von Huy Nguyen :
Yes, that is the strange part. It says "data is caught up with source",
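The workaround in this thread was uploading one more object to kick sync off.
Another approach sometimes used when objects predate the second zone is
re-initialising data sync on the secondary. That is not from this thread, so
treat it as a sketch to verify against the docs for your release; the script
below only echoes the commands, and the systemd unit name is a guess:

```shell
# Sketch only: echo the commands instead of running them, since they need
# a live multisite cluster. 'data sync init' restarts data sync from a
# full sync, which should pick up objects uploaded before zone peering.
for cmd in \
    'radosgw-admin data sync init --source-zone=hn1' \
    'systemctl restart ceph-radosgw@rgw.hn2.service'
do
    echo "would run: $cmd"
done
```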
Update: snaptrim has started doing something. I see now the count of PGs that
are in active+clean (without snaptrim[-wait]) increasing.
I wonder if this started after taking an OSD out of the cluster; see also the
thread "0 slow ops message stuck for down+out OSD"
(https://lists.ceph.io/hyperki
Some additional info: my best bet is that the stuck snaptrim has to do with
image one-427. Not sure if this is a useful clue: the VM has 2 images, one of
which has an exclusive lock while the other doesn't. Both images are in use
though and having a lock is the standard situation. Here some o
We'd like to add our own noises of discontent from the public gallery.
We have three Ceph clusters running Alma 8.10, a total of 108 nodes,
storing ~40PiB of Science data.
Upgrading these nodes to Alma/Rocky/OEL/RHEL 9.x is something we have
been preparing for, but had not expected to do so f
On Fri, Jul 26, 2024 at 04:18:05PM +0200, Iztok Gregori wrote:
On 26/07/24 12:35, Kai Stian Olstad wrote:
On Tue, Jul 23, 2024 at 08:24:21AM +0200, Iztok Gregori wrote:
Am I missing something obvious or with Ceph orchestrator there are
no way to specify an id during the OSD creation?
You can
Hi all,
we had a failing disk (with slow ops) and I shut down the OSD. Its status is
down+out. However, I still see this message stuck in the output of ceph status
and ceph health detail:
0 slow ops, oldest one blocked for 70 sec, osd.183 has slow ops
I believe there was a case about that some ti
Hi all,
our cluster is on the latest Octopus release. We seem to have a problem with snaptrim. On a
pool for HDD RBD images I observed today that all PGs are either in state
snaptrim or snaptrim_wait. It looks like the snaptrim process does not actually
make any progress. There is no CPU activity by these OSD
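A quick way to see whether snaptrim is actually progressing is to count PG
states a few minutes apart. The two-column (pgid, state) sample below is
canned; on a live cluster you would extract the STATE column from 'ceph pg ls'
output and feed it through the same pipeline:

```shell
# Canned sample; re-running this against live output a few minutes apart
# shows whether the snaptrim / snaptrim_wait counts actually shrink.
sample='2.0 active+clean
2.1 active+snaptrim
2.2 active+snaptrim_wait
2.3 active+snaptrim'
echo "$sample" | awk '{print $2}' | sort | uniq -c
```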