Hi Thomas,
I agree, from my point of view this shouldn't be an issue. And
although I usually stick to the documented process, especially with
products like SUSE Enterprise Storage (which has since been
discontinued), there are/were customers who had services colocated,
for example MON, MGR and RGW.
Hi Yuri,
On Tue, Aug 6, 2024 at 2:03 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/67340#note-1
>
> Release Notes - N/A
> LRC upgrade - N/A
> Gibba upgrade - TBD
>
> Seeking approvals/reviews for:
>
> rados - Radek, Laura (https://git
Hi all,
there is still time to submit your proposals; the CFP has been extended \o/
- August 16 - Call for Proposals due EXTENDED!
Cheers!!!
---------- Forwarded message ---------
From: Alvaro Soto
Date: Thu, Jul 25, 2024 at 3:27 PM
Subject: Fwd: [community] [OpenInfra Event Update] The
Hi,
I'm reaching out to check on the status of the XFS deadlock issue with RBD
in hyperconverged environments, as detailed in Ceph tracker issue #43910 (
https://tracker.ceph.com/issues/43910?tab=history). It looks like there
hasn’t been much activity on this for a while, and I'm wondering if there
We are happy to announce another release of the go-ceph API library. This is a
regular release following our every-two-months release cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.29.0
The library includes bindings that aim to play a similar role to the
"pybind" python bindings in the
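For anyone who wants to pick up the new version, updating an existing Go
module is a one-liner (a sketch; note that the cgo bindings need the Ceph
development headers, e.g. librados-dev, installed to build):

    # pull the v0.29.0 bindings into your Go module
    go get github.com/ceph/go-ceph@v0.29.0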
Hi Torkil,
I would check the logs of the firewalls. First I would check the Palo Alto
firewall logs.
Joachim
On Tue, Aug 13, 2024 at 2:36 PM Eugen Block wrote:
> Hi Torkil,
>
> did anything change in the network setup? If those errors haven't
> popped up before, what changed? I'm not sure i
On 13.08.24 at 15:02, Ilya Dryomov wrote:
On Mon, Aug 12, 2024 at 1:17 PM Oliver Freyermuth wrote:
On 12.08.24 at 12:16, Ilya Dryomov wrote:
On Mon, Aug 12, 2024 at 11:28 AM Oliver Freyermuth wrote:
On 12.08.24 at 11:09, Ilya Dryomov wrote:
On Mon, Aug 12, 2024 at 10:20 AM Oliver Freye
On Mon, Aug 12, 2024 at 1:17 PM Oliver Freyermuth wrote:
>
> On 12.08.24 at 12:16, Ilya Dryomov wrote:
> > On Mon, Aug 12, 2024 at 11:28 AM Oliver Freyermuth wrote:
> >>
> >> On 12.08.24 at 11:09, Ilya Dryomov wrote:
> >>> On Mon, Aug 12, 2024 at 10:20 AM Oliver Freyermuth wrote:
> >
Hi,
after some more unsuccessful attempts I've decided to start over and remove the
cluster on the receiving side and then run cephadm bootstrap again. That
magically fixed the issue. It must have been something previously configured
causing this error, but I have no idea what. Anyways, works for
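For reference, the start-over sequence on the receiving side boils down to
something like this (a sketch; <FSID> and <MON_IP> are placeholders, and
rm-cluster is destructive):

    # wipe the existing cluster on this host (irreversible!)
    cephadm rm-cluster --fsid <FSID> --force
    # bootstrap a fresh cluster
    cephadm bootstrap --mon-ip <MON_IP>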
Hi Torkil,
did anything change in the network setup? If those errors haven't
popped up before, what changed? I'm not sure if I have seen this one
yet...
Quoting Torkil Svensgaard:
Ceph version 18.2.1.
We have a nightly backup job snapshotting and exporting all RBDs used for libvirt
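A job like that typically runs something like the following per image (a
sketch; pool, image and target path here are made up):

    # point-in-time snapshot of the RBD image
    rbd snap create libvirt/vm-disk@nightly-2024-08-13
    # export the snapshot to the backup target
    rbd export libvirt/vm-disk@nightly-2024-08-13 /backup/vm-disk-2024-08-13.img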
Interesting, apparently the number one provides in the 'ceph log last
<n>' command is not the number of lines to display but the number of
lines to search for a match.
So in your case you should still see your OSD log output about the
large omap if you pick a large enough number. My interpretati
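In other words, a quick sketch (the search pattern and count here are
illustrative):

    # scan the last 10000 cluster log lines for large-omap warnings,
    # instead of relying on the default display count
    ceph log last 10000 | grep -i 'large omap'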
Hi all,
The Ceph documentation has always recommended upgrading RGWs last when doing an
upgrade. Is there a reason for this? As they're mostly just RADOS clients, you
could imagine the order doesn't matter as long as the cluster and RGW major
versions are compatible. Our basic testing has shown n
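For what it's worth, the documented order can be made explicit with cephadm's
staggered upgrade (a sketch; the image tag is a placeholder and this assumes
only mgr/mon/crash/osd/rgw daemons exist in the cluster):

    # upgrade the core daemons first...
    ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.4 --daemon-types mgr,mon,crash,osd
    # ...then the RGWs last, once the first pass completes
    ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.4 --daemon-types rgw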