Hey Adam,
On 11.01.25 12:52 AM, Adam Emerson wrote:
On 10/01/2025, Yuri Weinstein wrote:
This PR https://github.com/ceph/ceph/pull/61306 was cherry-picked
Adam, please see the run for Build 4.
Laura, Adam approved rgw; we are ready for the gibba and LRC/sepia upgrades.
I hereby approve the RGW r
Hi everyone. Yes, all the tips definitely helped! Now I have more free
space in the pools, the number of misplaced PGs has decreased a lot, and the
standard deviation of OSD usage is lower. The storage looks way healthier now.
Thanks a bunch!
I'm only confused by the number of misplaced PGs, which never
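For anyone wanting to verify similar rebalancing results, a minimal sketch
using standard Ceph commands (output formats vary by release):

# Per-OSD utilization; the summary line includes the standard deviation
ceph osd df tree

# Cluster-wide PG state, including counts of misplaced objects
ceph -s

# Confirm the balancer is active and which mode it uses
ceph balancer status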
Hi,
Amazon released a new version of their CLI today
(https://github.com/aws/aws-cli/tags), and it seems to break our setup with the
following error when a PUT object request happens:
bash-4.2# /usr/local/bin/aws --endpoint=https://endpoint --no-verify-ssl s3 cp
online.txt s3://bucket/
upload failed: ./onlin
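If the failure stems from the new default data-integrity checksums that recent
aws-cli releases enable (an assumption worth verifying against the full error
message), a possible workaround is to only send checksums when the service
requires them:

# Assumption: the upload failure is caused by the CLI's new default
# checksum behavior against a non-AWS S3 endpoint.
export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required

# Or persist the same settings in ~/.aws/config:
# [default]
# request_checksum_calculation = when_required
# response_checksum_validation = when_required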
When running the Rook CI against the latest squid devel image, we are
seeing issues creating OSDs; investigating with Guillaume...
https://github.com/rook/rook/issues/15282
Travis
On Wed, Jan 15, 2025 at 7:57 AM Laura Flores wrote:
> The Gibba cluster has been upgraded.
>
> On Wed, Jan 15, 2025
The Gibba cluster has been upgraded.
On Wed, Jan 15, 2025 at 7:27 AM Christian Rohmann <
christian.rohm...@inovex.de> wrote:
> Hey Adam,
>
> On 11.01.25 12:52 AM, Adam Emerson wrote:
> > On 10/01/2025, Yuri Weinstein wrote:
> >> This PR https://github.com/ceph/ceph/pull/61306 was cherry-picked
>
Dear all,
we finally managed to collect perf data and it seems to show a smoking gun.
Since this thread is already heavily cluttered on lists.ceph.io, I started a new
one: "MDS hung in purge_stale_snap_data after populating cache"
(https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/
Hi all,
this post is related to "Help needed, ceph fs down due to large stray dir"
(https://www.spinics.net/lists/ceph-users/msg85394.html).
We finally got the MDS up and could record what it is doing before and after
it hangs. We opened tracker https://tracker.ceph.com/issues/69547. The
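For reference, a generic sketch of commands commonly used to capture what an
MDS is doing (the daemon name "mds.a" is a placeholder; this is not the exact
procedure from the tracker):

# Raise MDS debug logging verbosity
ceph config set mds debug_mds 10

# Dump in-flight operations via the admin socket of a running MDS
ceph daemon mds.a dump_ops_in_flight

# Dump the MDS cache to a file for offline inspection
ceph daemon mds.a dump cache /tmp/mds.cache.dump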
Hi Dan,
we finally managed to get everything up and collect debug info. It's ceph-posted
since all files exceeded the limit for attachments. A quick overview of the
most important findings is here: https://imgur.com/a/RF7ExSP. Please note that
I started a new thread to reduce clutter: "MDS hung
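For readers unfamiliar with the upload step, a minimal sketch of posting
oversized debug artifacts with ceph-post-file (file names are placeholders):

# Uploads files to the Ceph developers' drop box and prints a UUID
# that can be referenced in a tracker ticket; file names are hypothetical.
ceph-post-file -d "MDS hang debug data" mds.perf.data.gz mds.cache.dump.gz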
Hi all,
while debugging an MDS problem we observed something that looks odd. The
output of perf top seems to show symbols from v15 (octopus) on a pacific (v16)
installation:
23.56% ceph-mds [.] std::_Rb_tree
7.02% libceph-common.so.2 [.] ceph::buffer::v15_2_0::ptr::cop
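One way to double-check where those symbols come from is to list the dynamic
symbols of the installed library (the path below is an assumption for a typical
RPM-based install and may differ):

# Demangle and list dynamic symbols, filtering for the versioned
# inline namespace seen in perf top; the library path may vary.
nm -C --dynamic /usr/lib64/ceph/libceph-common.so.2 | grep 'buffer::v15'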
Hi all,
It was brought to my attention that some users were unaware that the old
User + Dev meetup day changed from Thursdays to Wednesdays.
To stay informed about future User + Dev meetings, you can:
1. Join the Meetup group here: https://www.meetup.com/ceph-user-group/,
which will allow you to
On 15/01/2025, Christian Rohmann wrote:
> Is the broken rgw_s3_auth_order https://tracker.ceph.com/issues/68393 not
> relevant enough for the release then?
> There is a PR open https://github.com/ceph/ceph/pull/61162
>
> Also there are some desperate comments about this breaking / hindering
> mul
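For context, rgw_s3_auth_order controls the order in which RGW tries its S3
authentication engines; a minimal sketch of inspecting and overriding it (the
value shown is illustrative, not a recommendation):

# Show the effective auth-engine ordering for RGW daemons
ceph config get client.rgw rgw_s3_auth_order

# Override it (illustrative value; see the tracker before changing this)
ceph config set client.rgw rgw_s3_auth_order "sts, external, local"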