Hi,
Our Ceph 16.2.x cluster, managed by cephadm, is logging a lot of very
detailed messages. On hosts with monitors and several OSDs, the Ceph logs
alone have already eaten through 50% of the endurance of the flash system
drives over a couple of years.
Cluster logging settings are at their defaults, and it seems t
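I assume the relevant knobs are something along these lines (untested on my
side; option names may differ between releases, and this presumes the wear
comes from file logging rather than journald):
$ ceph config set global log_to_file false
$ ceph config set global mon_cluster_log_to_file false
# and/or lower per-daemon debug levels, for example:
$ ceph config set osd debug_osd 0/0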
Hi.
Thank you! We will try to upgrade to 17.2.6.
Michal
On 9/18/23 12:51, Berger Wolfgang wrote:
Hi Michal,
I don't see any errors on versions 17.2.6 and 18.2.0 using Veeam 12.0.0.1420.
In my setup, I am using a dedicated nginx proxy (not managed by ceph) to reach
the upstream rgw instances.
B
Hi Josh,
Thanks a million, your proposed solution worked.
Best,
Nick
We are facing a similar issue and are also seeing "libceph: wrong peer, want , got " in our dmesg.
Servers are running Ubuntu 20.04.6, kernel version 5.15.0-79-generic
K8s: 1.27.4
containerd: 1.6.22
Rook: 1.12.1
Ceph: 18.2.0
The Rook and Ceph versions were recently upgraded from 1
Hi,
At one of my clusters, I'm having a problem authorizing new clients to an
existing share.
The problem occurs on all three nodes with the same error.
Since a similar command works on another cluster, something seems to be
wrong here. I also tried to delete and recreate the fs a
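The command in question is of roughly this form (fs name and client id are
placeholders here):
$ ceph fs authorize <fsname> client.<id> / rw
$ ceph auth get client.<id>   # prints the resulting keyring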
So I'm getting this warning (although there are no noticeable problems in the
cluster):
$ ceph health detail
HEALTH_WARN 1 MDSs report oversized cache
[WRN] MDS_CACHE_OVERSIZED: 1 MDSs report oversized cache
mds.storefs-b(mds.0): MDS cache is too large (7GB/4GB); 0 inodes in use by
clients,
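From what I can tell, the 4GB in that message is the default
mds_cache_memory_limit (4 GiB). Inspecting the cache and, if the host has
headroom, raising the limit would presumably look like this (not applied on
my side yet):
$ ceph config get mds mds_cache_memory_limit
$ ceph tell mds.storefs-b cache status
$ ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB, example value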
Thanks! However, I still don't really understand why I am seeing this.
The first time I had this, one of the clients was a remote user dialling
in via VPN, which could indeed be laggy. But I am also seeing it from
neighbouring hosts that are on the same physical network with reliable
ping time
Hi Janek,
Some documentation about it was added here:
https://docs.ceph.com/en/pacific/cephfs/health-messages/
It describes what the warning means, and the behaviour is tied to an MDS
config option.
On Mon, Sep 18, 2023 at 10:51 AM Janek Bevendorff <
janek.bevendo...@uni-weimar.de> wrote:
> Hey al
Hey all,
Since the upgrade to Ceph 16.2.14, I keep seeing the following warning:
10 client(s) laggy due to laggy OSDs
ceph health detail shows it as:
[WRN] MDS_CLIENTS_LAGGY: 10 client(s) laggy due to laggy OSDs
mds.***(mds.3): Client *** is laggy; not evicted because some
OSD(s) is/are l
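As far as I can tell this behaviour is tied to an MDS option; assuming the
name below is the one used in 16.2.14, checking and disabling it would be:
$ ceph config get mds defer_client_eviction_on_laggy_osds
$ ceph config set mds defer_client_eviction_on_laggy_osds false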
My guess is that this is because this setting can't be changed at
runtime, though if so that's a new enforcement behaviour in Quincy
that didn't exist in prior versions.
I think what you want to do is 'config set osd osd_op_queue wpq'
(assuming you want this set for all OSDs) and then restart your
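Concretely, something like this (the restart step assumes a cephadm-managed
cluster; otherwise restart the ceph-osd units via systemctl):
$ ceph config set osd osd_op_queue wpq
$ ceph orch restart osd.<service_name>   # placeholder service name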
Thanks Shashi, this regression is tracked in
https://tracker.ceph.com/issues/62771. We're testing a fix.
On Sat, Sep 16, 2023 at 7:32 PM Shashi Dahal wrote:
>
> Hi All,
>
> We have 3 OpenStack clusters, each with its own Ceph. The OpenStack
> versions are identical (using openstack-ansible) an
One of our customers is currently facing a challenge in testing our
disaster recovery (DR) procedures on a pair of Ceph clusters (Quincy
version 17.2.5).
Our issue revolves around the need to resynchronize data after
conducting a DR procedure test. In small-scale scenarios, this may not
be a signi
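Assuming RBD mirroring is what is in play here (pool and image names below
are placeholders), the per-image resync is the relevant primitive:
$ rbd mirror image resync <pool>/<image>   # run against the copy to be rebuilt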
On 13-09-2023 16:49, Stefan Kooman wrote:
On 13-09-2023 14:58, Ilya Dryomov wrote:
On Wed, Sep 13, 2023 at 9:20 AM Stefan Kooman wrote:
Hi,
Since the 6.5 kernel addressed the regression in the readahead handling
code... we went ahead and installed this kernel
for a coup
Found it. The target was not enabled:
root@0cc47a6df14e:~# systemctl status
ceph-03977a23-f00f-4bb0-b9a7-de57f40ba853.target
● ceph-03977a23-f00f-4bb0-b9a7-de57f40ba853.target - Ceph cluster
03977a23-f00f-4bb0-b9a7-de57f40ba853
Loaded: loaded
(/etc/systemd/system/ceph-03977a23-f00f-4bb0-b9a7-d
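For completeness, enabling it should amount to something like this (fsid
copied from the output above):
root@0cc47a6df14e:~# systemctl enable --now ceph-03977a23-f00f-4bb0-b9a7-de57f40ba853.target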
Hi Michal,
I don't see any errors on versions 17.2.6 and 18.2.0 using Veeam 12.0.0.1420.
In my setup, I am using a dedicated nginx proxy (not managed by ceph) to reach
the upstream rgw instances.
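Roughly, the proxy part of that setup looks like this (a trimmed sketch, not
my exact configuration; host names and ports are placeholders):
upstream rgw_backend {
    server rgw1.example.com:8080;
    server rgw2.example.com:8080;
}
server {
    listen 80;
    location / {
        # pass everything to RGW, preserving the original Host header
        proxy_pass http://rgw_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}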
BR
Wolfgang
-Original Message-
From: Michal Strnad
Sent: Sunday, 17 September 20
Hi,
After upgrading our cluster to 17.2.6 all OSDs appear to have "osd_op_queue":
"mclock_scheduler" (used to be wpq). As we see several OSDs reporting
unjustifiably heavy load, we would like to revert this back to "wpq", but any
attempt produces the following error:
root@store14:~# ceph tell osd.
Hello,
A note: we have been running IPv6-only clusters since 2017, in case anyone
has questions. In earlier releases no tuning was necessary; later releases
need the bind parameters.
BR,
Nico
Stefan Kooman writes:
> On 15-09-2023 09:25, Robert Sander wrote:
>> Hi,
>> as the documentation sends m
On 15-09-2023 09:25, Robert Sander wrote:
Hi,
as the documentation sends mixed signals in
https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/#ipv4-ipv6-dual-stack-mode
"Note
Binding to IPv4 is enabled by default, so if you just add the option to
bind to IPv6 you’ll actual
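The bind options the note refers to are presumably ms_bind_ipv6 and
ms_bind_ipv4; for an IPv6-only setup that would mean roughly:
$ ceph config set global ms_bind_ipv6 true
$ ceph config set global ms_bind_ipv4 false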
I think this is related to my radosgw-exporter, not to Ceph. I'll
report it in git; sorry for the noise.
From: Szabo, Istvan (Agoda)
Sent: Monday, September 18, 2023 1:58 PM
To: Ceph Users
Subject: [ceph-users] radosgw bucket usage metrics gone after cr