Dear Ceph users,
after a host reboot one of the OSDs is now stuck down (and out). I tried
several times to restart it and even to reboot the host, but it still
remains down.
# ceph -s
  cluster:
    id:     b1029256-7bb3-11ec-a8ce-ac1f6b627b45
    health: HEALTH_WARN
            4 OSD(s) have
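For anyone hitting the same thing, a few commands that usually help narrow down why an OSD stays down (this assumes a cephadm-managed cluster, and osd.14 below is just a placeholder id):

# show only the OSDs that are currently down
ceph osd tree down

# try a restart through the orchestrator and check the daemon state
ceph orch daemon restart osd.14
ceph orch ps | grep osd.14

# read the daemon log on the host for the actual error
cephadm logs --name osd.14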
Each array holds a different type of value. You need to look at the cephfs-top
code linked below to see how they are interpreted.
[1] https://github.com/ceph/ceph/blob/main/src/tools/cephfs/top/cephfs-top#L66-L83
[2] https://github.com/ceph/ceph/blob/main/src/tools/cephfs/top/cephfs-top#L641-L714
[3] https://g
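For anyone following along, the raw per-client metrics that those arrays come from can be dumped straight from the mgr, which makes it easier to line each position up with the counter names (a suggestion only; the exact output layout can differ between releases):

# dump the raw performance metrics consumed by cephfs-top
ceph fs perf stats | python3 -m json.tool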
connections:
[root@et-uos-warm02 deeproute]# netstat -anltp | grep rados | grep 10.x.x.x:7480 | grep ESTAB | grep 10.12 | wc -l
6650
It prints lines like:
tcp   0   0   10.x.x.x:7480   10.x.x.12:40210   ESTABLISHED   76749/radosgw
tcp   0   0   10.x.x.x:7480   10.x.x.12:33218
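As a cross-check, roughly the same count can be had with ss instead of netstat (same port and client address as in the command above):

# count established connections to the radosgw frontend port from 10.x.x.12
ss -tn state established '( sport = :7480 )' | grep -c 10.x.x.12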
Hi Peter,
If you can reproduce and have debug symbols installed, I'd be interested
to see the output of this tool:
https://github.com/markhpc/uwpmp/
It might need slightly different compile instructions if you have a
newer version of Go. I can send you an executable offline if needed.
S
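If building uwpmp turns out to be a hassle, a crude fallback that only needs gdb plus the debug symbols is to take a few whole-process stack dumps while the hang is happening and compare them by eye (this is not uwpmp, just a poor man's substitute):

# dump a backtrace of every radosgw thread to a file
gdb -p "$(pidof radosgw)" -batch -ex 'thread apply all bt' > rgw-stacks.txt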
Hello,
ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
There is a single (test) radosgw serving plenty of test traffic. When under
heavy req/s ("heavy" being relative here - only about 1k req/s) it pretty reliably hangs:
low-traffic threads seem to keep working (like handling occasio
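While it is hung, it may also be worth pulling the internal counters over the admin socket to see whether requests are piling up (the socket path differs between plain and cephadm deployments, so find it first):

# locate the radosgw admin socket
find /var/run/ceph -name '*rgw*.asok'
# dump its perf counters while the hang is in progress
ceph daemon /var/run/ceph/<the-rgw-socket>.asok perf dump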
Hello Team,
I am trying to build a Ceph cluster with 3 nodes running Ubuntu 20.04,
configured using ceph-ansible. Because it is a testing cluster, the OSD servers
are the same hosts that will also run as monitors. During installation I am facing the
following error: TASK [ceph-mon : ceph monitor mkfs with keyrin
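The task name alone doesn't say what actually failed; re-running with full verbosity usually captures the underlying ceph-mon command and its stderr (site.yml is the stock ceph-ansible playbook name, substitute whatever you use, and <inventory> is a placeholder):

# re-run against the mon hosts only, with maximum verbosity, keeping the output
ansible-playbook -vvv -i <inventory> site.yml --limit mons 2>&1 | tee ceph-ansible-mon.log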
Hi Stefan,
I was not able to reproduce the issue of not reconnecting after slow-down.
My steps are documented here:
https://gist.github.com/yuvalif/e58e264bafe847bc5196f95be0e704a2
Can you please share some of the radosgw logs after the broker is up again
and the reconnect fails?
Regardless, there
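To make those logs useful it is probably worth raising the rgw debug level just around the reconnect attempt and lowering it again afterwards (standard config options; adjust the client.rgw section name to whatever your gateway uses):

# raise radosgw logging while reproducing, then restore the defaults
ceph config set client.rgw debug_rgw 20
ceph config set client.rgw debug_ms 1
# ... reproduce the broker restart and the failed reconnect ...
ceph config set client.rgw debug_rgw 1/5
ceph config set client.rgw debug_ms 0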
If you do a mgr failover ("ceph mgr fail") and wait a few minutes, do the
issues clear out? I know there's a bug where removed mons get marked as
stray daemons when downsizing by multiple mons at once (cephadm might be
removing them too quickly; I'm not totally sure of the cause), but doing a mgr
failov
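Concretely, something like the following; the second command just shows whether the stray-daemon warning clears afterwards:

# fail over to the standby mgr, wait a few minutes, then re-check
ceph mgr fail
ceph health detail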
+1 for this issue; I've managed to reproduce it on my test cluster.
Kind regards,
Nino Kotur
On Mon, Jun 12, 2023 at 2:54 PM farhad kh
wrote:
> i deployed the ceph cluster with 8 node (v17.2.6) and after add all of
> hosts, ceph create 5 mon daemon instances
> i try decrease that to 3 ins
I deployed a Ceph cluster with 8 nodes (v17.2.6), and after adding all of the
hosts, Ceph created 5 mon daemon instances.
I tried to decrease that to 3 instances with `ceph orch apply mon
--placement=label:mon,count:3`. It worked, but after that I get the error "2
stray daemons not managed by cephadm".
But every ti
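For what it's worth, the same placement can also be written as a service spec and applied from a file, which is sometimes easier to reason about than the inline --placement string (just a sketch, with the label and count taken from the command above):

# write a mon service spec and apply it
cat > mon.yaml <<'EOF'
service_type: mon
placement:
  count: 3
  label: mon
EOF
ceph orch apply -i mon.yaml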
Hi all,
We are running a test Ceph cluster with cephadm, currently on the latest Pacific
(16.2.13).
We use cephadm to deploy keepalived:2.1.5 and HAProxy:2.3.
We have 3 VIPs, one for each instance of HAProxy.
However, we do not use the same network for managing the cluster as for the public
traffic.
We have
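In case it helps the discussion, this is roughly how such a setup can be expressed as a cephadm ingress spec; all values below are placeholders, the field names come from the cephadm ingress documentation, and virtual_interface_networks is the option meant to pin the VIP to a particular network (on 16.2.13 you may need one ingress service per VIP rather than a list):

# ingress.yaml - apply with: ceph orch apply -i ingress.yaml
service_type: ingress
service_id: rgw.public
placement:
  count: 3
spec:
  backend_service: rgw.public
  virtual_ip: 192.0.2.11/24
  virtual_interface_networks:
    - 192.0.2.0/24
  frontend_port: 8080
  monitor_port: 1967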
Listing the objects with the rados command:
rados -p oss.rgw.buckets.index ls | grep "c2af65dc-b456-4f5a-be6a-2a142adeea75.335721.1" | awk '{print "rados listomapkeys -p oss.rgw.buckets.index "$1}' | sh -x
Some of the objects look like this, and I don't know what they are:
"?1000_DR-MON-1_20220307_224723/configs/lidars.cfgiUFWo
Good news: We haven't had any new fill-ups so far. On the contrary, the
pool size is as small as it's ever been (200GiB).
Bad news: the MDSes are still acting strangely. I have a very uneven session
load and I don't know where it comes from. ceph_mds_sessions_total_load
reports a number of 1.4 mil
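In case it helps to compare notes, the per-session numbers behind that load can be listed straight from the MDS, which at least shows which clients carry it (mds.0 stands for whichever rank or daemon name you query):

# list all client sessions with their cap counts and load
ceph tell mds.0 session ls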
Hi,
There's just one option for `session config` (or `client config`; they are the
same) as of now, i.e. "timeout":
#> ceph tell mds.0 session config timeout
*Dhairya Parmar*
Associate Software Engineer, CephFS
On Mon, Jun 12, 2023 at 2:29 PM Denis Polom wrote:
> Hi,
>
> I didn't find any doc an
Hi,
I didn't find any documentation or any way to find out which options are valid
for configuring a client session over the MDS socket:
#> ceph tell mds.mds1 session config
session config [] :  Config a CephFS client session
Any hint on this?
Thank you
Hi,
Yes, I've found the trick: I had to wait about 15 seconds before the metrics
showed up.
Now I can see some numbers. Are the units there milliseconds? And I also see
2 numbers reported per metric - is the first the actual value and the second a delta?
"client.4636": [
[
924,
4
Thank you for your information.
On Mon, Jun 12, 2023 at 9:35 AM Jonas Nemeiksis
wrote:
> Hi,
>
> The ceph daemon image build is deprecated. You can read here [1]
>
> [1] https://github.com/ceph/ceph-container/issues/2112
>
> On Sun, Jun 11, 2023 at 4:03 PM mahnoosh shahidi
> wrote:
>
>> Thanks
Hi,
can you check for snapshots in the trash namespace?
# rbd snap ls --all <pool>/<image>
Instead of removing the feature, try to remove the snapshot from the trash
(if there are any).
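A concrete example of what that looks like (pool and image names are made up; snapshots in the trash show up under a non-user namespace in the listing):

# list all snapshots of the image, including those in the trash namespace
rbd snap ls --all rbd/vm-disk-1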
Quoting Adam Boyhan:
I have a small cluster on Pacific with roughly 600 RBD images. Out
of those 600 images I have