Hi Torkil,
I assume the affected OSDs were the ones with slow requests, no? You
should still see them in some of the logs (mon, mgr).
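A hedged sketch of where to look (log locations differ between package-based and cephadm deployments, so the paths below are assumptions):

# Cluster log on a mon host; cephadm keeps logs under /var/log/ceph/<fsid>/:
grep -i 'slow ops' /var/log/ceph/ceph.log
# Or ask the cluster which OSDs are currently complaining:
ceph health detail | grep -i slow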
Quoting Torkil Svensgaard:
On 06-04-2024 18:10, Torkil Svensgaard wrote:
Hi
Cephadm Reef 18.2.1
Started draining 5 18-20 TB HDD OSDs (DB/WAL on NVMe) on one host.
ISTR that the Ceph slow op threshold defaults to 30 or 32 seconds. Naturally
an op over the threshold often means there are more below the reporting
threshold.
120 s, I think, is the default Linux op timeout.
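For reference, the reporting threshold is osd_op_complaint_time, which defaults to 30 seconds; a quick way to confirm what a given cluster actually uses:

# Show the slow-op complaint threshold (default: 30 seconds):
ceph config get osd osd_op_complaint_time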
> On Apr 6, 2024, at 10:53 AM, David C. wrote:
>
> Hi,
>
> Do slow ops impact data integrity => No
Sorry, I hit send too early; to enable multi-active MDS, the full command is:
ceph fs flag set enable_multiple true
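Once the flag is set, the number of active daemons is raised per filesystem; a minimal sketch, assuming a filesystem named myfs:

# Allow up to 3 active MDS daemons on the filesystem:
ceph fs set myfs max_mds 3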
Quoting Eugen Block:
Did you enable multi-active MDS? Can you please share 'ceph fs
dump'? Port 6789 is the MON port (v1, v2 is 3300). If you haven't
enabled multi-active, run:
ceph fs flag set enable_multiple
Did you enable multi-active MDS? Can you please share 'ceph fs dump'?
Port 6789 is the MON port (v1, v2 is 3300). If you haven't enabled
multi-active, run:
ceph fs flag set enable_multiple
Quoting elite_...@163.com:
I tried removing the default fs and then it works, but port 6789 is still
not reachable via telnet.
Hi
Cephadm Reef 18.2.1
Started draining 5 18-20 TB HDD OSDs (DB/WAL on NVMe) on one host. Even
with osd_max_backfills at 1 the OSDs get slow ops from time to time
which seems odd as we recently did a huge reshuffle[1] involving the
same host without seeing these slow ops.
I guess one differ…
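For context, a drain like this on cephadm boils down to roughly the following (a hedged sketch; the OSD ids are placeholders, not the actual ones from [1]):

# Queue the five OSDs for draining, then watch progress and the throttle:
ceph orch osd rm 12 13 14 15 16
ceph orch osd rm status
ceph config get osd osd_max_backfills   # confirm the limit in effect (here: 1)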
Hi,
Do slow ops impact data integrity => No
Can I generally ignore it => No :)
This means that some client transactions are blocked for 120 sec (that's a
lot).
This could be a lock on the client side (CephFS, essentially), or an incident
on the infrastructure side (a disk about to fail, network instability, …
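To see where a blocked op actually spends its time, the OSD admin socket helps; a hedged sketch, with osd.12 as a placeholder id, run on that OSD's host:

# Ops currently in flight, with the event timestamps each has passed:
ceph daemon osd.12 dump_ops_in_flight
# Recently completed ops that exceeded the complaint threshold:
ceph daemon osd.12 dump_historic_slow_ops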
Dear all,
I have a question about CephFS: the port 6789 Service is of type ClusterIP,
so how can I access it from an external network?
[root@vm-01 examples]# kubectl get svc -nrook-ceph
NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
rook-ceph-m
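A ClusterIP Service is only reachable from inside the cluster. One hedged way to reach a mon from outside is a NodePort Service in front of it; the service name rook-ceph-mon-a is an assumption based on Rook's usual naming:

# Expose the mon's msgr v1 port (6789) on every node:
kubectl -n rook-ceph expose svc rook-ceph-mon-a \
    --name=rook-ceph-mon-external --type=NodePort --port=6789
kubectl -n rook-ceph get svc rook-ceph-mon-external

Rook can also run the cluster with host networking (network.provider: host in the CephCluster spec), which avoids the Service indirection entirely.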
I tried removing the default fs and then it works, but port 6789 is still
not reachable via telnet.
ceph fs fail myfs
ceph fs rm myfs --yes-i-really-mean-it
bash-4.4$
bash-4.4$ ceph fs ls
name: kingcephfs, metadata pool: cephfs-king-metadata, data pools:
[cephfs-king-data ]
bash-4.4$
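Note that 6789 is a mon port, not a filesystem port, so removing or recreating filesystems shouldn't affect it. A hedged way to check what the mons actually advertise (the ClusterIP below is a placeholder):

# From the rook-ceph-tools pod: list the mon endpoints the cluster advertises
ceph mon dump
# Then test raw TCP reachability against the advertised v1 endpoint:
nc -zv 10.96.0.10 6789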
Thanks for the information. I tried creating some new MDS pods, but it seems
to be the same issue.
[root@vm-01 examples]# cat filesystem.yaml | grep activeCount
activeCount: 3
[root@vm-01 examples]#
[root@vm-01 examples]# kubectl get pod -nrook-ceph | grep mds
rook-ceph-mds-myfs-a-6d46fcfd4c-lxc8m
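A hedged way to check whether the new daemons actually joined as active MDS ranks (the label selector follows Rook's usual conventions; ceph fs status runs in the toolbox pod):

# All MDS pods Rook created:
kubectl -n rook-ceph get pod -l app=rook-ceph-mds
# Active/standby layout as Ceph sees it:
ceph fs status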
Well, I've replaced the failed drives and that cleared the error. Arguably,
it was a better solution :-)
/Z
On Sat, 6 Apr 2024 at 10:13, wrote:
> did it help? Maybe you found a better solution?
did it help? Maybe you found a better solution?
Hello everyone,
any ideas? Even small hints would help a lot!