[ceph-users] Re: Issue when executing "ceph fs new"

2024-04-10 Thread elite_stu
Thanks for your information. I tried several solutions, but none of them worked, so I reinstalled; the issue did not appear again. Something must have gone wrong during the original installation.

[ceph-users] Re: Issue when executing "ceph fs new"

2024-04-06 Thread Eugen Block
Sorry, I hit send too early. To enable multi-active MDS, the full command is: ceph fs flag set enable_multiple true. Quoting Eugen Block: Did you enable multi-active MDS? Can you please share 'ceph fs dump'? Port 6789 is the MON port (v1; v2 is 3300). If you haven't enabled multi-active, r
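For anyone following along, a minimal sketch of the full sequence once that flag is set, reusing the filesystem and pool names that appear later in this thread; that those pools already exist in your cluster is an assumption:

ceph fs flag set enable_multiple true
ceph fs new kingcephfs cephfs-king-metadata cephfs-king-data
ceph fs dump      # verify both filesystems and their active MDS ranks
ceph fs status    # shows which MDS daemon serves which filesystem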

[ceph-users] Re: Issue when executing "ceph fs new"

2024-04-06 Thread Eugen Block
Did you enable multi-active MDS? Can you please share 'ceph fs dump'? Port 6789 is the MON port (v1; v2 is 3300). If you haven't enabled multi-active, run: ceph fs flag set enable_multiple Quoting elite_...@163.com: I tried to remove the default fs and then it worked, but port 6789 is still not

[ceph-users] Re: Issue when executing "ceph fs new"

2024-04-06 Thread elite_stu
I tried to remove the default fs and then it worked, but port 6789 is still not reachable via telnet.
ceph fs fail myfs
ceph fs rm myfs --yes-i-really-mean-it
bash-4.4$ ceph fs ls
name: kingcephfs, metadata pool: cephfs-king-metadata, data pools: [cephfs-king-data ]
bash-4.4$
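Since port 6789 keeps coming up, a short sketch of how one might check where the MONs actually listen; the rook-ceph namespace is taken from the kubectl output elsewhere in this thread:

ceph mon dump                             # lists each MON's v1 (6789) and v2 (3300) addresses
ss -tlnp | grep -E '3300|6789'            # on a MON host or pod: confirm the ports are actually bound
kubectl -n rook-ceph get svc | grep mon   # in a Rook cluster the MONs are exposed as Services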

[ceph-users] Re: Issue when executing "ceph fs new"

2024-04-06 Thread elite_stu
Thanks for your information. I tried to create some new MDS pods, but it seems to be the same issue.
[root@vm-01 examples]# cat filesystem.yaml | grep activeCount
activeCount: 3
[root@vm-01 examples]# kubectl get pod -nrook-ceph | grep mds
rook-ceph-mds-myfs-a-6d46fcfd4c-lxc8m
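For reference, in Rook the activeCount value sits under spec.metadataServer of the CephFilesystem resource; a sketch of how one might verify and re-apply it, assuming the myfs name and rook-ceph namespace shown above:

kubectl -n rook-ceph get cephfilesystem myfs -o yaml | grep -A2 metadataServer
# expected context in the resource:
#   metadataServer:
#     activeCount: 3
kubectl -n rook-ceph apply -f filesystem.yaml   # re-apply after editing activeCount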

[ceph-users] Re: Issue when executing "ceph fs new"

2024-04-03 Thread Eugen Block
Hi, you need to deploy more daemons, because your current active MDS is responsible for the already existing CephFS. There are several ways to do this; I like the yaml file approach of increasing the number of MDS daemons. Just as an example from a test cluster with one CephFS, I added the l
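The example in the original message is cut off by the archive. Purely as an illustration of the yaml approach (it is not clear from the truncated text whether a cephadm service spec or a Rook manifest was meant), a cephadm-style sketch where the service_id is taken from the filesystem name in this thread and the placement count is made up:

# contents of mds-spec.yaml (illustrative only):
service_type: mds
service_id: kingcephfs
placement:
  count: 2

# apply it with the orchestrator:
ceph orch apply -i mds-spec.yaml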