Thank you John.
I've solved the issue.
# ceph mds dump
dumped mdsmap epoch 128
epoch 128
flags 0
created 2015-02-24 15:55:10.631958
modified 2015-02-25 17:22:20.946910
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure
On 25/02/2015 15:23, ceph-users wrote:
# ceph mds rm 23432 mds.'192.168.0.1'
Traceback (most recent call last):
File "/bin/ceph", line 862, in
sys.exit(main())
File "/bin/ceph", line 805, in main
sigdict, inbuf, verbose)
File "/bin/ceph", line 405, in new_style_command
valid_d
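For what it's worth, "mds rm" in this release appears to expect a numeric GID followed by a daemon name of the form type.id, so a working invocation should look roughly like the sketch below. The id hostname2 is only an assumption taken from the [mdss] inventory elsewhere in the thread; 23432 is the GID used above.
# ceph mds rm 23432 mds.hostname2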
Ok John.
Recap:
If I have this situation:
# ceph mds dump
dumped mdsmap epoch 84
epoch 84
flags 0
created 2015-02-24 15:55:10.631958
modified 2015-02-25 16:18:23.019144
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure
On 25/02/2015 14:21, ceph-users wrote:
Hi John,
question: how do I retrieve the gid number?
Ah, I thought I had mentioned that in the previous email, but now I
realise I left that important detail out! Here's what I meant to write:
When you do "ceph mds dump", if there are any up daemons, yo
Hi John,
question: how do I retrieve the gid number?
Thank you,
Gian
On 24/02/2015 09:58, ceph-users wrote:
Hi all,
I've set up a ceph cluster using this playbook:
https://github.com/ceph/ceph-ansible
I've configured in my hosts list
[mdss]
hostname1
hostname2
I now need to remove thi
On 24/02/2015 22:32, ceph-users wrote:
# ceph mds rm mds.-1.0
Invalid command: mds.-1.0 doesn't represent an int
mds rm <int[0-]> <name (type.id)> :  remove nonactive mds
Error EINVAL: invalid command
Any clue?
Thanks
Gian
See my previous message about use of "mds rm": you need to pass it a GID.
However, in this
How can I remove the 2nd MDS:
# ceph mds dump
dumped mdsmap epoch 72
epoch 72
flags 0
created 2015-02-24 15:55:10.631958
modified 2015-02-24 17:58:49.400841
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 62
last_f
On 24/02/2015 09:20, Xavier Villaneau wrote:
[root@z-srv-m-cph01 ceph]# ceph mds stat
e1: 0/0/0 up
1. Question: why are the MDS daemons not stopped?
This is just confusing formatting. 0/0/0 means 0 up, 0 in, max_mds=0.
This status indicates that you have no filesystem at all.
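For comparison, the two states look roughly like this (epochs and names are illustrative, not taken from this cluster):
# ceph mds stat
e1: 0/0/0 up
(epoch 1; 0 up / 0 in / max_mds 0, i.e. no filesystem defined)
# ceph mds stat
e84: 1/1/1 up {0=hostname1=up:active}, 1 up:standby
(one active MDS on hostname1 plus one standby)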
2. When I try to
Sorry,
forgot to mention that I'm running Ceph 0.87 on CentOS 7.
On 24/02/2015 10:20, Xavier Villaneau wrote:
Hello,
I also had to remove the MDSs on a Giant test cluster a few days ago,
and stumbled upon the same problems.
On 24/02/2015 09:58, ceph-users wrote:
Hi all,
I've set up a ceph
Hello,
I also had to remove the MDSs on a Giant test cluster a few days ago,
and stumbled upon the same problems.
On 24/02/2015 09:58, ceph-users wrote:
Hi all,
I've set up a ceph cluster using this playbook:
https://github.com/ceph/ceph-ansible
I've configured in my hosts list
[mdss]
ho
Hi all,
I've set up a ceph cluster using this playbook:
https://github.com/ceph/ceph-ansible
I've configured in my hosts list
[mdss]
hostname1
hostname2
I now need to remove this MDS from the cluster.
The only document I found is this:
http://www.sebastien-han.fr/blog/2012/07/04/remove-a-m
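For reference, a minimal sketch of the removal steps pulled together from the rest of this thread. It assumes the extra daemon runs on hostname2 with GID 23432 (your GID will differ, see "ceph mds dump") and that the sysvinit scripts shipped with ceph 0.87 on CentOS 7 are in use.
On hostname2, stop the daemon:
# service ceph stop mds
Then from a monitor or admin node, remove it from the mdsmap and drop its key:
# ceph mds dump
# ceph mds rm 23432 mds.hostname2
# ceph auth del mds.hostname2
Back on hostname2, remove the daemon's data directory:
# rm -rf /var/lib/ceph/mds/ceph-hostname2
Also take hostname2 out of the [mdss] group in the ceph-ansible inventory, otherwise the playbook will just redeploy it.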
Thank you very much, it's what I needed.
root@a-mon:~# ceph mds remove_data_pool 3
removed data pool 3 from mdsmap
It worked, and mds is ok.
--
Thomas Lemarchand
Cloud Solutions SAS - Information Systems Manager
On mer., 2014-10-01 at 17:02 +0100, John Spray wrote:
> Thomas,
>
>
Thomas,
Sounds like you're looking for "ceph mds remove_data_pool". In
general you would do that *before* removing the pool itself (in more
recent versions we enforce that).
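In other words, the order is roughly the following; pool id 3 is the one Thomas removes elsewhere in the thread, and the pool name cephfs_data is just a placeholder.
# ceph mds remove_data_pool 3
# ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it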
John
On Wed, Oct 1, 2014 at 4:58 PM, Thomas Lemarchand
wrote:
> Hello everyone,
>
> I plan to use CephFS in production w
Hello everyone,
I plan to use CephFS in production with the Giant release, knowing it's not perfectly ready at the moment, and keeping a hot backup.
That said, I'm currently testing CephFS on version 0.80.5.
I have a 7-server cluster (3 mon, 3 osd, 1 mds), and 30 OSDs (disks).
My mds has been working f