Re: [ceph-users] Ceph MDS remove

2015-02-24 Thread gian

Sorry,
forgot to mention that I'm running Ceph 0.87 on CentOS 7.

On 24/02/2015 10:20, Xavier Villaneau wrote:

Hello,

I also had to remove the MDSs on a Giant test cluster a few days ago,
and stumbled upon the same problems.

On 24/02/2015 09:58, ceph-users wrote:

Hi all,

I've set up a ceph cluster using this playbook:
https://github.com/ceph/ceph-ansible

I've configured in my hosts list
[mdss]
hostname1
hostname2


I now need to remove this MDS from the cluster.
The only document I found is this:
http://www.sebastien-han.fr/blog/2012/07/04/remove-a-mds-server-from-a-ceph-cluster/


# service ceph -a stop mds
=== mds.z-srv-m-cph02 ===
Stopping Ceph mds.z-srv-m-cph02 on z-srv-m-cph02...done
=== mds.r-srv-m-cph02 ===
Stopping Ceph mds.r-srv-m-cph02 on r-srv-m-cph02...done
=== mds.r-srv-m-cph01 ===
Stopping Ceph mds.r-srv-m-cph01 on r-srv-m-cph01...done
=== mds.0 ===
Stopping Ceph mds.0 on zrh-srv-m-cph01...done
=== mds.192.168.0.1 ===
Stopping Ceph mds.192.168.0.1 on z-srv-m-cph01...done
=== mds.z-srv-m-cph01 ===
Stopping Ceph mds.z-srv-m-cph01 on z-srv-m-cph01...done

[root@z-srv-m-cph01 ceph]# ceph mds stat
e1: 0/0/0 up

Question 1: why are the MDS daemons not stopped?


I also had trouble stopping my MDS. They would start up again even if
I killed the processes… I suggest you try:
sudo stop ceph-mds-all
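
That is the Upstart syntax from my test box. On a CentOS 7 node running the
0.87 sysvinit scripts I would expect the equivalent to be something along
these lines (not verified there, so treat it as a guess):

# service ceph stop mds
# service ceph stop mds.z-srv-m-cph01     # for a single named daemon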


Question 2: when I try to remove them:

# ceph mds rm mds.z-srv-m-cph01 z-srv-m-cph01
Invalid command: mds.z-srv-m-cph01 doesn't represent an int
mds rm <int[0-]> <name (type.id)> :  remove nonactive mds
Error EINVAL: invalid command


In the mds rm command, the <int> refers to the ID of the metadata
pool used by CephFS (since there can only be one right now). And the
<name (type.id)> is simply mds.n, where n is 0, 1, etc. Maybe there are
other possible values for type.id, but it worked for me.
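
For example, assuming the metadata pool has ID 1 (check with `ceph osd
lspools`) and the daemon is mds.0, the call would look like:

# ceph mds rm 1 mds.0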


The ansible playbook created a section like this in my ceph.conf:
[mds]

[mds.z-srv-m-cph01]
host = z-srv-m-cph01


I believe you'll also need to delete the [mds] sections in ceph.conf,
but since I do not know much about ansible I can't give you more
advice on this.

Finally, as described in the blog post you linked, you need to reset
cephfs afterwards (or health will keep complaining):
ceph mds newfs <metadata pool id> <data pool id> --yes-i-really-mean-it
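
For example, if `ceph osd lspools` shows the metadata pool with ID 1 and the
data pool with ID 0, that becomes:

# ceph mds newfs 1 0 --yes-i-really-mean-it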


Regards,
--
Xavier


Can someone please help on this or at least give some hints?

Thank you very much
Gian
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Permanente Mount RBD blocs device RHEL7

2015-03-07 Thread Gian
Hi,
are you using /etc/ceph/rbdmap as a 'mapping fstab', plus your mountpoints in the
normal fstab, plus the systemctl service?
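
Roughly what I mean is something like the following (image, mount point and
keyring path are only examples, adjust them to your setup):

# /etc/ceph/rbdmap -- one "pool/image  map-options" entry per line
rbd/test-a  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab -- mount the device the map creates; noauto so boot does not hang
/dev/rbd/rbd/test-a  /mnt/test-a  xfs  noauto  0 0

plus enabling whatever rbdmap service your packages ship, e.g.
systemctl enable rbdmap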

Gian



> On 07 Mar 2015, at 05:26, Jesus Chavez (jeschave)  wrote:
> 
> Still not working. Does anybody know how to auto-map and mount an rbd image
> on Red Hat?
> 
> Regards 
> 
> 
> Jesus Chavez
> SYSTEMS ENGINEER-C.SALES
> 
> jesch...@cisco.com
> Phone: +52 55 5267 3146
> Mobile: +51 1 5538883255
> 
> CCIE - 44433
> 
> On Mar 2, 2015, at 4:52 AM, Jesus Chavez (jeschave)  
> wrote:
> 
>> Thank you so much Alexandre! :)
>> 
>> 
>> Jesus Chavez
>> SYSTEMS ENGINEER-C.SALES
>> 
>> jesch...@cisco.com
>> Phone: +52 55 5267 3146
>> Mobile: +51 1 5538883255
>> 
>> CCIE - 44433
>> 
>> On Mar 2, 2015, at 4:26 AM, Alexandre DERUMIER  wrote:
>> 
>>> Hi,
>>> 
>>> maybe this can help you:
>>> 
>>> http://www.sebastien-han.fr/blog/2013/11/22/map-slash-unmap-rbd-device-on-boot-slash-shutdown/
>>> 
>>> 
>>> Regards,
>>> 
>>> Alexandre
>>> 
>>> - Original Message -
>>> From: "Jesus Chavez (jeschave)"
>>> To: "ceph-users"
>>> Sent: Monday, 2 March 2015 11:14:49
>>> Subject: [ceph-users] Permanente Mount RBD blocs device RHEL7
>>> 
>>> Hi all! I have been trying to make the filesystem created on my mapped rbd
>>> device permanent on rhel7 by modifying /etc/fstab, but every time I reboot
>>> the server I lose the mapping to the pool, so the server gets stuck since it
>>> can't find the /dev/rbd0 device. Does anybody know of a procedure to avoid
>>> losing the mapping, or to make the filesystem permanent?
>>> 
>>> Thanks! 
>>> 
>>> 
>>> Jesus Chavez 
>>> SYSTEMS ENGINEER-C.SALES 
>>> 
>>> jesch...@cisco.com 
>>> Phone : +52 55 5267 3146 
>>> Mobile: +51 1 5538883255 
>>> 
>>> CCIE - 44433 
>>> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Segfault after modifying CRUSHMAP

2015-03-19 Thread gian

Hi guys,

I was creating new buckets and adjusting the CRUSH map when one monitor
stopped replying.


The scenario is:
2 servers
2 MONs
21 OSDs on each server

Error message in the mon.log:

NOTE: a copy of the executable, or `objdump -rdS <executable>` is
needed to interpret this.
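
In case a dump is needed, it can be produced with something along these lines
(assuming the crashing daemon is the monitor and the binary is /usr/bin/ceph-mon):

# objdump -rdS /usr/bin/ceph-mon > ceph-mon.objdump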


I uploaded the stderr to:
http://ur1.ca/jxbrp

Does anybody have any idea?


Thank you,
Gian
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Unable to create rbd snapshot on Centos 7

2015-03-20 Thread gian

Hi guys,

I'm trying to test rbd snapshots on CentOS 7.

# rbd -p rbd ls
test-a
test-b
test-c
test-d

# rbd snap create rbd/test-b@snap
rbd: failed to create snapshot: (22) Invalid argument
2015-03-20 15:22:56.300731 7f78f7afe880 -1 librbd: failed to create 
snap id: (22) Invalid argument
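
If more detail is needed, the usual way to crank up client-side debugging
would be something like this (standard Ceph debug options, extra output goes
to stderr):

# rbd snap create rbd/test-b@snap --debug-rbd 20 --debug-ms 1 2> rbd-snap.log
# rbd info test-b          # compare the image format with the working Ubuntu host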



I tried the exact same command on Ubuntu 14.04.2 LTS:

# rbd snap create rbd/test-a@snap
# rbd snap ls --image test-a
SNAPID NAME SIZE
 2 snap 10240 MB


Does anyone have any clue?

Thank you,

Gianfranco
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com