Hi,
 I am replying to myself.

 I was not using the default ceph.conf file; mine was cephcloud.conf, and
I do not know why this new cephcloud.conf was not being passed along (I do
not know if this could be a bug).
 I had to run:
    sudo python /usr/sbin/ceph-create-keys -v -i "node_name" --cluster
cephcloud

on each machine, and that did the trick.
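 Something like the following loop would cover all three nodes in one go
(just a minimal sketch, assuming passwordless SSH and sudo on each node;
the hostnames are the ones from my setup):

    # run ceph-create-keys for the cephcloud cluster on every node,
    # passing each node's own name as the id (-i)
    for node in cephadm ceph02 ceph03; do
        ssh "$node" sudo python /usr/sbin/ceph-create-keys -v -i "$node" --cluster cephcloud
    done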
  Now I have another problem with the OSDs. I am trying to use directories
for this initial setup.
This works fine:

 ceph-deploy --cluster cephcloud --overwrite-conf osd prepare
ceph02:/var/local/osd0 ceph03:/var/local/osd1
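For completeness: a directory-based prepare like this assumes the target
directories already exist on the OSD hosts. If they do not, a minimal
sketch of creating them first (from the admin node) would be:

    ssh ceph02 sudo mkdir -p /var/local/osd0
    ssh ceph03 sudo mkdir -p /var/local/osd1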

But when I try to activate:
[ceph@cephadm ceph-cloud]$ ceph-deploy --cluster cephcloud --overwrite-conf
osd activate ceph02:/var/local/osd0
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.5): /usr/bin/ceph-deploy --cluster
cephcloud --overwrite-conf osd activate ceph02:/var/local/osd0
[ceph_deploy.osd][DEBUG ] Activating cluster cephcloud disks
ceph02:/var/local/osd0:
[ceph02][DEBUG ] connected to host: ceph02
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Scientific Linux 6.2 Carbon
[ceph_deploy.osd][DEBUG ] activating host ceph02 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph02][INFO  ] Running command: sudo ceph-disk-activate --mark-init
sysvinit --mount /var/local/osd0
[ceph02][WARNIN] 2014-06-19 13:43:06.728349 7f78fb61b700 10 -- :/0 ready :/0
[ceph02][WARNIN] 2014-06-19 13:43:06.728473 7f78fb61b700  1 -- :/0
messenger.start
[ceph02][WARNIN] 2014-06-19 13:43:06.728626 7f78f8e17700 10 -- :/1013992
reaper_entry start
[ceph02][WARNIN] 2014-06-19 13:43:06.728740 7f78f8e17700 10 -- :/1013992
reaper
[ceph02][WARNIN] 2014-06-19 13:43:06.728800 7f78f8e17700 10 -- :/1013992
reaper done
[ceph02][WARNIN] 2014-06-19 13:43:06.729145 7f78fb61b700 10 -- :/1013992
connect_rank to 10.10.3.2:6789/0, creating pipe and registering
.......
.......
[ceph02][WARNIN] 2014-06-19 13:43:10.965713 7f83126a97a0 -1
filestore(/var/local/osd0) mkjournal error creating journal on
/var/local/osd0/journal: (28) No space left on device
[ceph02][WARNIN] 2014-06-19 13:43:10.965806 7f83126a97a0 -1 OSD::mkfs:
ObjectStore::mkfs failed with error -28
[ceph02][WARNIN] 2014-06-19 13:43:10.965935 7f83126a97a0 -1  ** ERROR:
error creating empty object store in /var/local/osd0: (28) No space left on
device
[ceph02][WARNIN] Traceback (most recent call last):
[ceph02][WARNIN]   File "/usr/sbin/ceph-disk", line 2579, in <module>
[ceph02][WARNIN]     main()
[ceph02][WARNIN]   File "/usr/sbin/ceph-disk", line 2557, in main
[ceph02][WARNIN]     args.func(args)
[ceph02][WARNIN]   File "/usr/sbin/ceph-disk", line 1917, in main_activate
[ceph02][WARNIN]     init=args.mark_init,
[ceph02][WARNIN]   File "/usr/sbin/ceph-disk", line 1749, in activate_dir
[ceph02][WARNIN]     (osd_id, cluster) = activate(path,
activate_key_template, init)
[ceph02][WARNIN]   File "/usr/sbin/ceph-disk", line 1849, in activate
[ceph02][WARNIN]     keyring=keyring,
[ceph02][WARNIN]   File "/usr/sbin/ceph-disk", line 1484, in mkfs
[ceph02][WARNIN]     '--keyring', os.path.join(path, 'keyring'),
[ceph02][WARNIN]   File "/usr/sbin/ceph-disk", line 303, in
command_check_call
[ceph02][WARNIN]     return subprocess.check_call(arguments)
[ceph02][WARNIN]   File "/usr/lib64/python2.6/subprocess.py", line 505, in
check_call
[ceph02][WARNIN]     raise CalledProcessError(retcode, cmd)
[ceph02][WARNIN] subprocess.CalledProcessError: Command
'['/usr/bin/ceph-osd', '--cluster', 'cephcloud', '--mkfs', '--mkkey', '-i',
'1', '--monmap', '/var/local/osd0/activate.monmap', '--osd-data',
'/var/local/osd0', '--osd-journal', '/var/local/osd0/journal',
'--osd-uuid', '6e321e92-ec7b-4ae0-80f6-68e3ece84b22', '--keyring',
'/var/local/osd0/keyring']' returned non-zero exit status 1
[ceph02][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
ceph-disk-activate --mark-init sysvinit --mount /var/local/osd0

Is there no way to tell the OSD not to use the entire disk?

[ceph@ceph02 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      5.0G  5.0G     0 100% /
tmpfs           935M     0  935M   0% /dev/shm

[ceph@ceph02 ~]$ du -chs /var/local/osd0/
2.9G    /var/local/osd0/
2.9G    total

[ceph@ceph02 ~]$ ls -la /var/local/osd0/
total 2964188
drwxr-xr-x 3 root root       4096 Jun 19 13:43 .
drwxr-xr-x 3 root root       4096 Jun 19 12:06 ..
-rw-r--r-- 1 root root        485 Jun 19 13:44 activate.monmap
-rw-r--r-- 1 root root         37 Jun 19 13:38 ceph_fsid
drwxr-xr-x 3 root root       4096 Jun 19 13:43 current
-rw-r--r-- 1 root root         37 Jun 19 13:38 fsid
-rw-r--r-- 1 root root 5368709120 Jun 19 13:43 journal
-rw-r--r-- 1 root root         21 Jun 19 13:38 magic
-rw-r--r-- 1 root root          4 Jun 19 13:43 store_version
-rw-r--r-- 1 root root         42 Jun 19 13:43 superblock
-rw-r--r-- 1 root root          2 Jun 19 13:43 whoami
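For what it is worth, the journal file above is 5368709120 bytes (5 GiB),
which on its own fills the 5 GB root filesystem. One possible workaround
(an untested sketch on my side) would be to set a smaller journal size in
cephcloud.conf before re-preparing the OSDs, e.g.:

[osd]
# journal size in MB (1024 = 1 GB, instead of the 5 GB journal seen above)
osd_journal_size = 1024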

regards, I


2014-06-19 10:36 GMT+02:00 Iban Cabrillo <cabri...@ifca.unican.es>:

> Hi,
>  I am a real newbie on Ceph.
>  I was trying to deploy a test Ceph cluster on SL6.2; the package
> installation went OK.
>  I have created an initial cluster with 3 machines (cephadm, ceph02 and
> ceph03), and passwordless SSH using the ceph user works fine.
>
>  I am using the config file cephcloud.conf:
>
>  [global]
> auth_service_required = cephx
> filestore_xattr_use_omap = true
> auth_client_required = cephx
> auth_cluster_required = cephx
> mon_host = 10.10.3.1,10.10.3.2,10.10.3.3  # these are the correct internal IPs
> mon_initial_members = cephadm, ceph02, ceph03
> fsid = eaf41d58-9014-4575-97fe-14cc104a3221
>
> Package installation using ceph-deploy is OK:
> ....
>  Running command: sudo ceph --version
> [ceph02][DEBUG ] ceph version 0.80.1
> (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
> ......
>
> But when I try to start the mons, they never come online:
>
> Using ceph-deploy --cluster cephcloud mon create-initial:
>
> [cephadm][INFO  ] Running command: sudo /sbin/service ceph -c
> /etc/ceph/cephcloud.conf start mon.cephadm
> [cephadm][INFO  ] Running command: sudo ceph --cluster=cephcloud
> --admin-daemon /var/run/ceph/cephcloud-mon.cephadm.asok mon_status
> [cephadm][ERROR ] admin_socket: exception getting command descriptions:
> [Errno 2] No such file or directory
> [cephadm][WARNIN] monitor: mon.cephadm, might not be running yet
>
> The same happens using ceph-deploy --cluster=cephcloud mon create cephadm
>
>
> Running the command directly:
> sudo /sbin/service ceph -c /etc/ceph/cephcloud.conf start mon.cephadm
>
> [ceph@cephadm ceph-cloud]$ sudo /sbin/service ceph -c
> /etc/ceph/cephcloud.conf start mon.cephadm
> [ceph@cephadm ceph-cloud]$ sudo /sbin/service ceph --verbose -c
> /etc/ceph/cephcloud.conf start mon.cephadm
> /usr/bin/ceph-conf -c /etc/ceph/cephcloud.conf -n mon.cephadm "user"
>
>
> /var/log/ceph/cephcloud-mon.cephadm.log is empty
>
> The same is happening on the other two test machines (ceph02 and ceph03)
>
> What am I doing wrong?
>
> regards, I
>
> *"El problema con el mundo es que los estúpidos están seguros de todo y
> los inteligentes están llenos de dudas*"
>



-- 
############################################################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
############################################################################
Bertrand Russell:
*"El problema con el mundo es que los estúpidos están seguros de todo y los
inteligentes están llenos de dudas*"
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
