Hi Udo,
I tried that as well, but it still failed. Here are the steps I took. The
strange thing is that the "prepare" command finishes OK, but if I take a
look into the log files, I also find this:
ceph@cephbkdeploy01:~/desp-bkp-cluster$ ceph-deploy --overwrite-conf
osd prepare ceph-bkp-osd01:/dev/sdf:/dev/sdc
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy
--overwrite-conf osd prepare ceph-bkp-osd01:/dev/sdf:/dev/sdc
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
ceph-bkp-osd01:/dev/sdf:/dev/sdc
[ceph-bkp-osd01][DEBUG ] connected to host: ceph-bkp-osd01
[ceph-bkp-osd01][DEBUG ] detect platform information from remote host
[ceph-bkp-osd01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-bkp-osd01
[ceph-bkp-osd01][DEBUG ] write cluster configuration to
/etc/ceph/{cluster}.conf
[ceph-bkp-osd01][INFO ] Running command: sudo udevadm trigger
--subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-bkp-osd01 disk /dev/sdf
journal /dev/sdc activate False
[ceph-bkp-osd01][INFO ] Running command: sudo ceph-disk-prepare
--fs-type xfs --cluster ceph -- /dev/sdf /dev/sdc
[ceph-bkp-osd01][WARNIN] libust[16678/16678]: Warning: HOME
environment variable not set. Disabling LTTng-UST per-user tracing.
(in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] libust[16696/16696]: Warning: HOME
environment variable not set. Disabling LTTng-UST per-user tracing.
(in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] WARNING:ceph-disk:OSD will not be
hot-swappable if journal is not the same device as the osd data
[ceph-bkp-osd01][DEBUG ] The operation has completed successfully.
[ceph-bkp-osd01][DEBUG ] The operation has completed successfully.
[ceph-bkp-osd01][DEBUG ] meta-data=/dev/sdf1 isize=2048
agcount=4, agsize=183141597 blks
[ceph-bkp-osd01][DEBUG ] = sectsz=512
attr=2, projid32bit=0
[ceph-bkp-osd01][DEBUG ] data = bsize=4096
blocks=732566385, imaxpct=5
[ceph-bkp-osd01][DEBUG ] = sunit=0
swidth=0 blks
[ceph-bkp-osd01][DEBUG ] naming =version 2 bsize=4096
ascii-ci=0
[ceph-bkp-osd01][DEBUG ] log =internal log bsize=4096
blocks=357698, version=2
[ceph-bkp-osd01][DEBUG ] = sectsz=512
sunit=0 blks, lazy-count=1
[ceph-bkp-osd01][DEBUG ] realtime =none extsz=4096
blocks=0, rtextents=0
[ceph-bkp-osd01][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Host ceph-bkp-osd01 is now ready for osd
use.
ceph@cephbkdeploy01:~/desp-bkp-cluster$
Log:
ceph@ceph-bkp-osd01:/var/log/ceph$ cat ceph-osd.0.log
2014-11-03 07:25:37.970952 7f89308c2900 0 ceph version 0.87
(c51c8f9d80fa4e0168aa52685b8de40e42758578), process ceph-osd, pid
16955
2014-11-03 07:25:37.973153 7f89308c2900 1
filestore(/var/lib/ceph/tmp/mnt.cCPLuU) mkfs in
/var/lib/ceph/tmp/mnt.cCPLuU
2014-11-03 07:25:37.973181 7f89308c2900 1
filestore(/var/lib/ceph/tmp/mnt.cCPLuU) mkfs fsid is already set to
bd57cb01-54f8-4eeb-9e31-f5c6ad2de103
2014-11-03 07:25:38.057145 7f89308c2900 0
filestore(/var/lib/ceph/tmp/mnt.cCPLuU) backend xfs (magic 0x58465342)
2014-11-03 07:25:38.057160 7f89308c2900 1
filestore(/var/lib/ceph/tmp/mnt.cCPLuU) disabling 'filestore replica
fadvise' due to known issues with fadvise(DONTNEED) on xfs
2014-11-03 07:25:38.132531 7f89308c2900 1
filestore(/var/lib/ceph/tmp/mnt.cCPLuU) leveldb db exists/created
2014-11-03 07:25:38.132792 7f89308c2900 -1
filestore(/var/lib/ceph/tmp/mnt.cCPLuU) mkjournal error creating
journal on /var/lib/ceph/tmp/mnt.cCPLuU/journal: (22) Invalid argument
2014-11-03 07:25:38.132831 7f89308c2900 -1 OSD::mkfs:
ObjectStore::mkfs failed with error -22
2014-11-03 07:25:38.132904 7f89308c2900 -1 ** ERROR: error creating
empty object store in /var/lib/ceph/tmp/mnt.cCPLuU: (22) Invalid
argument
ceph@ceph-bkp-osd01:/var/log/ceph$
And then, if I run the "activate" command, it fails with:
ceph@cephbkdeploy01:~/desp-bkp-cluster$ ceph-deploy --overwrite-conf
osd activate ceph-bkp-osd01:/dev/sdf1:/dev/sdc1
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy
--overwrite-conf osd activate ceph-bkp-osd01:/dev/sdf1:/dev/sdc1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
ceph-bkp-osd01:/dev/sdf1:/dev/sdc1
[ceph-bkp-osd01][DEBUG ] connected to host: ceph-bkp-osd01
[ceph-bkp-osd01][DEBUG ] detect platform information from remote host
[ceph-bkp-osd01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host ceph-bkp-osd01 disk
/dev/sdf1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[ceph-bkp-osd01][INFO ] Running command: sudo ceph-disk-activate
--mark-init upstart --mount /dev/sdf1
[ceph-bkp-osd01][WARNIN] libust[17725/17725]: Warning: HOME
environment variable not set. Disabling LTTng-UST per-user tracing.
(in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] libust[17728/17728]: Warning: HOME
environment variable not set. Disabling LTTng-UST per-user tracing.
(in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] got monmap epoch 1
[ceph-bkp-osd01][WARNIN] libust[17759/17759]: Warning: HOME
environment variable not set. Disabling LTTng-UST per-user tracing.
(in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] 2014-11-03 07:37:17.231850 7f0e0a1d0900 -1
journal FileJournal::_open: disabling aio for non-block journal. Use
journal_force_aio to force use of aio anyway
[ceph-bkp-osd01][WARNIN] 2014-11-03 07:37:17.231880 7f0e0a1d0900 -1
journal check: ondisk fsid 00000000-0000-0000-0000-000000000000
doesn't match expected 93ecbc87-9a8d-479f-8969-e79175a32048, invalid
(someone else's?) journal
[ceph-bkp-osd01][WARNIN] 2014-11-03 07:37:17.231907 7f0e0a1d0900 -1
filestore(/var/lib/ceph/tmp/mnt.MW51n4) mkjournal error creating
journal on /var/lib/ceph/tmp/mnt.MW51n4/journal: (22) Invalid argument
[ceph-bkp-osd01][WARNIN] 2014-11-03 07:37:17.231925 7f0e0a1d0900 -1
OSD::mkfs: ObjectStore::mkfs failed with error -22
[ceph-bkp-osd01][WARNIN] 2014-11-03 07:37:17.231960 7f0e0a1d0900 -1
** ERROR: error creating empty object store in
/var/lib/ceph/tmp/mnt.MW51n4: (22) Invalid argument
[ceph-bkp-osd01][WARNIN] ERROR:ceph-disk:Failed to activate
[ceph-bkp-osd01][WARNIN] Traceback (most recent call last):
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line 2792, in
<module>
[ceph-bkp-osd01][WARNIN] main()
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line 2770, in
main
[ceph-bkp-osd01][WARNIN] args.func(args)
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line 2004, in
main_activate
[ceph-bkp-osd01][WARNIN] init=args.mark_init,
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line 1778, in
mount_activate
[ceph-bkp-osd01][WARNIN] (osd_id, cluster) = activate(path,
activate_key_template, init)
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line 1943, in
activate
[ceph-bkp-osd01][WARNIN] keyring=keyring,
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line 1573, in
mkfs
[ceph-bkp-osd01][WARNIN] '--keyring', os.path.join(path,
'keyring'),
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line 316, in
command_check_call
[ceph-bkp-osd01][WARNIN] return subprocess.check_call(arguments)
[ceph-bkp-osd01][WARNIN] File "/usr/lib/python2.7/subprocess.py",
line 540, in check_call
[ceph-bkp-osd01][WARNIN] raise CalledProcessError(retcode, cmd)
[ceph-bkp-osd01][WARNIN] subprocess.CalledProcessError: Command
'['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i',
'1', '--monmap', '/var/lib/ceph/tmp/mnt.MW51n4/activate.monmap',
'--osd-data', '/var/lib/ceph/tmp/mnt.MW51n4', '--osd-journal',
'/var/lib/ceph/tmp/mnt.MW51n4/journal', '--osd-uuid',
'93ecbc87-9a8d-479f-8969-e79175a32048', '--keyring',
'/var/lib/ceph/tmp/mnt.MW51n4/keyring']' returned non-zero exit status
1
[ceph-bkp-osd01][ERROR ] RuntimeError: command returned non-zero exit
status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
ceph-disk-activate --mark-init upstart --mount /dev/sdf1
Any ideas?
Thanks in advance,
Best regards,
German Anders
--- Original message ---
Subject: Re: [ceph-users] question about activate OSD
From: Udo Lembke <ulem...@polarzone.de>
To: <ceph-users@lists.ceph.com>
Date: Friday, 31/10/2014 19:38
Hi German,
if I'm right, the journal creation on /dev/sdc1 failed (perhaps
because you specified /dev/sdc instead of /dev/sdc1?).
Do you have partitions on sdc?
Udo
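
The "ondisk fsid 00000000-... doesn't match expected" error usually means
ceph-osd found leftover data (or no valid header) where it expected its
journal. One way to check and clean up is sketched below. This is a
hypothetical recovery sequence, not something tested against this cluster:
it assumes /dev/sdc really is the journal disk on ceph-bkp-osd01, and the
sgdisk/dd steps are destructive, so double-check device names first.

```shell
# On the OSD host: inspect the partition table on the journal disk to see
# whether stale partitions from an earlier prepare attempt are present.
sudo sgdisk --print /dev/sdc

# Either zero the start of the old journal partition so ceph-osd does not
# find a leftover fsid there...
sudo dd if=/dev/zero of=/dev/sdc1 bs=1M count=10

# ...or wipe the whole disk's partition table (destroys everything on sdc).
sudo sgdisk --zap-all /dev/sdc

# Then re-run prepare from the admin node, passing the whole journal
# device and letting ceph-disk partition it itself:
ceph-deploy --overwrite-conf osd prepare ceph-bkp-osd01:/dev/sdf:/dev/sdc
```
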
On 31.10.2014 22:02, German Anders wrote:
Hi all,
I'm having some issues while trying to activate a new OSD
in a new cluster: the prepare command runs fine, but then the
activate command fails:
ceph@cephbkdeploy01:~/desp-bkp-cluster$ ceph-deploy
--overwrite-conf disk prepare --fs-type btrfs
ceph-bkp-osd01:sdf:/dev/sdc
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy
--overwrite-conf disk prepare --fs-type btrfs
ceph-bkp-osd01:sdf:/dev/sdc
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
ceph-bkp-osd01:/dev/sdf:/dev/sdc
[ceph-bkp-osd01][DEBUG ] connected to host: ceph-bkp-osd01
[ceph-bkp-osd01][DEBUG ] detect platform information from remote
host
[ceph-bkp-osd01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-bkp-osd01
[ceph-bkp-osd01][DEBUG ] write cluster configuration to
/etc/ceph/{cluster}.conf
[ceph-bkp-osd01][INFO ] Running command: sudo udevadm trigger
--subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-bkp-osd01 disk
/dev/sdf journal /dev/sdc activate False
[ceph-bkp-osd01][INFO ] Running command: sudo ceph-disk-prepare
--fs-type btrfs --cluster ceph -- /dev/sdf /dev/sdc
[ceph-bkp-osd01][WARNIN] libust[13609/13609]: Warning: HOME
environment variable not set. Disabling LTTng-UST per-user
tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] libust[13627/13627]: Warning: HOME
environment variable not set. Disabling LTTng-UST per-user
tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] WARNING:ceph-disk:OSD will not be
hot-swappable if journal is not the same device as the osd data
[ceph-bkp-osd01][WARNIN] Turning ON incompat feature 'extref':
increased hardlink limit per file to 65536
[ceph-bkp-osd01][DEBUG ] Creating new GPT entries.
[ceph-bkp-osd01][DEBUG ] The operation has completed
successfully.
[ceph-bkp-osd01][DEBUG ] Creating new GPT entries.
[ceph-bkp-osd01][DEBUG ] The operation has completed
successfully.
[ceph-bkp-osd01][DEBUG ]
[ceph-bkp-osd01][DEBUG ] WARNING! - Btrfs v3.12 IS EXPERIMENTAL
[ceph-bkp-osd01][DEBUG ] WARNING! - see http://btrfs.wiki.kernel.org
before using
[ceph-bkp-osd01][DEBUG ]
[ceph-bkp-osd01][DEBUG ] fs created label (null) on /dev/sdf1
[ceph-bkp-osd01][DEBUG ] nodesize 32768 leafsize 32768
sectorsize 4096 size 2.73TiB
[ceph-bkp-osd01][DEBUG ] Btrfs v3.12
[ceph-bkp-osd01][DEBUG ] The operation has completed
successfully.
[ceph_deploy.osd][DEBUG ] Host ceph-bkp-osd01 is now ready for
osd use.
ceph@cephbkdeploy01:~/desp-bkp-cluster$
ceph@cephbkdeploy01:~/desp-bkp-cluster$ ceph-deploy
--overwrite-conf disk activate --fs-type btrfs
ceph-bkp-osd01:sdf1:/dev/sdc1
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy
--overwrite-conf disk activate --fs-type btrfs
ceph-bkp-osd01:sdf1:/dev/sdc1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
ceph-bkp-osd01:/dev/sdf1:/dev/sdc1
[ceph-bkp-osd01][DEBUG ] connected to host: ceph-bkp-osd01
[ceph-bkp-osd01][DEBUG ] detect platform information from remote
host
[ceph-bkp-osd01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host ceph-bkp-osd01 disk
/dev/sdf1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[ceph-bkp-osd01][INFO ] Running command: sudo
ceph-disk-activate --mark-init upstart --mount /dev/sdf1
[ceph-bkp-osd01][WARNIN] libust[14025/14025]: Warning: HOME
environment variable not set. Disabling LTTng-UST per-user
tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] libust[14028/14028]: Warning: HOME
environment variable not set. Disabling LTTng-UST per-user
tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] got monmap epoch 1
[ceph-bkp-osd01][WARNIN] libust[14059/14059]: Warning: HOME
environment variable not set. Disabling LTTng-UST per-user
tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] 2014-10-31 17:00:10.936163 7ffb41d32900
-1 journal FileJournal::_open: disabling aio for non-block
journal. Use journal_force_aio to force use of aio anyway
[ceph-bkp-osd01][WARNIN] 2014-10-31 17:00:10.936221 7ffb41d32900
-1 journal check: ondisk fsid
00000000-0000-0000-0000-000000000000 doesn't match expected
6a26ef1f-6ece-4383-8304-7a8d064ef2b4, invalid (someone else's?)
journal
[ceph-bkp-osd01][WARNIN] 2014-10-31 17:00:10.936275 7ffb41d32900
-1 filestore(/var/lib/ceph/tmp/mnt.vt_waK) mkjournal error
creating journal on /var/lib/ceph/tmp/mnt.vt_waK/journal: (22)
Invalid argument
[ceph-bkp-osd01][WARNIN] 2014-10-31 17:00:10.936310 7ffb41d32900
-1 OSD::mkfs: ObjectStore::mkfs failed with error -22
[ceph-bkp-osd01][WARNIN] 2014-10-31 17:00:10.936389 7ffb41d32900
-1 ** ERROR: error creating empty object store in
/var/lib/ceph/tmp/mnt.vt_waK: (22) Invalid argument
[ceph-bkp-osd01][WARNIN] ERROR:ceph-disk:Failed to activate
[ceph-bkp-osd01][WARNIN] Traceback (most recent call last):
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line
2792, in <module>
[ceph-bkp-osd01][WARNIN] main()
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line
2770, in main
[ceph-bkp-osd01][WARNIN] args.func(args)
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line
2004, in main_activate
[ceph-bkp-osd01][WARNIN] init=args.mark_init,
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line
1778, in mount_activate
[ceph-bkp-osd01][WARNIN] (osd_id, cluster) = activate(path,
activate_key_template, init)
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line
1943, in activate
[ceph-bkp-osd01][WARNIN] keyring=keyring,
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line
1573, in mkfs
[ceph-bkp-osd01][WARNIN] '--keyring', os.path.join(path,
'keyring'),
[ceph-bkp-osd01][WARNIN] File "/usr/sbin/ceph-disk", line 316,
in command_check_call
[ceph-bkp-osd01][WARNIN] return
subprocess.check_call(arguments)
[ceph-bkp-osd01][WARNIN] File
"/usr/lib/python2.7/subprocess.py", line 540, in check_call
[ceph-bkp-osd01][WARNIN] raise CalledProcessError(retcode,
cmd)
[ceph-bkp-osd01][WARNIN] subprocess.CalledProcessError: Command
'['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey',
'-i', '7', '--monmap',
'/var/lib/ceph/tmp/mnt.vt_waK/activate.monmap', '--osd-data',
'/var/lib/ceph/tmp/mnt.vt_waK', '--osd-journal',
'/var/lib/ceph/tmp/mnt.vt_waK/journal', '--osd-uuid',
'6a26ef1f-6ece-4383-8304-7a8d064ef2b4', '--keyring',
'/var/lib/ceph/tmp/mnt.vt_waK/keyring']' returned non-zero exit
status 1
[ceph-bkp-osd01][ERROR ] RuntimeError: command returned non-zero
exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
ceph-disk-activate --mark-init upstart --mount /dev/sdf1
ceph@cephbkdeploy01:~/desp-bkp-cluster$
I'm using Ubuntu 14.04 LTS with kernel 3.13.0-24-generic and
Ceph version 0.87 (dev).
Any ideas?
Thanks in advance,
Best regards,
German Anders
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com