I have a similar problem while trying to run the osd prepare command:

ceph version: infernalis 9.2.0

disk: /dev/sdf (745.2G)
          /dev/sdf1 740.2G
          /dev/sdf2 5G

# parted /dev/sdf
GNU Parted 2.3
Using /dev/sdf
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA INTEL SSDSC2BB80 (scsi)
Disk /dev/sdf: 800GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name          Flags
 2      1049kB  5369MB  5368MB               ceph journal
 1      5370MB  800GB   795GB   btrfs        ceph data


cibn05:


$ ceph-deploy osd prepare --fs-type btrfs cibn05:sdf
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /usr/local/bin/ceph-deploy osd
prepare --fs-type btrfs cibn05:sdf
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('cibn05',
'/dev/sdf', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               :
/etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
<ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbb1df85830>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : btrfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at
0x7fbb1e1d9050>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cibn05:/dev/sdf:
[cibn05][DEBUG ] connection detected need for sudo
[cibn05][DEBUG ] connected to host: cibn05
[cibn05][DEBUG ] detect platform information from remote host
[cibn05][DEBUG ] detect machine type
[cibn05][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to cibn05
[cibn05][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cibn05][INFO  ] Running command: sudo udevadm trigger
--subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host cibn05 disk /dev/sdf journal None
activate False
[cibn05][INFO  ] Running command: sudo ceph-disk -v prepare --cluster ceph
--fs-type btrfs -- /dev/sdf
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--check-allows-journal -i 0 --cluster ceph
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--check-wants-journal -i 0 --cluster ceph
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--check-needs-journal -i 0 --cluster ceph
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is
/sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is
/sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is
/sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf1 uuid path is
/sys/dev/block/8:81/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf2 uuid path is
/sys/dev/block/8:82/dm/uuid
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=fsid
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mkfs_options_btrfs
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_btrfs
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mount_options_btrfs
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_fs_mount_options_btrfs
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=osd_journal_size
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is
/sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdf
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is
/sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is
/sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120
on /dev/sdf
[cibn05][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk
--new=2:0:5120M --change-name=2:ceph journal
--partition-guid=2:6a9a83f1-2196-4833-a4c8-8f3a424de54f
--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdf
[cibn05][WARNIN] Could not create partition 2 from 10485761 to 10485760
[cibn05][WARNIN] Error encountered; not saving changes.
[cibn05][WARNIN] Traceback (most recent call last):
[cibn05][WARNIN]   File "/usr/sbin/ceph-disk", line 3576, in <module>
[cibn05][WARNIN]     main(sys.argv[1:])
[cibn05][WARNIN]   File "/usr/sbin/ceph-disk", line 3530, in main
[cibn05][WARNIN]     args.func(args)
[cibn05][WARNIN]   File "/usr/sbin/ceph-disk", line 1863, in main_prepare
[cibn05][WARNIN]     luks=luks
[cibn05][WARNIN]   File "/usr/sbin/ceph-disk", line 1465, in prepare_journal
[cibn05][WARNIN]     return prepare_journal_dev(data, journal,
journal_size, journal_uuid, journal_dm_keypath, cryptsetup_parameters, luks)
[cibn05][WARNIN]   File "/usr/sbin/ceph-disk", line 1419, in
prepare_journal_dev
[cibn05][WARNIN]     raise Error(e)
[cibn05][WARNIN] __main__.Error: Error: Command '['/sbin/sgdisk',
'--new=2:0:5120M', '--change-name=2:ceph journal',
'--partition-guid=2:6a9a83f1-2196-4833-a4c8-8f3a424de54f',
'--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--',
'/dev/sdf']' returned non-zero exit status 4
[cibn05][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare
--cluster ceph --fs-type btrfs -- /dev/sdf
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
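For reference, /dev/sdf already carries a "ceph journal" partition 2 and a
"ceph data" partition 1 from an earlier attempt (see the parted output above),
so I assume sgdisk exits with status 4 here simply because there is no free
slot left in which to create a new partition 2. If that is indeed the cause,
wiping the old layout before re-running prepare should get past it; roughly
(untested on my side, and destructive to anything still on /dev/sdf):

# from the admin node
$ ceph-deploy disk zap cibn05:sdf
$ ceph-deploy osd prepare --fs-type btrfs cibn05:sdf

# or directly on cibn05, to inspect and clear the GPT by hand
$ sudo /sbin/sgdisk --print /dev/sdf
$ sudo /sbin/sgdisk --zap-all -- /dev/sdf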

Any ideas?

Thanks in advance,


*German*

2015-11-19 10:46 GMT-03:00 David Riedl <david.ri...@wingcon.com>:

> Thanks again! It works now.
> But now I have another problem.
>
> The daemons are working now, even after a restart. But the OSDs won't talk
> to the rest of the cluster.
>
> osdmap e5058: 12 osds: 8 up, 8 in;
>
> The command
> # ceph osd in osd.1
> tells me
> marked in osd.1.
>
> # ceph status
> tells me
> 1/9 in osds are down
> but that disappears after a while.
>
> Right now I have mixed OSDs from infernalis and the latest hammer release
> (that may be the most crucial piece of information and even the cause of
> the problem, but I am not sure).
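> For reference, one way I could double-check which release each daemon
> actually reports (assuming the surviving OSDs are reachable) would be:
>
> # ceph tell osd.* version
> # ceph osd tree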
>
> Sorry to bother you, but this is my second day with these problems and it's
> nerve-wracking.
>
> Regards
>
> David
>
>
> On 19.11.2015 14:29, Mykola Dvornik wrote:
>
> I am also using CentOS 7.x. /usr/lib/udev/rules.d/ should be fine. If not,
> one can always symlink it into /etc/udev/rules.d/.
>
> On 19 November 2015 at 14:13, David Riedl <david.ri...@wingcon.com> wrote:
>
>> Thanks for the fix!
>> Two questions though:
>> Is that the right place for the udev rule? I have CentOS 7. The folder
>> exists, but all the other udev rules are in /usr/lib/udev/rules.d/.
>> Can I just create a new file named "89-ceph-journal.rules"  in the
>> /usr/lib/udev/rules.d/ folder?
>>
>>
>> Regards
>>
>> David
>>
>>
>> On 19.11.2015 14:02, Mykola Dvornik wrote:
>>
>> cat /etc/udev/rules.d/89-ceph-journal.rules
>>
>> KERNEL=="sdd?", SUBSYSTEM=="block", OWNER="ceph", GROUP="disk", MODE="0660"
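>> After dropping that rule in place it presumably still needs to be reloaded
>> and re-applied to the existing device, e.g.:
>>
>> # udevadm control --reload-rules
>> # udevadm trigger --subsystem-match=block --action=add
>> # ls -l /dev/sdd1   (should now show owner ceph, group disk, mode 0660)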
>>
>> On 19 November 2015 at 13:54, Mykola <mykola.dvor...@gmail.com> wrote:
>>
>>> I am afraid one would need a udev rule to make it persistent.
>>>
>>> Sent from Outlook Mail for Windows 10 phone
>>>
>>>
>>>
>>>
>>> *From: *David Riedl <david.ri...@wingcon.com>
>>> *Sent: *Thursday, November 19, 2015 1:42 PM
>>> *To: *ceph-us...@ceph.com
>>> *Subject: *Re: [ceph-users] Can't activate osd in infernalis
>>>
>>>
>>>
>>> I fixed the issue and opened a ticket on the ceph-deploy bug tracker:
>>> http://tracker.ceph.com/issues/13833
>>>
>>> tl;dr:
>>>
>>> change the ownership of the SSD journal partition with
>>>
>>> chown ceph:ceph /dev/sdd1
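>>> Before retrying activate it is worth confirming the new ownership stuck
>>> (assuming the journal partition really is /dev/sdd1):
>>> # ls -l /dev/sdd1
>>> should now list ceph as the owner.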
>>>
>>>
>>>
>>> On 19.11.2015 11:38, David Riedl wrote:
>>>
>>> > Hi everyone.
>>> > I updated one of my hammer osd nodes to infernalis today.
>>> > After many problems with the upgrading process of the running OSDs, I
>>> > decided to wipe them and start anew.
>>> > I reinstalled all packages and deleted all partitions on the OSDs and
>>> > the SSD journal drive.
>>> > I zapped the disks with ceph-deploy and also prepared them with
>>> > ceph-deploy.
>>> > Selinux state is enabled (disabling it didn't help though).
>>> >
>>> > After executing "ceph-deploy osd activate ceph01:/dev/sda1:/dev/sdd1"
>>> > I get the following error message from ceph-deploy:
>>> >
>>> > [ceph01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph
>>> > --cluster ceph --name client.bootstrap-osd --keyring
>>> > /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
>>> > /var/lib/ceph/tmp/mnt.pmHRuu/activate.monmap
>>> > [ceph01][WARNIN] 2015-11-19 11:22:53.974765 7f1a06852700  0 --
>>> > :/3225863658 >> 10.20.60.10:6789/0 pipe(0x7f19f8062590 sd=4 :0 s=1
>>> > pgs=0 cs=0 l=1 c=0x7f19f805c1b0).fault
>>> > [ceph01][WARNIN] got monmap epoch 16
>>> > [ceph01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
>>> > --cluster ceph --mkfs --mkkey -i 0 --monmap
>>> > /var/lib/ceph/tmp/mnt.pmHRuu/activate.monmap --osd-data
>>> > /var/lib/ceph/tmp/mnt.pmHRuu --osd-journal
>>> > /var/lib/ceph/tmp/mnt.pmHRuu/journal --osd-uuid
>>> > de162e24-16b6-4796-b6b9-774fdb8ec234 --keyring
>>> > /var/lib/ceph/tmp/mnt.pmHRuu/keyring --setuser ceph --setgroup ceph
>>> > [ceph01][WARNIN] 2015-11-19 11:22:57.237096 7fb458bb7900 -1
>>> > filestore(/var/lib/ceph/tmp/mnt.pmHRuu) mkjournal error creating
>>> > journal on /var/lib/ceph/tmp/mnt.pmHRuu/journal: (13) Permission denied
>>> > [ceph01][WARNIN] 2015-11-19 11:22:57.237118 7fb458bb7900 -1 OSD::mkfs:
>>> > ObjectStore::mkfs failed with error -13
>>> > [ceph01][WARNIN] 2015-11-19 11:22:57.237157 7fb458bb7900 -1  ** ERROR:
>>> > error creating empty object store in /var/lib/ceph/tmp/mnt.pmHRuu:
>>> > (13) Permission denied
>>> > [ceph01][WARNIN] ERROR:ceph-disk:Failed to activate
>>> > [ceph01][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.pmHRuu
>>> > [ceph01][WARNIN] INFO:ceph-disk:Running command: /bin/umount --
>>> > /var/lib/ceph/tmp/mnt.pmHRuu
>>> > [ceph01][WARNIN] Traceback (most recent call last):
>>> > [ceph01][WARNIN]   File "/usr/sbin/ceph-disk", line 3576, in <module>
>>> > [ceph01][WARNIN]     main(sys.argv[1:])
>>> > [ceph01][WARNIN]   File "/usr/sbin/ceph-disk", line 3530, in main
>>> > [ceph01][WARNIN]     args.func(args)
>>> > [ceph01][WARNIN]   File "/usr/sbin/ceph-disk", line 2424, in
>>> > main_activate
>>> > [ceph01][WARNIN]     dmcrypt_key_dir=args.dmcrypt_key_dir,
>>> > [ceph01][WARNIN]   File "/usr/sbin/ceph-disk", line 2197, in
>>> > mount_activate
>>> > [ceph01][WARNIN]     (osd_id, cluster) = activate(path,
>>> > activate_key_template, init)
>>> > [ceph01][WARNIN]   File "/usr/sbin/ceph-disk", line 2360, in activate
>>> > [ceph01][WARNIN]     keyring=keyring,
>>> > [ceph01][WARNIN]   File "/usr/sbin/ceph-disk", line 1950, in mkfs
>>> > [ceph01][WARNIN]     '--setgroup', get_ceph_user(),
>>> > [ceph01][WARNIN]   File "/usr/sbin/ceph-disk", line 349, in
>>> > command_check_call
>>> > [ceph01][WARNIN]     return subprocess.check_call(arguments)
>>> > [ceph01][WARNIN]   File "/usr/lib64/python2.7/subprocess.py", line
>>> > 542, in check_call
>>> > [ceph01][WARNIN]     raise CalledProcessError(retcode, cmd)
>>> > [ceph01][WARNIN] subprocess.CalledProcessError: Command
>>> > '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i',
>>> > '0', '--monmap', '/var/lib/ceph/tmp/mnt.pmHRuu/activate.monmap',
>>> > '--osd-data', '/var/lib/ceph/tmp/mnt.pmHRuu', '--osd-journal',
>>> > '/var/lib/ceph/tmp/mnt.pmHRuu/journal', '--osd-uuid',
>>> > 'de162e24-16b6-4796-b6b9-774fdb8ec234', '--keyring',
>>> > '/var/lib/ceph/tmp/mnt.pmHRuu/keyring', '--setuser', 'ceph',
>>> > '--setgroup', 'ceph']' returned non-zero exit status 1
>>> > [ceph01][ERROR ] RuntimeError: command returned non-zero exit status: 1
>>> > [ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
>>> > ceph-disk -v activate --mark-init systemd --mount /dev/sda1
>>> >
>>> > The output of ls -lahn in /var/lib/ceph/ is
>>> >
>>> > drwxr-x---.  9 167 167 4,0K 19. Nov 10:32 .
>>> > drwxr-xr-x. 28   0   0 4,0K 19. Nov 11:14 ..
>>> > drwxr-x---.  2 167 167    6 10. Nov 13:06 bootstrap-mds
>>> > drwxr-x---.  2 167 167   25 19. Nov 10:48 bootstrap-osd
>>> > drwxr-x---.  2 167 167    6 10. Nov 13:06 bootstrap-rgw
>>> > drwxr-x---.  2 167 167    6 10. Nov 13:06 mds
>>> > drwxr-x---.  2 167 167    6 10. Nov 13:06 mon
>>> > drwxr-x---.  2 167 167    6 10. Nov 13:06 osd
>>> > drwxr-x---.  2 167 167   65 19. Nov 11:22 tmp
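>>> > As far as I know, 167 is just the uid/gid that the infernalis packages
>>> > reserve for the ceph user on CentOS; assuming the user exists, the
>>> > mapping can be checked with:
>>> > # getent passwd ceph
>>> > # id ceph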
>>> >
>>> > I hope someone can help me, I am really lost right now.
>>> >
>>
>> --
>>  Mykola
>>
>>
>
>
> --
>  Mykola
>
>
> --
> Kind regards
>
> David Riedl
>
>
>
> WINGcon GmbH Wireless New Generation - Consulting & Solutions
>
> Phone: +49 (0) 7543 9661 - 26
> E-Mail: david.ri...@wingcon.com
> Web: http://www.wingcon.com
>
> Registered office: Langenargen
> Register court: Ulm, HRB 632019
> VAT ID: DE232931635, WEEE ID: DE74015979
> Managing directors: Norbert Schäfer, Fritz R. Paul
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
