>>> It looks like at some point the filesystem is not passed to the options.
>>> Would you mind running the `ceph-disk-prepare` command again but with
>>> the --verbose flag?
>>> I think that, from the output above (correct me if I am mistaken), it would
>>> be something like:
>>>
>>> ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1

Hi.
If I run:
ceph-deploy disk zap ceph001:sdaa ceph001:sda1
and then
ceph-disk -v prepare /dev/sdaa /dev/sda1
I get the same errors:
======================================================
root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1             isize=2048   agcount=32, agsize=22892700 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=732566385, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=357698, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.EGTIq2 with 
options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
       use -t <type> to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2']' returned non-zero exit 
status 32
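
As an aside, the mount error itself points at wipefs(8). I have not tried a
manual cleanup yet, but I assume it would be something along these lines: list
any leftover filesystem signatures on the new partition and erase them before
retrying (device name taken from the failing mount above):

root@ceph001:~# wipefs /dev/sdaa1       # list remaining signatures
root@ceph001:~# wipefs -a /dev/sdaa1    # erase all detected signatures

Is that the right direction?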

If I execute this command separately for each disk, it looks OK:

For sdaa:

root@ceph001:~# ceph-disk -v prepare /dev/sdaa
INFO:ceph-disk:Will colocate journal with data on /dev/sdaa
DEBUG:ceph-disk:Creating journal partition num 2 size 1024 on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Journal is GPT partition 
/dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 2097153 to 2099200 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1             isize=2048   agcount=32, agsize=22884508 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=732304241, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=357570, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.K3q9v5 with 
options noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.K3q9v5
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.K3q9v5/journal -> 
/dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.K3q9v5
The operation has completed successfully.
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdaa

For sda1:

root@ceph001:~# ceph-disk -v prepare /dev/sda1
DEBUG:ceph-disk:OSD data device /dev/sda1 is a partition
DEBUG:ceph-disk:Creating xfs fs on /dev/sda1
meta-data=/dev/sda1              isize=2048   agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.G30zPD with options 
noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sda1

From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Tuesday, August 13, 2013 11:14 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk



On Tue, Aug 13, 2013 at 3:21 AM, Pavel Timoschenkov
<pa...@bayonetteas.onmicrosoft.com> wrote:
Hi.
Yes, I zapped all disks before.

More about my situation:
sdaa - one of the data disks: 3 TB, with a GPT partition table.
sda - an SSD drive with manually created 10 GB partitions for journals, with an
MBR partition table (see the sketch after the fdisk output below).
===================================
fdisk -l /dev/sda

Disk /dev/sda: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders, total 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00033624

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048    19531775     9764864   83  Linux
/dev/sda2        19531776    39061503     9764864   83  Linux
/dev/sda3        39061504    58593279     9765888   83  Linux
/dev/sda4        78125056    97656831     9765888   83  Linux

===================================
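
For completeness: the journal partitions on /dev/sda were created by hand.
Purely as an illustration (not the exact commands I used), creating 10 GB MBR
partitions like that with parted would look roughly like this:

root@ceph001:~# parted -s /dev/sda mklabel msdos               # new MBR label, wipes the disk
root@ceph001:~# parted -s /dev/sda mkpart primary 1MiB 10GiB   # first journal partition
root@ceph001:~# parted -s /dev/sda mkpart primary 10GiB 20GiB  # second journal partition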

If I execute ceph-deploy osd prepare without the journal argument, it works:


ceph@ceph-admin:~$ ceph-deploy disk zap ceph001:sdaa ceph001:sda1
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on ceph001
[ceph_deploy.osd][DEBUG ] zapping /dev/sda1 on ceph001

ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaa:
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal None 
activate False

root@ceph001:~# gdisk -l /dev/sdaa
GPT fdisk (gdisk) version 0.8.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdaa: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 575ACF17-756D-47EC-828B-2E0A0B8ED757
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 4061 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1         2099200      5860533134   2.7 TiB     FFFF  ceph data
   2            2048         2097152   1023.0 MiB  FFFF  ceph journal

Problems start when I try to create the journal on a separate drive:

ceph@ceph-admin:~$ ceph-deploy disk zap ceph001:sdaa ceph001:sda1
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on ceph001
[ceph_deploy.osd][DEBUG ] zapping /dev/sda1 on ceph001

ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa:sda1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
ceph001:/dev/sdaa:/dev/sda1
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal 
/dev/sda1 activate False
[ceph_deploy.osd][ERROR ] ceph-disk-prepare -- /dev/sdaa /dev/sda1 returned 1
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=/dev/sdaa1             isize=2048   agcount=32, agsize=22892700 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=732566385, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=357698, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
mount: /dev/sdaa1: more filesystems detected. This should not happen,
       use -t <type> to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.fZQxiz']' returned non-zero exit 
status 32

ceph-deploy: Failed to create 1 OSDs

It looks like at some point the filesystem is not passed to the options. Would
you mind running the `ceph-disk-prepare` command again but with
the --verbose flag?
I think that, from the output above (correct me if I am mistaken), it would be
something like:

ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1

And paste the results back so we can take a look?
-----Original Message-----
From: Samuel Just [mailto:sam.j...@inktank.com]
Sent: Monday, August 12, 2013 11:39 PM
To: Pavel Timoschenkov
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk

Did you try using ceph-deploy disk zap ceph001:sdaa first?
-Sam

On Mon, Aug 12, 2013 at 6:21 AM, Pavel Timoschenkov
<pa...@bayonetteas.onmicrosoft.com> wrote:
> Hi.
>
> I have some problems with creating the journal on a separate disk using the
> ceph-deploy osd prepare command.
>
> When I try to execute the following command:
>
> ceph-deploy osd prepare ceph001:sdaa:sda1
>
> where:
>
> sdaa - disk for ceph data
>
> sda1 - partition on ssd drive for journal
>
> I get the following errors:
>
> ======================================================================
> ==================================
>
> ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa:sda1
>
> ceph-disk-prepare -- /dev/sdaa /dev/sda1 returned 1
>
> Information: Moved requested sector from 34 to 2048 in
>
> order to align on 2048-sector boundaries.
>
> The operation has completed successfully.
>
> meta-data=/dev/sdaa1             isize=2048   agcount=32, agsize=22892700
> blks
>
>          =                       sectsz=512   attr=2, projid32bit=0
>
> data     =                       bsize=4096   blocks=732566385, imaxpct=5
>
>          =                       sunit=0      swidth=0 blks
>
> naming   =version 2              bsize=4096   ascii-ci=0
>
> log      =internal log           bsize=4096   blocks=357698, version=2
>
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
>
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
>
> WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the
> same device as the osd data
>
> mount: /dev/sdaa1: more filesystems detected. This should not happen,
>
>        use -t <type> to explicitly specify the filesystem type or
>
>        use wipefs(8) to clean up the device.
>
>
>
> mount: you must specify the filesystem type
>
> ceph-disk: Mounting filesystem failed: Command '['mount', '-o',
> 'noatime', '--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.ek6mog']'
> returned non-zero exit status 32
>
>
>
> Has anyone had a similar problem?
>
> Thanks for the help
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
