Hi Wido,
  The disk was empty; I checked that there were no remapped PGs before
running ceph-disk prepare. Should I just re-run ceph-disk prepare?
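
For reference, the check I ran was along these lines (a sketch from memory;
the exact commands may have differed slightly):

   ceph -s                                  # no remapped/degraded PGs in the status output
   ceph pg dump pgs_brief | grep remapped   # returned nothing
   ceph osd safe-to-destroy 0               # reported osd.0 as safe to destroy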

Regards, I

On Mon, 20 Nov 2017 at 14:12, Wido den Hollander <w...@42on.com> wrote:

>
> > On 20 November 2017 at 14:02 Iban Cabrillo <
> cabri...@ifca.unican.es> wrote:
> >
> >
> > Hi cephers,
> >   I was trying to migrate from Filestore to BlueStore following the
> > instructions, but after ceph-disk prepare the new OSD has not rejoined
> > the cluster:
> >
> >    [root@cephadm ~]# ceph osd tree
> > ID CLASS WEIGHT   TYPE NAME                STATUS    REWEIGHT PRI-AFF
> > -1       58.21509 root default
> > -7       58.21509     datacenter 10GbpsNet
> > -2       29.12000         host cephosd01
> >  1   hdd  3.64000             osd.1               up  1.00000 1.00000
> >  3   hdd  3.64000             osd.3               up  1.00000 1.00000
> >  5   hdd  3.64000             osd.5               up  1.00000 1.00000
> >  7   hdd  3.64000             osd.7               up  1.00000 1.00000
> >  9   hdd  3.64000             osd.9               up  1.00000 1.00000
> > 11   hdd  3.64000             osd.11              up  1.00000 1.00000
> > 13   hdd  3.64000             osd.13              up  1.00000 1.00000
> > 15   hdd  3.64000             osd.15              up  1.00000 1.00000
> > -3       29.09509         host cephosd02
> >  0   hdd  3.63689             osd.0        destroyed        0 1.00000
> >  2   hdd  3.63689             osd.2               up  1.00000 1.00000
> >  4   hdd  3.63689             osd.4               up  1.00000 1.00000
> >  6   hdd  3.63689             osd.6               up  1.00000 1.00000
> >  8   hdd  3.63689             osd.8               up  1.00000 1.00000
> > 10   hdd  3.63689             osd.10              up  1.00000 1.00000
> > 12   hdd  3.63689             osd.12              up  1.00000 1.00000
> > 14   hdd  3.63689             osd.14              up  1.00000 1.00000
> > -8              0     datacenter 1GbpsNet
> >
> >
> > The state is still 'destroyed':
> >
> > [root@cephosd02 ~]# ceph-disk prepare --bluestore /dev/sda --osd-id 0
> > The operation has completed successfully.
> > The operation has completed successfully.
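> >
> > (For reference, the full per-OSD replacement sequence I understand the
> > Luminous docs to prescribe is roughly the following; treat it as a
> > sketch, with the ID and device from my setup:)
> >
> > ID=0
> > DEVICE=/dev/sda
> > ceph osd out $ID
> > while ! ceph osd safe-to-destroy $ID; do sleep 60; done  # wait for data to re-replicate
> > systemctl stop ceph-osd@$ID
> > umount /var/lib/ceph/osd/ceph-$ID
> > ceph-disk zap $DEVICE                        # wipe partition table and signatures
> > ceph osd destroy $ID --yes-i-really-mean-it  # keep the OSD id and CRUSH position
> > ceph-disk prepare --bluestore $DEVICE --osd-id $ID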
>
> Did you wipe the disk yet? Make sure it's completely empty before you
> re-create the OSD.
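>
> Roughly, "completely empty" means something like this (destructive, so
> double-check the device first; ceph-disk zap alone is usually enough):
>
>    ceph-disk zap /dev/sda       # destroys the partition table and wipes partition starts
>    # or by hand:
>    wipefs --all /dev/sda        # clear filesystem/RAID signatures
>    sgdisk --zap-all /dev/sda    # destroy GPT and MBR structures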
>
> Wido
>
> > The operation has completed successfully.
> > The operation has completed successfully.
> > meta-data=/dev/sda1              isize=2048   agcount=4, agsize=6400 blks
> >          =                       sectsz=512   attr=2, projid32bit=1
> >          =                       crc=1        finobt=0, sparse=0
> > data     =                       bsize=4096   blocks=25600, imaxpct=25
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> > log      =internal log           bsize=4096   blocks=864, version=2
> >          =                       sectsz=512   sunit=0 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> >
> > The metadata was on an SSD disk.
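> >
> > (If it matters: to place the BlueStore db on the SSD explicitly,
> > ceph-disk accepts a --block.db argument; /dev/sdX below is illustrative,
> > not my actual device:)
> >
> > ceph-disk prepare --bluestore /dev/sda --block.db /dev/sdX --osd-id 0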
> >
> > In the logs I only see this:
> >
> > 2017-11-20 14:00:48.536252 7fc2d149dd00 -1  ** ERROR: unable to open OSD
> > superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
> > 2017-11-20 14:01:08.788158 7f4a9165fd00  0 set uid:gid to 167:167
> > (ceph:ceph)
> > 2017-11-20 14:01:08.788179 7f4a9165fd00  0 ceph version 12.2.0
> > (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc), process
> > (unknown), pid 115029
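> >
> > (The superblock error suggests the new data partition was never mounted
> > under /var/lib/ceph/osd/ceph-0. My next step is to check the activation
> > state along these lines; a sketch, output omitted:)
> >
> > ceph-disk list                # /dev/sda1 should show as "ceph data, prepared" or "active"
> > mount | grep ceph-0           # is the data partition actually mounted?
> > ceph-disk activate /dev/sda1  # try activating the prepared data partition by hand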
> >
> > Any advice?
> >
> > Regards, I
> >
> > --
> >
> > ############################################################################
> > Iban Cabrillo Bartolome
> > Instituto de Fisica de Cantabria (IFCA)
> > Santander, Spain
> > Tel: +34942200969
> > PGP PUBLIC KEY:
> > http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
> > ############################################################################
> > Bertrand Russell: "The trouble with the world is that the stupid are
> > cocksure and the intelligent are full of doubt."
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
