Ok, I understand.

And the same configuration has worked on your NVMe servers? If so, that's strange, but I think the Ceph developers can explain this part better than I can :-)

Regards,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devul...@mpsa.com)
Senior Linux System Engineer / Linux HPC Specialist
___________________________________________________________________

From: sandeep.cool...@gmail.com [mailto:sandeep.cool...@gmail.com]
Sent: Friday, December 16, 2016 09:45
To: LOIC DEVULDER - U329683 <loic.devul...@mpsa.com>
Subject: Re: [ceph-users] 2 OSD's per drive , unable to start the osd's

Hi Loic,

I want that kind of setup for the NVMe SSD drives; these drives give better performance when you use more than one OSD per NVMe SSD drive.

I was just testing the setup on virtual machines, because I installed the same setup on my server, which has NVMe SSDs.

Thanks,
Sandeep

On Fri, Dec 16, 2016 at 2:07 PM, LOIC DEVULDER <loic.devul...@mpsa.com> wrote:
Hi,

I’m not sure that having multiple OSDs on one drive is supported.
And also: why do you want this? It’s not good for performance and, more importantly, for data redundancy.

Regards,
___________________________________________________________________
PSA Groupe
Loïc Devulder (loic.devul...@mpsa.com)
Senior Linux System Engineer / Linux HPC Specialist
___________________________________________________________________

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of sandeep.cool...@gmail.com
Sent: Friday, December 16, 2016 09:23
To: ceph-users@lists.ceph.com
Subject: [ceph-users] 2 OSD's per drive , unable to start the osd's

Hi,

I was trying a scenario where I partitioned my drive (/dev/sdb) into 4 partitions (sdb1, sdb2, sdb3, sdb4) using the sgdisk utility:

# sgdisk -z /dev/sdb
# sgdisk -n 1:0:+1G -c 1:"ceph journal" /dev/sdb
# sgdisk -n 2:0:+1G -c 2:"ceph journal" /dev/sdb
# sgdisk -n 3:0:+4G -c 3:"ceph data" /dev/sdb
# sgdisk -n 4:0:+4G -c 4:"ceph data" /dev/sdb

I checked with lsblk, and the partitions were created as expected.
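lsblk shows the sizes but not the GPT partition type GUIDs, which is what ceph-disk looks at later. Assuming the same /dev/sdb as above, something like this prints the "Partition GUID code" for a given partition:

# sgdisk -i 3 /dev/sdb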

I'm using the ceph-disk command to create the OSDs:
# ceph-disk prepare --cluster ceph /dev/sdb3 /dev/sdb1
prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
prepare_device: Journal /dev/sdb1 was not prepared with ceph-disk. Symlinking directly.
set_data_partition: incorrect partition UUID: 0fc63daf-8483-4772-8e79-3d69d8477de4, expected ['4fbd7e29-9d25-41b8-afd0-5ec00ceff05d', '4fbd7e29-9d25-41b8-afd0-062c0ceff05d', '4fbd7e29-8ae0-4982-bf9d-5a8d867af560', '4fbd7e29-9d25-41b8-afd0-35865ceff05d']
meta-data=/dev/sdb3              isize=2048   agcount=4, agsize=261760 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=1047040, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=65536  ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
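
The "incorrect partition UUID" warning above seems to be the interesting part: the hand-made partitions carry the generic Linux filesystem type GUID (0fc63daf-8483-4772-8e79-3d69d8477de4) instead of one of the Ceph data type GUIDs that ceph-disk expects. As far as I understand, the Ceph udev rules key off those type GUIDs to chown the devices to ceph:ceph and to activate the OSD at boot. A rough sketch of tagging the partitions so udev and ceph-disk recognize them (assuming sdb3/sdb4 hold data and sdb1/sdb2 the journals, and picking the plain, non-dmcrypt data GUID from the expected list above; the journal type GUID here is from memory, so please double-check it):

# sgdisk -t 3:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb   # "ceph data" type GUID
# sgdisk -t 4:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
# sgdisk -t 1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb   # "ceph journal" type GUID
# sgdisk -t 2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
# partprobe /dev/sdb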


# chown ceph:ceph /dev/sdb*

# ceph-disk activate /dev/sdb3
got monmap epoch 1
added key for osd.2
Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@2.service to /usr/lib/systemd/system/ceph-osd@.service.
# lsblk
sdb                   8:16   0   10G  0 disk
├─sdb1                8:17   0    1G  0 part
├─sdb2                8:18   0    1G  0 part
├─sdb3                8:19   0    4G  0 part /var/lib/ceph/osd/ceph-2
└─sdb4                8:20   0    4G  0 part

# systemctl status ceph-osd@2.service
● ceph-osd@2.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-12-16 13:44:44 IST; 1min 38s ago
  Process: 4599 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster 
${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 4650 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@2.service
           └─4650 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph -...

Dec 16 13:46:19 admin ceph-osd[4650]: 2016-12-16 13:46:19.816627 7f1a7cd127...4)
Dec 16 13:46:19 admin ceph-osd[4650]: 2016-12-16 13:46:19.816652 7f1a7cd127...4)
Dec 16 13:46:20 admin ceph-osd[4650]: 2016-12-16 13:46:20.534610 7f1a54b2c7...9)
Dec 16 13:46:20 admin ceph-osd[4650]: 2016-12-16 13:46:20.534638 7f1a54b2c7...9)
Dec 16 13:46:20 admin ceph-osd[4650]: 2016-12-16 13:46:20.816934 7f1a7cd127...1)
Dec 16 13:46:20 admin ceph-osd[4650]: 2016-12-16 13:46:20.816979 7f1a7cd127...1)
Dec 16 13:46:21 admin ceph-osd[4650]: 2016-12-16 13:46:21.817323 7f1a7cd127...8)
Dec 16 13:46:21 admin ceph-osd[4650]: 2016-12-16 13:46:21.817436 7f1a7cd127...8)
Dec 16 13:46:22 admin ceph-osd[4650]: 2016-12-16 13:46:22.826281 7f1a7cd127...7)
Dec 16 13:46:22 admin ceph-osd[4650]: 2016-12-16 13:46:22.826334 7f1a7cd127...7)
Hint: Some lines were ellipsized, use -l to show in full.

But when I reboot the node, the OSD doesn't come up automatically!

# lsblk
sdb                   8:16   0   10G  0 disk
├─sdb1                8:17   0    1G  0 part
├─sdb2                8:18   0    1G  0 part
├─sdb3                8:19   0    4G  0 part
└─sdb4                8:20   0    4G  0 part
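
What this looks like to me: after a reboot the manual chown is gone and nothing remounts sdb3 under /var/lib/ceph/osd/ceph-2, because udev/ceph-disk only auto-activates partitions tagged with the Ceph type GUIDs. With the type GUIDs set as sketched earlier, re-triggering udev (or activating by hand) should bring the OSD back; a rough sequence, assuming osd.2 lives on /dev/sdb3:

# udevadm trigger --action=add --subsystem-match=block

or activate the data partition by hand:

# ceph-disk activate /dev/sdb3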

# systemctl status ceph-osd@2.service
● ceph-osd@2.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Fri 2016-12-16 13:48:52 IST; 2min 6s ago
  Process: 2491 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i 
--setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
  Process: 2446 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster 
${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 2491 (code=exited, status=1/FAILURE)

Dec 16 13:48:52 admin systemd[1]: ceph-osd@2.service: main process exited, code=exited, status=1/FAILURE
Dec 16 13:48:52 admin systemd[1]: Unit ceph-osd@2.service entered failed state.
Dec 16 13:48:52 admin systemd[1]: ceph-osd@2.service failed.
Dec 16 13:48:52 admin systemd[1]: ceph-osd@2.service holdoff time over, scheduling restart.
Dec 16 13:48:52 admin systemd[1]: start request repeated too quickly for ceph-osd@2.service
Dec 16 13:48:52 admin systemd[1]: Failed to start Ceph object storage daemon.
Dec 16 13:48:52 admin systemd[1]: Unit ceph-osd@2.service entered failed state.
Dec 16 13:48:52 admin systemd[1]: ceph-osd@2.service failed.

# systemctl start ceph-osd@2.service
Job for ceph-osd@2.service failed because start of the service was attempted too often. See "systemctl status ceph-osd@2.service" and "journalctl -xe" for details.
To force a start use "systemctl reset-failed ceph-osd@2.service" followed by "systemctl start ceph-osd@2.service" again.
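
In practice that means running the two commands the message refers to; note this only clears the start-rate limit, so the unit will keep failing until the partition ownership/mount problem is fixed:

# systemctl reset-failed ceph-osd@2.service
# systemctl start ceph-osd@2.service
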
But if I do it with a single OSD per drive, it works fine.

Has anyone else faced the same issue?

Thanks,
Sandeep





_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
