Please, can somebody help? I'm getting this error:
ceph@CephAdmin:~$ ceph-deploy osd create server1:sda:/dev/sdj1
[ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy osd create server1:sda:/dev/sdj1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks server1:/dev/sda:/dev/sdj1
[server1][DEBUG ] connected to host: server1
[server1][DEBUG ] detect platform information from remote host
[server1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to server1
[server1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[server1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host server1 disk /dev/sda journal /dev/sdj1 activate True
[server1][INFO  ] Running command: sudo ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sda /dev/sdj1
[server1][ERROR ] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[server1][ERROR ] Could not create partition 1 from 34 to 2047
[server1][ERROR ] Error encountered; not saving changes.
[server1][ERROR ] ceph-disk: Error: Command '['sgdisk', '--largest-new=1', '--change-name=1:ceph data', '--partition-guid=1:d3ca8a92-7ba5-412e-abf5-06af958b788d', '--typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be', '--', '/dev/sda']' returned non-zero exit status 4
[server1][ERROR ] Traceback (most recent call last):
[server1][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/process.py", line 68, in run
[server1][ERROR ]     reporting(conn, result, timeout)
[server1][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/log.py", line 13, in reporting
[server1][ERROR ]     received = result.receive(timeout)
[server1][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 455, in receive
[server1][ERROR ]     raise self._getremoteerror() or EOFError()
[server1][ERROR ] RemoteError: Traceback (most recent call last):
[server1][ERROR ]   File "<string>", line 806, in executetask
[server1][ERROR ]   File "", line 35, in _remote_run
[server1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[server1][ERROR ]
[server1][ERROR ]
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sda /dev/sdj1
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
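
The step that fails is the sgdisk call that ceph-disk runs to create the OSD data partition: "Could not create partition 1 from 34 to 2047" usually means the disk already carries a partition table or leftover partition data, so there is no free room for a new partition. A minimal sketch of a retry, assuming /dev/sda on server1 really is meant to become an OSD disk and everything on it can be destroyed:

  # Wipe the existing GPT/MBR data on the OSD disk -- this destroys its contents
  ceph-deploy disk zap server1:sda

  # Retry OSD creation with the SSD partition as the journal
  ceph-deploy osd create server1:sda:/dev/sdj1

Note that in the setup described further down this thread, sda is the 250 GB boot/OS disk and the OSD disks are sdb-sde, so it is worth double-checking the device name before zapping anything.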

> Date: Thu, 31 Oct 2013 10:55:56 +0000
> From: joao.l...@inktank.com
> To: charlesboy...@hotmail.com; ceph-de...@vger.kernel.org
> Subject: Re: testing ceph
> 
> On 10/31/2013 04:54 AM, charles L wrote:
> > Hi,
> > Please, is this a good setup for a production-environment test of Ceph? My
> > focus is on the SSD: should it be partitioned (sdf1, sdf2, sdf3, sdf4) and
> > shared by the four OSDs on a host, or is it a better configuration for the
> > SSD to be just one partition (sdf1) that all OSDs use?
> > My setup:
> > - 6 servers, each with one 250 GB boot disk for the OS (sda),
> >   four 2 TB disks for the OSDs (sdb-sde), i.e. 6 x 4 = 24 OSD disks in total,
> >   and one 60 GB SSD for the OSD journal (sdf).
> > - RAM = 32 GB on each server, with a 2 Gb network link.
> > - Hostnames: server1-server6.
> 
> Charles,
> 
> What you are describing in the ceph.conf below is definitely not a good 
> idea.  If you really want to use just one SSD and share it across 
> multiple OSDs, then you have two possible approaches:
> 
> - partition that disk and assign a *different* partition to each OSD; or
> - keep only one partition, format it with some filesystem, and assign a 
> *different* journal file within that fs to each OSD.
> 
> What you are describing has you using the same partition for all OSDs. 
> This will likely create issues due to multiple OSDs writing and reading 
> from a single journal.  TBH I'm not familiar enough with the journal 
> mechanism to know whether the OSDs will detect that situation.
> 
>    -Joao
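
For illustration, the first approach could be set up by splitting the 60 GB journal SSD into four partitions, one per OSD on that host. This is only a sketch; the roughly 15 GB partition sizes are an assumption, not something stated in the thread:

  # Carve four journal partitions out of the SSD (run on each OSD host)
  sgdisk --new=1:0:+15G /dev/sdf
  sgdisk --new=2:0:+15G /dev/sdf
  sgdisk --new=3:0:+15G /dev/sdf
  sgdisk --largest-new=4 /dev/sdf    # give the remaining space to the fourth journal

The second approach would instead keep a single sdf1, format it with a filesystem, and point each OSD's "osd journal" at a different file on that filesystem, which is simpler to manage but generally somewhat slower than raw partitions.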
> 
> >
> > [osd.0]
> > host = server1
> > devs = /dev/sdb
> > osd journal = /dev/sdf1
> >
> > [osd.1]
> > host = server1
> > devs = /dev/sdc
> > osd journal = /dev/sdf2
> >
> > [osd.3]
> > host = server1
> > devs = /dev/sdd
> > osd journal = /dev/sdf2
> >
> > [osd.4]
> > host = server1
> > devs = /dev/sde
> > osd journal = /dev/sdf2
> >
> > [osd.5]
> > host = server2
> > devs = /dev/sdb
> > osd journal = /dev/sdf2
> > ...
> > [osd.23]
> > host = server6
> > devs = /dev/sde
> > osd journal = /dev/sdf2
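
For comparison, the server1 entries rewritten along the lines of Joao's first suggestion, with each OSD given its own journal partition (the sdf1-sdf4 layout is assumed, matching the partitioning sketch above), might look like this:

  [osd.0]
  host = server1
  devs = /dev/sdb
  osd journal = /dev/sdf1

  [osd.1]
  host = server1
  devs = /dev/sdc
  osd journal = /dev/sdf2

  [osd.3]
  host = server1
  devs = /dev/sdd
  osd journal = /dev/sdf3

  [osd.4]
  host = server1
  devs = /dev/sde
  osd journal = /dev/sdf4

The same pattern would repeat on server2 through server6, each host using the partitions on its own local sdf.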
> >
> > Thanks.
> 
> 
> -- 
> Joao Eduardo Luis
> Software Engineer | http://inktank.com | http://ceph.com