Re: [ceph-users] testing ceph

2013-11-04 Thread charles L
er1][ERROR ][server1][ERROR ][ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sda /dev/sdj1
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
> Date: Thu, 31 Oct 2013 10:55:56 +
> From: joao.l...@inktank.com
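For reference, a ceph-deploy call of roughly the following shape would produce the ceph-disk-prepare invocation shown in that error. The exact original command is not quoted in the message, so this is a hypothetical reconstruction with the host and device names taken from the log:

  # hypothetical reconstruction: prepare /dev/sda on server1 with its journal on /dev/sdj1
  ceph-deploy osd prepare server1:/dev/sda:/dev/sdj1
  # if the data disk carries leftover partitions or signatures, zapping it first is a common step
  ceph-deploy disk zap server1:/dev/sda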

Re: [ceph-users] testing ceph

2013-11-01 Thread charles L
d some more clarification with your setup. Did you mean:
> 1) There is 1 SSD (60 GB) on each server, i.e. 6 SSDs across all 6 servers?
> 2) Your osd.3, osd.4, osd.5 use the same journal (/dev/sdf2)?
> Regards
> Karan Singh
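The layout being asked about here is usually one journal partition per OSD rather than three OSDs sharing /dev/sdf2. As a sketch only (server and device names are assumptions, not taken from the thread), the ceph-deploy mapping would look like:

  # one SSD journal partition per OSD (assumed host and device names)
  ceph-deploy osd prepare server2:/dev/sdc:/dev/sdf1
  ceph-deploy osd prepare server2:/dev/sdd:/dev/sdf2
  ceph-deploy osd prepare server2:/dev/sde:/dev/sdf3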

[ceph-users] testing ceph

2013-10-31 Thread charles L
Hi, please, is this a good setup for a production environment test of Ceph? My focus is on the SSD ... should it be partitioned (sdf1, 2, 3, 4) and shared by the four OSDs on a host, or is it a better configuration for the SSD to be just one partition (sdf1) that all OSDs use? My set
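One way to set up the per-OSD-journal variant, sketched with parted and assuming /dev/sdf is the 60 GB SSD (device name and sizes are assumptions): split the SSD into four equal partitions and give each OSD its own journal partition.

  # sketch only: four equal journal partitions on the shared SSD
  parted -s /dev/sdf mklabel gpt
  parted -s /dev/sdf mkpart journal-1 0% 25%
  parted -s /dev/sdf mkpart journal-2 25% 50%
  parted -s /dev/sdf mkpart journal-3 50% 75%
  parted -s /dev/sdf mkpart journal-4 75% 100%
  # then pair each data disk with its own journal partition, e.g.
  ceph-deploy osd prepare host1:/dev/sdb:/dev/sdf1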

[ceph-users] errno=Connection refused

2013-03-26 Thread charles L
Hi, can somebody help?
git clone --recursive https://github.com/ceph/ceph.git
Cloning into 'ceph'...
remote: Counting objects: 179817, done.
remote: Compressing objects: 100% (34122/34122), done.
remote: Total 179817 (delta 146077), reused 177377 (delta 143959)
Receiving objects: 100% (179817/17
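If the "Connection refused" happens while the recursive clone fetches submodules, a common cause at the time was the git:// protocol (TCP 9418) being blocked; rewriting those URLs to https is a frequently suggested workaround. This is an assumption about the cause, since the full error is not shown:

  # rewrite git:// submodule URLs to https, then retry the recursive clone
  git config --global url."https://github.com/".insteadOf git://github.com/
  git clone --recursive https://github.com/ceph/ceph.git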

[ceph-users] debug_osd on/off on an active ceph cluster

2013-03-07 Thread charles L
Please, how do I turn debug logging on and off for an OSD on a working Ceph cluster? I don't want to restart the machines. Is this command OK? ceph-osd -i 0 --debug_ms 5 --debug_osd 5 --debug_filestore 5 --debug_journal 5 -d ??? Thanks.
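For what it's worth, debug levels can be changed on a running OSD without restarting the daemon by injecting the settings through the monitor; a minimal sketch for osd.0:

  # raise debug levels on the running osd.0
  ceph tell osd.0 injectargs '--debug_osd 5 --debug_ms 5 --debug_filestore 5 --debug_journal 5'
  # and set them back down afterwards
  ceph tell osd.0 injectargs '--debug_osd 0 --debug_ms 0 --debug_filestore 0 --debug_journal 0'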