Some more info about this... The subject should have been "journal on another device". The issue also occurs when using another disk to hold the journal. If doing something like 'ceph-deploy node:sda:sdk', a subsequent run like 'ceph-deploy node:sdb:sdk' will show the error for sdb's osd. If doing 'ceph-deploy node:sda:sdk node:sdb:sdk node:sdc:sdk [...]' the first 2 osds will be created and launched fine; sdc's and any others won't.
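For reference, a rough diagnostic sketch (not from the thread itself; the placeholder names node, sda-sdc and sdk are just the ones used above): checking the journal SSD's partition table between runs should show whether ceph-deploy actually allocates a third journal partition, or whether the third osd ends up being handed a journal it does not own, which is what the fsid mismatch in the quoted log suggests.

$> sgdisk --print /dev/sdk                  # partition table on the shared journal SSD
$> ceph-deploy osd prepare node:sda:sdk
$> sgdisk --print /dev/sdk                  # expect one new journal partition
$> ceph-deploy osd prepare node:sdb:sdk
$> sgdisk --print /dev/sdk                  # expect a second journal partition
$> ceph-deploy osd prepare node:sdc:sdk
$> sgdisk --print /dev/sdk                  # if no third partition appears, sdc's osd
                                            # is pointed at someone else's journal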
Thanks.

On Wed, Aug 7, 2013 at 10:55 AM, Joao Pedras <jpped...@gmail.com> wrote:
> Hello Tren,
>
> It is indeed:
>
> $> sestatus
> SELinux status:                 disabled
>
> Thanks,
>
>
> On Wed, Aug 7, 2013 at 9:33 AM, Tren Blackburn <i...@theendoftime.net> wrote:
>> On Tue, Aug 6, 2013 at 11:14 AM, Joao Pedras <jpped...@gmail.com> wrote:
>>> Greetings all.
>>>
>>> I am installing a test cluster using one ssd (/dev/sdg) to hold the
>>> journals. Ceph's version is 0.61.7 and I am using ceph-deploy obtained
>>> from ceph's git yesterday. This is on RHEL6.4, fresh install.
>>>
>>> When preparing the first 2 drives, sda and sdb, all goes well and the
>>> journals get created in sdg1 and sdg2:
>>>
>>> $> ceph-deploy osd prepare ceph00:sda:sdg ceph00:sdb:sdg
>>> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph00:/dev/sda:/dev/sdg ceph00:/dev/sdb:/dev/sdg
>>> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph00
>>> [ceph_deploy.osd][DEBUG ] Host ceph00 is now ready for osd use.
>>> [ceph_deploy.osd][DEBUG ] Preparing host ceph00 disk /dev/sda journal /dev/sdg activate False
>>> [ceph_deploy.osd][DEBUG ] Preparing host ceph00 disk /dev/sdb journal /dev/sdg activate False
>>>
>>> When preparing sdc or any disk after the first 2 I get the following in
>>> that osd's log but no errors on ceph-deploy:
>>>
>>> # tail -f /var/log/ceph/ceph-osd.2.log
>>> 2013-08-06 10:51:36.655053 7f5ba701a780  0 ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff), process ceph-osd, pid 11596
>>> 2013-08-06 10:51:36.658671 7f5ba701a780  1 filestore(/var/lib/ceph/tmp/mnt.i2NK47) mkfs in /var/lib/ceph/tmp/mnt.i2NK47
>>> 2013-08-06 10:51:36.658697 7f5ba701a780  1 filestore(/var/lib/ceph/tmp/mnt.i2NK47) mkfs fsid is already set to 5d1beb09-1f80-421d-a88c-57789e2fc33e
>>> 2013-08-06 10:51:36.813783 7f5ba701a780  1 filestore(/var/lib/ceph/tmp/mnt.i2NK47) leveldb db exists/created
>>> 2013-08-06 10:51:36.813964 7f5ba701a780 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
>>> 2013-08-06 10:51:36.813999 7f5ba701a780  1 journal _open /var/lib/ceph/tmp/mnt.i2NK47/journal fd 10: 0 bytes, block size 4096 bytes, directio = 1, aio = 0
>>> 2013-08-06 10:51:36.814035 7f5ba701a780 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 5d1beb09-1f80-421d-a88c-57789e2fc33e, invalid (someone else's?) journal
>>> 2013-08-06 10:51:36.814093 7f5ba701a780 -1 filestore(/var/lib/ceph/tmp/mnt.i2NK47) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.i2NK47/journal: (22) Invalid argument
>>> 2013-08-06 10:51:36.814125 7f5ba701a780 -1 OSD::mkfs: FileStore::mkfs failed with error -22
>>> 2013-08-06 10:51:36.814185 7f5ba701a780 -1  ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.i2NK47: (22) Invalid argument
>>>
>>> I have cleaned the disks with dd, zapped them and so forth but this
>>> always occurs. If doing sdc/sdd first, for example, then sda or whatever
>>> follows fails with similar errors.
>>>
>>> Does anyone have any insight on this issue?
>>
>> Is SELinux disabled?
>>
>> t.
>
>
> --
> Joao Pedras

--
Joao Pedras
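A further hedged check, again not from the thread (the partition /dev/sdc1 and the mount point /mnt are assumptions): mounting the failing OSD's data partition and looking at its fsid and journal link should show what the mkfs step is actually writing to; the "disabling aio for non-block journal" and all-zero ondisk fsid in the quoted log suggest the journal path is not resolving to a real partition on the SSD.

$> mount /dev/sdc1 /mnt
$> cat /mnt/fsid                # the "expected" fsid from the osd log
$> ls -l /mnt/journal           # should point at that osd's own partition on /dev/sdg;
                                # a plain file or dangling target matches the
                                # "non-block journal" message above
$> umount /mnt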