Thanks, that did the trick! I think there are some puzzling things that
change depending on the timing of commands during setup, and at one point I
noticed that the script output said "Installing stable release Emperor" or
the equivalent, so possibly I have no idea what my own commands are doing.
But, for posterity, the following script builds a working dual-OSD cluster
for me.

#!/bin/sh

set -x

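# Tear down any previous attempt: stop the daemons, uninstall ceph from
# usrv1, and wipe old OSD data and keys.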
sudo stop ceph-all
ceph-deploy uninstall usrv1
sudo rm -rf /var/lib/ceph/osd/ceph-*/*
ceph-deploy purgedata usrv1
ceph-deploy forgetkeys

rm -rf ~/ceph
mkdir ~/ceph
cd ~/ceph
ceph-deploy new usrv1

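# Clean up the generated ceph.conf (drop blank lines and any omap line that
# ceph-deploy added), then append single-node settings: chooseleaf type 0
# lets CRUSH place replicas on different OSDs of the same host (see John's
# note below), and the public network value is specific to my LAN.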
perl -nli -e 'print unless /^$/ or /omap/' ceph.conf
cat >>ceph.conf <<EOF
public network = 192.168.251.1/24
osd crush chooseleaf type = 0
osd pool default size = 2
EOF

set -e

ceph-deploy install usrv1
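# Two directory-backed OSDs; the bare mount commands rely on existing
# /etc/fstab entries for these filesystems.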
sudo mkdir /var/lib/ceph/osd/ceph-0
sudo mkdir /var/lib/ceph/osd/ceph-1
sudo mount /var/lib/ceph/osd/ceph-0
sudo mount /var/lib/ceph/osd/ceph-1
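# Bootstrap the initial monitor, then prepare both OSD directories.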
ceph-deploy mon create-initial
ceph-deploy osd prepare usrv1:/var/lib/ceph/osd/ceph-0
ceph-deploy osd prepare usrv1:/var/lib/ceph/osd/ceph-1
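# Push ceph.conf and the admin keyring to the node, and make the keyring
# readable so ceph commands work without sudo.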
ceph-deploy admin usrv1
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
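# Use the optimal CRUSH tunables profile for this release, then activate the OSDs.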
ceph osd crush tunables optimal
ceph-deploy osd activate usrv1:/var/lib/ceph/osd/ceph-0
ceph-deploy osd activate usrv1:/var/lib/ceph/osd/ceph-1
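# Watch cluster status; the PGs should settle into active+clean.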
ceph -w
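
For anyone repeating this, a quick sanity check once the script finishes
(standard ceph CLI commands; the rbd pool is just the Firefly default, so
adjust if your pools differ):

ceph osd tree                 # both OSDs should show up and in under usrv1
ceph osd pool get rbd size    # should report the default size of 2
ceph health                   # should eventually settle at HEALTH_OK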

m.


On Mon, Jun 9, 2014 at 10:04 AM, John Wilkins <john.wilk...@inktank.com>
wrote:

> Miki,
>
> osd crush chooseleaf type is set to 1 by default, which means CRUSH tries
> to place each placement group's replicas on different nodes, not on the same
> node. You would need to set it to 0 for a 1-node cluster.
>
> John
>
>
> On Sun, Jun 8, 2014 at 10:40 PM, Miki Habryn <dic...@rcpt.to> wrote:
>
>> I set up a single-node, dual-osd cluster following the Quick Start on
>> ceph.com with Firefly packages, adding "osd pool default size = 2".
>> All of the pgs came up in active+remapped or active+degraded status. I
>> read up on tunables and set them to optimal, with no effect, so I added
>> a third osd instead. About 39 pgs moved to active status, but the rest
>> stayed in active+remapped or active+degraded. When I raised the
>> replication level to 3 with "ceph osd pool set ... size 3", all the
>> pgs went back to degraded or remapped. Just for kicks, I tried to set
>> the replication level to 1, and I still only got 39 pgs active. Is
>> there something obvious I'm doing wrong?
>>
>> m.
>>
>
>
>
> --
> John Wilkins
> Senior Technical Writer
> Inktank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
