Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Alright, I was finally able to get this resolved without adding another node. As pointed out, even though I had a config variable that defined the default replicated size as 2, Ceph for some reason created the default pools (data and metadata) with a value of 3. After digging through the documentation…
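
For anyone hitting the same thing, a minimal sketch of checking and correcting the replication size on the default pools (data and metadata are the pools named above; rbd is included on the assumption that it exists on a fresh firefly install too):

    # List the replication size each pool was actually created with
    ceph osd dump | grep 'replicated size'

    # Drop any pool that came up with size 3 back to 2 (repeat per pool)
    ceph osd pool set data size 2
    ceph osd pool set metadata size 2
    ceph osd pool set rbd size 2

    # Confirm the new setting
    ceph osd pool get data size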

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Christian Balzer writes: > Read EVERYTHING you can find about crushmap rules. > The quickstart (I think) talks about 3 storage nodes, not OSDs. > Ceph is quite good when it comes to defining failure domains; the default is to segregate at the storage node level. > What good is a replication…
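
A minimal sketch of inspecting the failure domain the default rule uses, assuming the stock CRUSH map from a fresh firefly install; changing "type host" to "type osd" only makes sense on a test cluster, since it gives up host-level failure isolation:

    # Export and decompile the CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # The default replicated rule contains a line like:
    #   step chooseleaf firstn 0 type host
    # With only two hosts, a size-3 pool can never place all of its replicas.
    # After editing the rule (e.g. "type host" -> "type osd"), recompile and load it:
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new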

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Christian Balzer
On Wed, 2 Jul 2014 14:25:49 +0000 (UTC) Brian Lovett wrote: > Christian Balzer writes: > > So either make sure these pools really have a replication of 2 by deleting and re-creating them or add a third storage node. > I just executed "ceph osd pool set {POOL} size 2" for both pools…
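
After shrinking the pools, the placement groups still have to re-peer before the warning clears. A rough sketch of watching that happen (the min_size handling is an assumption; check that it matches your setup):

    # Watch the cluster converge toward active+clean
    ceph -w

    # List exactly which PGs are still degraded and why
    ceph health detail

    # min_size is how many replicas must be up to serve I/O;
    # a size-2 pool normally wants min_size 1
    ceph osd pool get data min_size
    ceph osd pool set data min_size 1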

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Gregory Farnum writes: > On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett wrote: > > "profile": "bobtail", > Okay. That's unusual. What's the oldest client you need to support, and what Ceph version are you using? You probably want to set the crush tunables to "optimal"; the "bobtail"…
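
A minimal sketch of checking and switching the tunables profile; note that "optimal" can be rejected by older kernel clients, so client versions should be checked first:

    # Show the tunables profile the cluster is currently using
    ceph osd crush show-tunables

    # Move to the optimal profile for the running release
    # (expect some data movement while PGs are remapped)
    ceph osd crush tunables optimal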

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Christian Balzer writes: > So either make sure these pools really have a replication of 2 by deleting and re-creating them or add a third storage node. I just executed "ceph osd pool set {POOL} size 2" for both pools. Anything else I need to do? I still don't see any changes to the status…

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Christian Balzer
Hello. Even though you did set the pool default size to 2 in your ceph configuration, I think this value (and others) is ignored in the initial setup for the default pools. So either make sure these pools really have a replication of 2 by deleting and re-creating them or add a third storage node.
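
A hedged sketch of the delete-and-recreate route for a fresh, empty cluster (the pg_num of 128 is an arbitrary choice for a small test setup, and deleting a pool destroys whatever is in it):

    # Recreate a default pool so it picks up "osd pool default size = 2" from ceph.conf
    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool create data 128 128    # pg_num / pgp_num for a small test cluster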

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum writes: > On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett wrote: > > "profile": "bobtail", > Okay. That's unusual. What's the oldest client you need to support, and what Ceph version are you using? This is a fresh install (as of today) running the latest firefly.

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Gregory Farnum
On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett wrote: > "profile": "bobtail", Okay. That's unusual. What's the oldest client you need to support, and what Ceph version are you using? You probably want to set the crush tunables to "optimal"; the "bobtail" ones are going to have all kinds of issues…

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum writes: > So those disks are actually different sizes, in proportion to their weights? It could be having an impact on this, although it *shouldn't* be an issue. And your tree looks like it's correct, which leaves me thinking that something is off about your crush rules. :/ …
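
A small sketch of the checks that follow from this, assuming the CRUSH weights are meant to track disk size in TB (the osd.3 / 1.81 values below are made up for illustration):

    # Hierarchy and per-OSD CRUSH weights
    ceph osd tree

    # Which rule each pool uses and what failure domain it chooses across
    ceph osd crush rule dump

    # Adjust a weight that does not match the disk's capacity
    ceph osd crush reweight osd.3 1.81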

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Gregory Farnum
On Tue, Jul 1, 2014 at 11:57 AM, Brian Lovett wrote: > Gregory Farnum writes: >> ...and one more time, because apparently my brain's out to lunch today: ceph osd tree *sigh* > haha, we all have those days. > [root@monitor01 ceph]# ceph osd tree > # id weight type name…

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum writes: > ...and one more time, because apparently my brain's out to lunch today: ceph osd tree *sigh* haha, we all have those days. [root@monitor01 ceph]# ceph osd tree # id weight type name up/down reweight -1 14.48 root default -2 7.24…

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Gregory Farnum
On Tue, Jul 1, 2014 at 11:45 AM, Gregory Farnum wrote: > On Tue, Jul 1, 2014 at 11:33 AM, Brian Lovett wrote: >> Brian Lovett writes: >> I restarted all of the OSDs and noticed that ceph shows 2 OSDs up even if the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in…

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Gregory Farnum
On Tue, Jul 1, 2014 at 11:33 AM, Brian Lovett wrote: > Brian Lovett writes: > I restarted all of the OSDs and noticed that ceph shows 2 OSDs up even if the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in > Why would that be? The OSDs report each other down much more…
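
With every OSD host powered off there are no surviving peers left to report the failures, so the monitors fall back to their own, much longer timeout, which is presumably why the map still showed "2 up". A rough sketch of the knobs involved (the daemon name is an example taken from the prompt above; defaults vary by release):

    # Inspect the monitor-side failure detection settings via the admin socket
    ceph daemon mon.monitor01 config get mon_osd_report_timeout
    ceph daemon mon.monitor01 config get mon_osd_min_down_reporters

    # An OSD that is known to be gone can also be marked down by hand
    ceph osd down osd.0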

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum writes: > What's the output of "ceph osd map"? > Your CRUSH map probably isn't trying to segregate properly, with 2 hosts and 4 OSDs each. > Software Engineer #42 @ http://inktank.com | http://ceph.com Is this what you are looking for? ceph osd map rbd ceph osdmap e1…
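
For reference, "ceph osd map" takes a pool name and an object name and reports which PG and which OSDs that object would land on; the object name and output below are illustrative only:

    # Ask where a (hypothetical) object would be placed
    ceph osd map rbd test-object

    # Output has roughly this shape:
    #   osdmap e95 pool 'rbd' (2) object 'test-object' -> pg 2.xxxxxxxx (2.f)
    #     -> up ([3,7], p3) acting ([3,7], p3)
    # "up"/"acting" are the OSDs CRUSH chose; if fewer OSDs are listed than the
    # pool's size, CRUSH could not find enough independent hosts.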

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Brian Lovett writes: I restarted all of the OSDs and noticed that ceph shows 2 OSDs up even if the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in. Why would that be?

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Gregory Farnum
What's the output of "ceph osd map"? Your CRUSH map probably isn't trying to segregate properly, with 2 hosts and 4 OSDs each. Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Jul 1, 2014 at 11:22 AM, Brian Lovett wrote: > I'm pulling my hair out with ceph. I am testing things…

[ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
I'm pulling my hair out with ceph. I am testing things with a 5-server cluster. I have 3 monitors, and two storage machines, each with 4 OSDs. I have started from scratch 4 times now, and can't seem to figure out how to get a clean status. Ceph health reports: HEALTH_WARN 34 pgs degraded; 192…
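
For completeness, a short, hedged checklist of the commands that surfaced the relevant facts over the course of this thread:

    ceph -s                                  # overall status, osdmap/pgmap summary
    ceph health detail                       # which PGs are degraded and where
    ceph osd tree                            # hosts, OSDs, weights, up/down state
    ceph osd dump | grep 'replicated size'   # per-pool replication settings
    ceph osd crush show-tunables             # tunables profile (bobtail vs. optimal here)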