Re: [ceph-users] Dependency issues in fresh ceph/CentOS 7 install

2014-07-28 Thread Brian Lovett
> repos instead. I then skip the ceph-deploy install step, as I've already done this bit on each of my ceph nodes. This also stops ceph-deploy from overwriting my own repo definitions. HTH, Simon. On 28/07/14 16:46, Brian Lovett wrote:
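
For reference, a minimal sketch of the repo-first approach described above: define the Ceph repo yourself on every node, install the packages directly, and skip the ceph-deploy install step so it cannot overwrite the repo files. The baseurl and gpgkey URLs are assumptions for firefly on EL7, not taken from the thread.

    # /etc/yum.repos.d/ceph.repo  (hypothetical contents)
    [ceph]
    name=Ceph firefly for el7
    baseurl=http://ceph.com/rpm-firefly/el7/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

    # on each node, instead of "ceph-deploy install <node>":
    yum install ceph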

[ceph-users] Dependency issues in fresh ceph/CentOS 7 install

2014-07-28 Thread Brian Lovett
I'm installing the latest firefly on a fresh centos 7 machine using the rhel 7 yum repo. I'm getting a few dependency issues when using ceph-deploy install. Mostly it looks like it doesn't like python 2.7. [monitor01][DEBUG ] --> Processing Dependency: libboost_system-mt.so.5() (64bit) for pack
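
As a hedged aside (not from the thread itself), one generic way to see whether any enabled repo can satisfy the missing boost dependency is repoquery from yum-utils; if nothing provides it, the repo in use was likely built against a different distro release.

    yum install yum-utils
    repoquery --whatprovides 'libboost_system-mt.so.5()(64bit)'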

[ceph-users] Which OS for fresh install?

2014-07-23 Thread Brian Lovett
I'm evaluating ceph for our new private and public cloud environment. I have a "working" ceph cluster running on centos 6.5, but have had a heck of a time figuring out how to get rbd support to connect to cloudstack. Today I found out that the default kernel is too old, and while I could compile
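
A quick, assumed sanity check for kernel RBD support on a candidate OS (illustrative only, not advice given in the thread): confirm the running kernel and whether the rbd module is present.

    uname -r                                  # running kernel version
    modprobe rbd && echo "kernel rbd module loads"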

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Alright, I was finally able to get this resolved without adding another node. As pointed out, even though I had a config variable that defined the default replicated size at 2, ceph for some reason created the default pools (data and metadata) with a value of 3. After digging through documentat
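
A sketch of the fix described above, assuming the stock data and metadata pools created at install time; pool names on another cluster may differ.

    # in ceph.conf, so pools created later default to two replicas:
    [global]
    osd pool default size = 2

    # for pools that were already created with size 3:
    ceph osd pool set data size 2
    ceph osd pool set metadata size 2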

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Christian Balzer writes: > Read EVERYTHING you can find about crushmap rules. > The quickstart (I think) talks about 3 storage nodes, not OSDs. > Ceph is quite good when it comes to defining failure domains, the default is to segregate at the storage node level. > What good is a replicati
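
To illustrate the failure-domain point, the default decompiled CRUSH rule of that era looks roughly like the sketch below (a generic example, not Brian's actual map). With "type host" and only two storage nodes, a pool of size 3 has nowhere to put its third replica, which leaves PGs degraded; a third host, or a rule that chooses by OSD, avoids that.

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host   # one replica per host
            step emit
    }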

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Gregory Farnum writes: > On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett wrote: >> "profile": "bobtail", > Okay. That's unusual. What's the oldest client you need to support, and what Ceph version are you using? You proba
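
For context, a hedged sketch of how the CRUSH tunables profile can be inspected and changed on firefly; whether a newer profile is safe depends on the oldest kernel or client that has to talk to the cluster, which is exactly what is being asked here.

    ceph osd crush show-tunables      # reports the current profile, e.g. "bobtail"
    ceph osd crush tunables optimal   # or "firefly"/"bobtail"/"legacy" as appropriate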

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Christian Balzer writes: > So either make sure these pools really have a replication of 2 by deleting and re-creating them or add a third storage node. I just executed "ceph osd pool set {POOL} size 2" for both pools. Anything else I need to do? I still don't see any changes to the status
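
A few assumed follow-up checks after changing the size (pool names as used in this thread):

    ceph osd pool get data size
    ceph osd pool get metadata size
    ceph osd dump | grep 'replicated size'
    ceph -s        # watch whether the degraded PG count drops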

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum writes: > On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett wrote: >> "profile": "bobtail", > Okay. That's unusual. What's the oldest client you need to support, and what Ceph version are you using? This is a

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum writes: > So those disks are actually different sizes, in proportion to their weights? It could be having an impact on this, although it *shouldn't* be an issue. And your tree looks like it's correct, which leaves me thinking that something is off about your crush rules. :/
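
A hedged sketch of pulling the CRUSH map off the cluster to inspect the rules being discussed (file paths are arbitrary):

    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
    less /tmp/crushmap.txt    # check the rules, in particular the chooseleaf type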

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum writes: > ...and one more time, because apparently my brain's out to lunch today: > ceph osd tree > *sigh* haha, we all have those days.
[root@monitor01 ceph]# ceph osd tree
# id    weight  type name       up/down reweight
-1      14.48   root default
-2      7.24

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Gregory Farnum writes: > What's the output of "ceph osd map"? > Your CRUSH map probably isn't trying to segregate properly, with 2 hosts and 4 OSDs each. > Software Engineer #42 http://inktank.com | http://ceph.com Is this what you are looking for? ceph osd map rbd ceph osdmap e1
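
For completeness, "ceph osd map" takes a pool name and an object name and reports which PG and OSDs that object maps to; the object name below is only an illustration.

    ceph osd map rbd some-object
    # osdmap eNN pool 'rbd' (N) object 'some-object' -> pg ... -> up [...] acting [...]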

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
Brian Lovett writes: I restarted all of the osd's and noticed that ceph shows 2 osd's up even if the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in Why would that be?
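
A few assumed commands for digging into this: the monitors only mark an OSD down after its peers report missed heartbeats for the grace period, so the map can lag behind an abrupt power-off, and an OSD can also be marked down or out by hand (osd id 3 below is just an example).

    ceph osd stat     # summary line: N osds: X up, Y in
    ceph osd tree     # per-OSD up/down state and weight
    ceph osd down 3   # manually mark osd.3 down
    ceph osd out 3    # mark it out so data rebalances elsewhere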

[ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-01 Thread Brian Lovett
I'm pulling my hair out with ceph. I am testing things with a 5 server cluster. I have 3 monitors, and two storage machines each with 4 osd's. I have started from scratch 4 times now, and can't seem to figure out how to get a clean status. Ceph health reports: HEALTH_WARN 34 pgs degraded; 192
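
As a generic starting point (not specific advice from the thread), these commands show which PGs are degraded and how the OSDs are laid out across hosts:

    ceph -s                 # overall cluster status
    ceph health detail      # lists the degraded/stuck PGs
    ceph osd tree           # OSDs grouped by host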