Re: [ceph-users] anti-cephalopod question

2014-07-29 Thread Christian Balzer
Hello, On Tue, 29 Jul 2014 06:33:14 -0400 Robert Fantini wrote: > Christian - > Thank you for the answer, I'll get around to reading 'Crush Maps' a > few times; it is important to have a good understanding of ceph parts. > > So another question - > > As long as I keep the same number

[ceph-users] v0.83 released

2014-07-29 Thread Sage Weil
Another Ceph development release! This has been a longer cycle, so there has been quite a bit of bug fixing and stabilization in this round. There is also a bunch of packaging fixes for RPM distros (RHEL/CentOS, Fedora, and SUSE) and for systemd. We've also added a new librados-striper libra

Re: [ceph-users] Force CRUSH to select specific osd as primary

2014-07-29 Thread Szymon Zacher
Sounds good, but there is one problem. In my case I'll have as many hosts as pools used by some piece of software (via librados), and for performance purposes I want to put the primary osd for each pool on the same host as the software. In that scenario I'll end up with as many new roots as hosts and I t

[ceph-users] v0.80.5 Firefly released

2014-07-29 Thread Sage Weil
This release fixes a few important bugs in the radosgw and fixes several packaging and environment issues, including OSD log rotation, systemd environments, and daemon restarts on upgrade. We recommend that all v0.80.x Firefly users upgrade, particularly if they are using upstart, systemd, or r

Re: [ceph-users] Force CRUSH to select specific osd as primary

2014-07-29 Thread Gregory Farnum
You could create a new root bucket which contains hosts 2 and 3; then use it instead of "default" in your special rule. That's probably what you want anyway (rather than potentially having two copies of the data on host 1). -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue
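
For readers finding this thread later, a minimal sketch of what that suggestion could look like in a decompiled CRUSH map. The bucket and rule names, ids, and weights below are made up for illustration and are not taken from the original poster's map; adjust them to your own cluster:

--snip--
root replicas {
        id -10                  # any unused negative id
        alg straw
        hash 0                  # rjenkins1
        item host2 weight 1.000
        item host3 weight 1.000
}

rule primary_on_host1 {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take host1
        step choose firstn 1 type osd          # primary comes from host1
        step emit
        step take replicas
        step chooseleaf firstn -1 type host    # remaining copies from the new root
        step emit
}
--snip--

This follows the same two-take/two-emit pattern as the ssd-primary example in the CRUSH documentation, just with a dedicated root for the non-primary copies.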

Re: [ceph-users] Dependency issues in fresh ceph/CentOS 7 install

2014-07-29 Thread Alfredo Deza
Can you paste me the whole output of the install? I am curious why/how you are getting el7 and el6 packages. On Tue, Jul 29, 2014 at 10:21 AM, Brian Lovett wrote: > 1.5.9. It was the latest version as of yesterday. > > *Brian Lovett* > *CEO Prosperent.com*

[ceph-users] Force CRUSH to select specific osd as primary

2014-07-29 Thread Szymon Zacher
I have 3 OSDs on 3 different hosts: host1, host2, and host3. I'm trying to force CRUSH to use the osd on host1 as primary for one of my pools. I can't use primary-affinity because I don't want to set this osd as primary for all my pools. I tried to create a simple CRUSH rule, which should select the osd on host1
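
As background for anyone attempting the same thing, the usual workflow is to edit a decompiled copy of the CRUSH map, inject it back, and point only the one pool at the new rule. A rough sketch (file names, pool name, and ruleset number are placeholders):

--snip--
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt and add the custom rule, then recompile and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
# firefly still calls the pool property crush_ruleset
ceph osd pool set <poolname> crush_ruleset <ruleset-number>
--snip--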

Re: [ceph-users] [SOLVED] MON segfaulting when setting a crush ruleset to a pool (firefly 0.80.4)

2014-07-29 Thread Olivier DELHOMME
Hi, Sorry, the first mail went out too quickly :/ - Original Message - > From: "Olivier DELHOMME" > To: ceph-users@lists.ceph.com > Sent: Thursday, 24 July 2014 21:29:40 > Subject: Re: [ceph-users] MON segfaulting when setting a crush ruleset to a > pool (firefly 0.80.4) [...] I'm replying to myself

Re: [ceph-users] Dependency issues in fresh ceph/CentOS 7 install

2014-07-29 Thread Alfredo Deza
Just went through the output a couple more times and noted that you have a mix of 'el6' and 'el7' packages:

> [monitor01][DEBUG ] ---> Package mesa-libglapi.x86_64 0:9.2.5-5.20131218.el7 will be installed
> [monitor01][DEBUG ] ---> Package python-ceph.x86_64 0:0.80.4-0.el6 will be installed
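
In case it helps with the diagnosis, a few generic commands that usually show where a stray el6 package is coming from (the package names are just the ones visible in the output above; this is an editorial aside, not part of the original reply):

--snip--
yum --showduplicates list python-ceph ceph
yum repolist enabled
cat /etc/yum.repos.d/ceph.repo    # look for a baseurl still pointing at an el6 tree
--snip--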

Re: [ceph-users] [SOLVED] MON segfaulting when setting a crush ruleset to a pool (firefly 0.80.4)

2014-07-29 Thread Olivier DELHOMME
Hi, - Original Message - > From: "Olivier DELHOMME" > To: ceph-users@lists.ceph.com > Sent: Thursday, 24 July 2014 21:29:40 > Subject: Re: [ceph-users] MON segfaulting when setting a crush ruleset to a > pool (firefly 0.80.4) [...] I'm replying to myself as I found what was wrong. > > > $ ceph o

Re: [ceph-users] Dependency issues in fresh ceph/CentOS 7 install

2014-07-29 Thread Alfredo Deza
On Mon, Jul 28, 2014 at 12:45 PM, Brian Lovett wrote: > Simon Ironside writes: > >> >> Hi Brian, >> >> I have a fresh install working on RHEL 7 running the same version of >> python as you. I did have trouble installing from the ceph.com yum repos >> though and worked around it by creating and in

Re: [ceph-users] anti-cephalopod question

2014-07-29 Thread Robert Fantini
Christian - Thank you for the answer, I'll get around to reading 'Crush Maps' a few times; it is important to have a good understanding of ceph parts. So another question - As long as I keep the same number of nodes in both rooms, will firefly defaults keep data balanced? If not I'll
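
If the underlying goal is one copy per room rather than just equal node counts, the CRUSH hierarchy has to say so explicitly; the defaults only separate copies by host. A rough sketch of how the rooms could be modelled (bucket and host names are illustrative, not from this thread):

--snip--
ceph osd crush add-bucket room-a room
ceph osd crush add-bucket room-b room
ceph osd crush move room-a root=default
ceph osd crush move room-b root=default
ceph osd crush move host1 room=room-a
ceph osd crush move host2 room=room-b
# then a replication rule that uses:  step chooseleaf firstn 0 type room
--snip--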

Re: [ceph-users] firefly osds stuck in state booting

2014-07-29 Thread 10 minus
Hi Karan, Thanks .. that did the trick .. The magic word was "in" regarding rep size. I have adjusted them; my settings are

--snip--
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 100
osd pool default pgp num = 100
--snip--

# Also in the meantime I had chanc
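
For anyone who lands here with OSDs stuck in the booting state, a generic set of checks that is often useful (osd ids are placeholders; this is an editorial addition, not part of the original reply):

--snip--
ceph osd tree                  # are the OSDs up and, more importantly, "in"?
ceph osd in osd.0              # mark an OSD in if it is still out
ceph osd dump | grep pool      # confirm size/min_size match what the cluster can satisfy
ceph -s
--snip--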