Re: [ceph-users] question about monitor and paxos relationship

2014-08-30 Thread pragya jain
Thanks Greg, Joao and David. The concept of why an odd number of monitors is preferred is clear to me, but I am still not clear about the working of the Paxos algorithm: #1. All changes to any of the monitor's data structures, whether the monitor map, OSD map, PG map, MDS map or CRUSH map, are made through Paxos…

Re: [ceph-users] question about monitor and paxos relationship

2014-08-30 Thread Joao Eduardo Luis
On 08/30/2014 08:03 AM, pragya jain wrote: Thanks Greg, Joao and David. The concept of why an odd number of monitors is preferred is clear to me, but I am still not clear about the working of the Paxos algorithm: #1. All changes to any of the monitor's data structures, whether the monitor map, OSD map, PG map, MDS map…
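Not part of the thread, but a minimal illustration of the machinery under discussion: the quorum that Paxos maintains, and the map epochs it commits, can be inspected from the CLI.

  $ ceph quorum_status --format json-pretty
  # shows "quorum_leader_name" (the monitor acting as Paxos leader) and
  # which monitors are currently in quorum
  $ ceph mon dump
  # prints the current monmap and its epoch; each bump of this epoch (and
  # of the OSD/PG/MDS map epochs) is a committed Paxos transaction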

Re: [ceph-users] Uneven OSD usage

2014-08-30 Thread J David
On Fri, Aug 29, 2014 at 2:53 AM, Christian Balzer wrote: >> Now, 1200 is not a power of two, but it makes sense. (12 x 100). > Should have been 600 and then upped to 1024. At the time, there was a reason why doing that did not work, but I don't remember the specifics. All messages sent back in…
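A sketch of the path Christian describes (create at 600, later raise to 1024), assuming a pool named "rbd" (the pool name here is hypothetical). Note that pg_num can only ever be raised, never lowered, which is why a pool already created at 1200 cannot simply be reduced:

  $ ceph osd pool get rbd pg_num
  $ ceph osd pool set rbd pg_num 1024
  $ ceph osd pool set rbd pgp_num 1024
  # pgp_num must be raised to match, or the new PGs are not rebalanced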

[ceph-users] Asked for emperor, got firefly. (You can't take the sky from me?)

2014-08-30 Thread J David
While adding some nodes to a ceph emperor cluster using ceph-deploy, the new nodes somehow wound up with 0.80.1, which I think is a Firefly release. The ceph version on existing nodes: $ ceph --version ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60) The repository on the new nodes…
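On a Debian/Ubuntu node (an assumption; the message does not say which distro), the usual way to see which repository is about to win:

  $ apt-cache policy ceph
  # the "Candidate:" line shows which version apt would install and from
  # which source; a 0.80.x candidate means a firefly repo is shadowing
  # the intended emperor (0.72.x) one
  $ cat /etc/apt/sources.list.d/ceph.list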

Re: [ceph-users] question about monitor and paxos relationship

2014-08-30 Thread Joao Eduardo Luis
Nigel mistakenly replied just to me; CC'ing the list. On 08/30/2014 08:12 AM, Nigel Williams wrote: On Sat, Aug 30, 2014 at 11:59 AM, Joao Eduardo Luis wrote: But yeah, if you're going with 2 or 4, you'll be better off with 3 or 5. As long as you don't go with 1 you should be okay. On a rec…
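The arithmetic behind "2 or 4 buys you nothing over 3 or 5" (a throwaway illustration, not from the thread): quorum needs a strict majority, floor(n/2)+1, so an even monitor count tolerates no more failures than the odd count below it.

  $ for n in 1 2 3 4 5; do \
      echo "$n mons: quorum=$((n/2+1)), tolerated failures=$((n-n/2-1))"; done
  1 mons: quorum=1, tolerated failures=0
  2 mons: quorum=2, tolerated failures=0
  3 mons: quorum=2, tolerated failures=1
  4 mons: quorum=3, tolerated failures=1
  5 mons: quorum=3, tolerated failures=2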

Re: [ceph-users] Asked for emperor, got firefly. (You can't take the sky from me?)

2014-08-30 Thread Christian Balzer
Hello, On Sat, 30 Aug 2014 20:24:00 -0400 J David wrote: > While adding some nodes to a ceph emperor cluster using ceph-deploy, > the new nodes somehow wound up with 0.80.1, which I think is a Firefly > release. > This was asked and solved in the "ceph-deploy with --release (--stable) for dumpling…
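Based on the thread Christian references, the fix is to pin the release when deploying (the hostnames below are hypothetical):

  $ ceph-deploy install --release emperor newnode1 newnode2
  # without --release (formerly --stable), ceph-deploy pulls whatever the
  # default repo offers, which by this date was firefly (0.80.x)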

Re: [ceph-users] Uneven OSD usage

2014-08-30 Thread Christian Balzer
Hello, On Sat, 30 Aug 2014 18:27:22 -0400 J David wrote: > On Fri, Aug 29, 2014 at 2:53 AM, Christian Balzer wrote: > >> Now, 1200 is not a power of two, but it makes sense. (12 x 100). > > Should have been 600 and then upped to 1024. > > At the time, there was a reason why doing that did not…

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3.2K IOPS

2014-08-30 Thread Mark Kirkwood
On 29/08/14 22:17, Sebastien Han wrote: @Mark, thanks for trying this :) Unfortunately, using nobarrier and another dedicated SSD for the journal (plus your ceph setting) didn’t bring much; now I can reach 3.5K IOPS. By any chance, would it be possible for you to test with a single SSD-backed OSD? Funny…
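A generic example of the kind of 4K random-write test behind these IOPS figures (not Sebastien's exact invocation; the device path is an assumption, and running this against a raw device destroys its contents):

  $ fio --name=4krandwrite --filename=/dev/sdb --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --runtime=60 \
        --time_based --group_reporting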