On Aug 4, 2014, at 10:53 PM, Christian Balzer wrote:
> On Mon, 4 Aug 2014 15:11:39 -0400 Chris Kitzmiller wrote:
>> On Aug 2, 2014, at 12:03 AM, Christian Balzer wrote:
>>> On Fri, 1 Aug 2014 14:23:28 -0400 Chris Kitzmiller wrote:
>>>> I have 3 nodes each running a MON and 30 OSDs.
>>>> ...
W
On Mon, 4 Aug 2014 15:11:39 -0400 Chris Kitzmiller wrote:
> On Aug 2, 2014, at 12:03 AM, Christian Balzer wrote:
> > On Fri, 1 Aug 2014 14:23:28 -0400 Chris Kitzmiller wrote:
> >
> >> I have 3 nodes each running a MON and 30 OSDs.
> >
> > Given the HW you list below, that might be a tall order, particularly
> > CPU-wise in certain situations.
On Aug 2, 2014, at 12:03 AM, Christian Balzer wrote:
> On Fri, 1 Aug 2014 14:23:28 -0400 Chris Kitzmiller wrote:
>
>> I have 3 nodes each running a MON and 30 OSDs.
>
> Given the HW you list below, that might be a tall order, particularly
> CPU-wise in certain situations.
I'm not seeing any drama
Hello,
On Fri, 1 Aug 2014 14:23:28 -0400 Chris Kitzmiller wrote:
> I have 3 nodes each running a MON and 30 OSDs.
Given the HW you list below, that might be a tall order, particularly
CPU-wise in certain situations.
What is your OS running off, HDDs or SSDs?
The leveldbs, for the MONs in particular
I have 3 nodes each running a MON and 30 OSDs. When I test my cluster with
either rados bench or with fio via a 10GbE client using RBD I get great initial
speeds >900MBps and I max out my 10GbE links for a while. Then something goes
wrong: the performance falters and the cluster stops responding.
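For reference, a run like the one described above can be sketched as a fio job
file using the rbd ioengine (fio must be built with rbd support). The pool name
`rbd` and image name `fio_test` are placeholders, not taken from this thread;
adjust them to the actual cluster.

```ini
; Hypothetical fio job approximating the sequential-write test above.
; Assumes an existing pool "rbd", a pre-created image "fio_test",
; and a readable client.admin keyring.
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio_test
bs=4M
iodepth=32
runtime=60
time_based=1

[seq-write]
rw=write
```

The roughly equivalent rados bench invocation would be along the lines of
`rados bench -p rbd 60 write --no-cleanup`, again with the pool name adjusted.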