From: ceph-users-boun...@lists.ceph.com On Behalf Of Niklas Goerke
Sent: Friday, September 27, 2013 4:29 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] PG distribution scattered
Sorry for replying only now, I did not get to try it earlier…
On Thu, 19 Sep 2013 08:43:11 -0500, Mark Nelson wrote:
On 09/19/2013 08:36 AM, Niklas Goerke wrote:
[…]
My Setup:
* Two Hosts with 45 Disks each --> 90 OSDs
* Only one newly created pool with 4500 PGs and a Replica Size of 2
--> 100 PGs per OSD on average
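
A quick sanity check on those numbers: the sketch below is not real CRUSH (it assumes placement behaves like independent uniform random choice over the OSDs, which is only a rough model), but it shows that even a perfectly fair random placement of 4500 PGs x 2 replicas on 90 OSDs lands well off the 100-PG mean at the extremes:

    import random
    from collections import Counter

    OSDS, PGS, REPLICAS = 90, 4500, 2

    counts = Counter()
    for pg in range(PGS):
        # each PG picks REPLICAS distinct OSDs, like a replicated pool
        for osd in random.sample(range(OSDS), REPLICAS):
            counts[osd] += 1

    per_osd = sorted(counts.values())
    print("mean %d, min %d, max %d"
          % (PGS * REPLICAS // OSDS, per_osd[0], per_osd[-1]))
    # typically prints something like: mean 100, min 76, max 125

So a fullest-to-average gap of 20-25% is expected from randomness alone; the question is how much worse the real cluster is than that.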
On Sep 19, 2013, at 3:43 PM, Mark Nelson wrote:
> If you set:
>
> osd pool default flag hashpspool = true
>
> Theoretically that will cause different pools to be distributed more randomly.
The name seems to imply that it should be settable per pool. Is that possible
now?
If set globally, do I risk losing data?
It will not lose any of your data. But it will try and move pretty much all
of it, which will probably send performance down the toilet.
-Greg
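
On the per-pool part of the question: whether or not the flag can be set per pool on your release, you can at least check which pools currently carry it. A minimal sketch, assuming `ceph osd dump --format json` reports per-pool flags (the flags_names field is an assumption on older releases, hence the fallback bit test; 0x1 matches FLAG_HASHPSPOOL in osd_types.h):

    import json, subprocess

    dump = json.loads(subprocess.check_output(
        ["ceph", "osd", "dump", "--format", "json"]).decode("utf-8"))

    for pool in dump["pools"]:
        # prefer the human-readable flag names if this release has them
        names = pool.get("flags_names", "")
        hashps = "hashpspool" in names or bool(pool.get("flags", 0) & 0x1)
        print("%s: hashpspool=%s" % (pool["pool_name"], hashps))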
On Thursday, September 19, 2013, Mark Nelson wrote:
> Honestly I don't remember, but I would be wary if it's not a test system.
> :)
>
> Mark
Good timing then. I just fired up the cluster 2 days ago. Thanks.
--
Warren
Honestly I don't remember, but I would be wary if it's not a test system. :)
Mark
On 09/19/2013 11:28 AM, Warren Wang wrote:
Is this safe to enable on a running cluster?
--
Warren
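
One precaution if you do try it on a live cluster: throttle backfill first, so the data movement Greg describes competes less with client I/O. A sketch, assuming your release supports runtime injectargs (osd_max_backfills and osd_recovery_max_active are standard OSD options; the "restore" values below are just era-typical defaults, check your own config):

    import subprocess

    def injectargs(argstr):
        # push option changes to every OSD at runtime, no restart needed
        subprocess.check_call(["ceph", "tell", "osd.*", "injectargs", argstr])

    # clamp recovery concurrency before flipping the flag
    injectargs("--osd-max-backfills 1 --osd-recovery-max-active 1")

    # ... make the hashpspool change here, watch `ceph -s` ...

    # once the cluster is healthy again, restore your usual values
    injectargs("--osd-max-backfills 10 --osd-recovery-max-active 15")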
Is this safe to enable on a running cluster?
--
Warren
On Sep 19, 2013, at 9:43 AM, Mark Nelson wrote:
> […]
On 09/19/2013 08:36 AM, Niklas Goerke wrote:
Hi there
I'm currently evaluating ceph and started filling my cluster for the
first time. After filling it up to about 75%, it reported some OSDs
being "near-full".
After some evaluation I found that the PGs are not distributed evenly
over all the osds.
My Setup:
* Two Hosts with 45 Disks each --> 90 OSDs
* Only one newly created pool with 4500 PGs and a Replica Size of 2
--> 100 PGs per OSD on average
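
For reference, the per-OSD counts can be pulled out of a cluster like this; a sketch assuming `ceph pg dump --format json` (its layout moved between releases, so both common shapes are handled):

    import json, subprocess
    from collections import Counter

    raw = json.loads(subprocess.check_output(
        ["ceph", "pg", "dump", "--format", "json"]).decode("utf-8"))
    # older releases put pg_stats at the top level, newer ones under pg_map
    stats = raw.get("pg_stats") or raw.get("pg_map", {}).get("pg_stats", [])

    counts = Counter()
    for pg in stats:
        for osd in pg["acting"]:  # the OSDs currently serving this PG
            counts[osd] += 1

    for osd, n in sorted(counts.items(), key=lambda kv: kv[1]):
        print("osd.%d holds %d PGs" % (osd, n))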