Hello!
On Fri, Jun 10, 2016 at 07:38:10AM +0530, swamireddy wrote:
> Blair - Thanks for the details. I usually set a low priority for
> recovery during rebalance/recovery activity.
> Even though I set recovery_priority to 5 (instead of 1) and
> client-op_priority to 63, some of my [...]
Thanks Blair. Yes, will plan to upgrade my cluster.
Thanks
Swami
On Fri, Jun 10, 2016 at 7:40 AM, Blair Bethwaite wrote:
> Hi Swami,
>
> That's a known issue, which I believe is much improved in Jewel thanks
> to a priority queue added somewhere in the OSD op path (I think). If I
> were you I'd be planning to get off Firefly and upgrade.
Hi Swami,
That's a known issue, which I believe is much improved in Jewel thanks
to a priority queue added somewhere in the OSD op path (I think). If I
were you I'd be planning to get off Firefly and upgrade.
Cheers,
On 10 June 2016 at 12:08, M Ranga Swami Reddy wrote:
> Blair - Thanks for the details. [...]
Blair - Thanks for the details. I usually set a low priority for
recovery during rebalance/recovery activity.
Even though I set recovery_priority to 5 (instead of 1) and
client-op_priority to 63, some of my customers complained that
their VMs are not reachable for a few mins/secs during [...]
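As a rough sketch, priorities like these are usually injected at runtime as
shown below, assuming Swami's "recovery_priority" and "client-op_priority"
map to the standard osd_recovery_op_priority and osd_client_op_priority
options; the throttles on the second line are an additional, commonly used
knob that is not mentioned in his mail:
ceph tell osd.* injectargs '--osd_client_op_priority 63 --osd_recovery_op_priority 5'
ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
# runtime change only; add the same options under [osd] in ceph.conf to persist them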
Swami,
Run it with the help option for more context:
"./crush-reweight-by-utilization.py --help". In your example below
it's reporting to you what changes it would make to your OSD reweight
values based on the default option settings (because you didn't
specify any options). To make the script actually apply the re-weighting, [...]
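A short sketch of the invocations being discussed, using only options that
appear in this thread (the "-d -r" behaviour is as Blair recalls it further
down, so check the --help output before relying on it):
./crush-reweight-by-utilization.py --help   # list the available options and defaults
./crush-reweight-by-utilization.py          # dry run: only report the reweight changes it would make
./crush-reweight-by-utilization.py -d -r    # actually apply the re-weighting (per Blair's later mail)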
Hi Blair,
I ran the script and the results are below:
==
./crush-reweight-by-utilization.py
average_util: 0.587024, overload_util: 0.704429, underload_util: 0.587024.
reweighted:
43 (0.852690 >= 0.704429) [1.00 -> 0.95]
238 (0.845154 >= 0.704429) [1.00 -> 0.95]
104 (0.827908 >= 0.704429) [...]
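Those thresholds are consistent with a 120% overload cut-off, which is an
assumption here (the thread never states the script's default):
overload_util = average_util * 1.20 = 0.587024 * 1.20 ≈ 0.704429
Only OSDs above that value (43, 238, 104, ...) get a proposed reweight from
1.00 down to 0.95.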
That's great. Will try this.
Thanks
Swami
On Wed, Jun 8, 2016 at 10:38 AM, Blair Bethwaite wrote:
> It runs by default in dry-run mode, which IMHO should be the
> default for operations like this. IIRC you add "-d -r" to make it
> actually apply the re-weighting.
>
> Cheers,
>
> On 8 June 2016 at 15:04, M Ranga Swami Reddy wrote: [...]
It runs by default in dry-run mode, which IMHO should be the
default for operations like this. IIRC you add "-d -r" to make it
actually apply the re-weighting.
Cheers,
On 8 June 2016 at 15:04, M Ranga Swami Reddy wrote:
> Blair - Thanks for the script... Btw, does this script have an option for a dry run? [...]
Blair - Thanks for the script... Btw, does this script have an option for a dry run?
Thanks
Swami
On Wed, Jun 8, 2016 at 6:35 AM, Blair Bethwaite wrote:
> Swami,
>
> Try
> https://github.com/cernceph/ceph-scripts/blob/master/tools/crush-reweight-by-utilization.py,
> that'll work with Firefly and allow you to only tune down weight of a
> specific number of overfull OSDs.
Swami,
Try
https://github.com/cernceph/ceph-scripts/blob/master/tools/crush-reweight-by-utilization.py,
that'll work with Firefly and allow you to only tune down weight of a
specific number of overfull OSDs.
Cheers,
On 7 June 2016 at 23:11, M Ranga Swami Reddy wrote:
> OK, understood...
> To fix the nearfull warning, [...]
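For context, the proposed changes can also be applied by hand. It is not
clear from the thread whether the CERN script adjusts the 0-1 override
reweight or the CRUSH weight itself, so both forms are shown here as a
sketch, using the first OSD from the dry-run output quoted earlier:
ceph osd reweight 43 0.95            # override reweight, the value reweight-by-utilization adjusts
ceph osd crush reweight osd.43 0.95  # CRUSH weight, which the script's name suggests it may target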
In my cluster:
351 OSDs of the same size and 8192 PGs per pool, with 60% RAW space used.
Thanks
Swami
On Tue, Jun 7, 2016 at 7:22 PM, Corentin Bonneton wrote:
> Hello,
> How many PGs do your pools have? At first glance they look too big.
>
> --
> Regards,
> Corentin BONNETON
>
>
> On 7 June 2016 at 15:21, Sage Weil wrote: [...]
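As a rough check on Corentin's point, PGs per OSD can be estimated as
pools x pg_num x replica_size / OSDs. Neither the pool count nor the
replica size is given in the thread, so the figures below are purely
illustrative assumptions:
8192 PGs * 3 replicas * 1 pool  / 351 OSDs ≈ 70 PG copies per OSD
8192 PGs * 3 replicas * 3 pools / 351 OSDs ≈ 210 PG copies per OSD
The commonly cited target is on the order of 100 per OSD, so whether the
pools are oversized depends on how many such pools exist.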
From: ... On Behalf Of Sage Weil
Sent: Tuesday, 7 June 2016 15:22
To: M Ranga Swami Reddy
Cc: ceph-devel ; ceph-users
Subject: Re: [ceph-users] un-even data filled on OSDs
On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
> OK, understood...
> To fix the nearfull warning, I am reducing the weight of a specific [...]
Hello,
How many PGs do your pools have? At first glance they look too big.
--
Regards,
Corentin BONNETON
> On 7 June 2016 at 15:21, Sage Weil wrote:
>
> On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
>> OK, understood...
>> To fix the nearfull warning, I am reducing the weight of a specific [...]
On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
> OK, understood...
> To fix the nearfull warning, I am reducing the weight of a specific OSD,
> which is filled >85%.
> Is this work-around advisable?
Sure. This is what reweight-by-utilization does for you, but
automatically.
sage
>
> Thanks
> Swami
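For completeness, the built-in command Sage refers to takes an optional
overload threshold, expressed as a percentage of the average utilization;
the dry-run variant mentioned elsewhere in the thread is only available
from the later Hammer point releases and Jewel onwards:
ceph osd reweight-by-utilization 120        # lower the override reweight of OSDs more than 20% above average
ceph osd test-reweight-by-utilization 120   # dry run (later Hammer point releases / Jewel only)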
OK, understood...
To fix the nearfull warning, I am reducing the weight of a specific OSD,
which is filled >85%.
Is this work-around advisable?
Thanks
Swami
On Tue, Jun 7, 2016 at 6:37 PM, Sage Weil wrote:
> On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
>> Hi Sage,
>> >Jewel and the latest hammer point release have an improved [...]
On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
> Hi Sage,
> >Jewel and the latest hammer point release have an improved
> >reweight-by-utilization (ceph osd test-reweight-by-utilization ... to dry
> > run) to correct this.
>
> Thank you. But I am not planning to upgrade the cluster soon.
> So, in this case, are there any tunable options that will help? [...]
Hi Sage,
>Jewel and the latest hammer point release have an improved
>reweight-by-utilization (ceph osd test-reweight-by-utilization ... to dry
> run) to correct this.
Thank you. But I am not planning to upgrade the cluster soon.
So, in this case, are there any tunable options that will help? Like
"crush [...]
On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
> Hello,
> I have around 100 OSDs in my ceph cluster. In this, a few OSDs are filled
> with >85% of data and a few OSDs are filled with ~60%-70% of data.
>
> Any reason why this uneven OSD filling happened? Do I need any
> tweaks to the configuration to fix the above? Please advise.
Hello,
I have around 100 OSDs in my ceph cluster. In this, a few OSDs are filled
with >85% of data and a few OSDs are filled with ~60%-70% of data.
Any reason why this uneven OSD filling happened? Do I need any
tweaks to the configuration to fix the above? Please advise.
PS: Ceph version is 0.80.7
Thanks
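A minimal sketch of how to see which OSDs are over-filled on a cluster of
this vintage (if I recall correctly, "ceph osd df" is not yet available in
0.80.x):
ceph health detail   # names the OSDs behind any near-full / full warnings
ceph pg dump osds    # per-OSD kb_used / kb_avail from the PG map
ceph osd tree        # CRUSH weights and current reweight values, for comparison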