Hello,
is it possible to reduce the replica count of a pool that already
contains data? If it is possible, how much load will a change in the
replica size cause? I am guessing it will do a rebalance.
Thanks
Sebastian
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Zoltan Arnold Nagy
> Sent: 22 October 2016 15:13
> To: ceph-users
> Subject: [ceph-users] cache tiering deprecated in RHCS 2.0
>
> Hi,
>
> The 2.0 release notes for Red Hat Ceph Storage deprecate cache tiering.
>
> What does this mean for Jewel and especially going forward?
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Robert Sanders
> Sent: 22 October 2016 03:44
> To: ceph-us...@ceph.com
> Subject: [ceph-users] Three tier cache
>
> Hello,
>
> Is it possible to create a three level cache tier? Searching documentation
> and archives suggests that I'm not the first one to ask about it, but I
> can't tell if it is supported yet.
> On 23 October 2016 at 10:04, Sebastian Köhler wrote:
>
>
> Hello,
>
> is it possible to reduce the replica count of a pool that already
> contains data? If it is possible, how much load will a change in the
> replica size cause? I am guessing it will do a rebalance.
>
Yes, just change the 'size' setting of the pool; that can be done at any time,
even on a pool that already contains data.
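For example (a sketch; "mypool" is just a placeholder pool name):

  ceph osd pool set mypool size 2
  ceph osd pool get mypool size

The cluster starts adjusting the replica count as soon as the new size is
applied.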
>>
>> Is it possible to create a three level cache tier? Searching documentation
>> and archives suggests that I’m not the first one to ask about
>> it, but I can’t tell if it is supported yet.
>
> At this time, this is not possible. I'm afraid I'm not aware of any short
> term plans for this.
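For context, the supported layout today is a single cache pool in front of a
base pool. A rough sketch with placeholder pool names, omitting the hit_set
and target-sizing settings you would also need:

  ceph osd tier add base-pool cache-pool
  ceph osd tier cache-mode cache-pool writeback
  ceph osd tier set-overlay base-pool cache-pool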
> On Oct 23, 2016, at 4:32 AM, Nick Fisk wrote:
>
> Unofficial answer but I suspect it is probably correct.
>
> Before Jewel (and later hammer releases), cache tiering reduced performance
> in pretty much all cases.
In its current state, does this still hold true? I’ve been spending a lot
Make sure to also adjust your min_size. Having those be the same number can
cause issues if and when you lose an osd from your cluster.
Like Wido said, you can change the size of a replica pool at any time, it will
just cause a lot of data to move.
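For example, to end up with size 2 and min_size 1 ("mypool" again being a
placeholder):

  ceph osd pool set mypool size 2
  ceph osd pool set mypool min_size 1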
Thanks for the help. Is there any information available on how much data
movement will happen when I reduce the size from 3 to 2? The min_size is
already at 1.
On 10/23/2016 05:43 PM, David Turner wrote:
> Make sure to also adjust your min_size. Having those be the same number
> can cause issues if and when you lose an osd from your cluster.
From: Robert Sanders [mailto:rlsand...@gmail.com]
Sent: 23 October 2016 16:32
To: n...@fisk.me.uk
Cc: ceph-users
Subject: Re: [ceph-users] cache tiering deprecated in RHCS 2.0
On Oct 23, 2016, at 4:32 AM, Nick Fisk <n...@fisk.me.uk> wrote:
Unofficial answer but I suspect it is probably correct.
Hello,
On Sat, 22 Oct 2016 16:12:37 +0200 Zoltan Arnold Nagy wrote:
> Hi,
>
> The 2.0 release notes for Red Hat Ceph Storage deprecate cache tiering.
>
> What does this mean for Jewel and especially going forward?
>
Let's look at that statement in the release notes:
---
The RADOS-level cache
Hello,
On Fri, 21 Oct 2016 17:44:25 + Jim Kilborn wrote:
> Reed/Christian,
>
> So if I put the OSD journals on an SSD that has power loss protection
> (Samsung SM863), all the writes then go through those journals. Can I then
> leave write caching turned on for the spinner OSDs, even withou
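For reference, the volatile write cache on a spinner can be checked and
toggled with hdparm. A sketch, with /dev/sdb as a placeholder device:

  hdparm -W /dev/sdb     # show the current write-caching setting
  hdparm -W 0 /dev/sdb   # disable the on-disk write cache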
Hello,
This may come across as a simple question but just wanted to check.
I am looking at importing live data from my cluster via ceph -s etc. into a
graphing interface so I can monitor performance / IOPS / etc. over time.
I am looking to pull this data from one or more monitor nodes.
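A minimal sketch of one way to do this, assuming the ceph CLI and an admin
keyring are available on the polling host. The field names under "pgmap"
differ between releases, so check the JSON your own cluster emits:

import json
import subprocess
import time

def cluster_status():
    # `ceph -s --format json` returns the same data as `ceph -s`, machine-readable
    out = subprocess.check_output(["ceph", "-s", "--format", "json"])
    return json.loads(out)

while True:
    pgmap = cluster_status().get("pgmap", {})
    sample = {
        "read_bytes_sec": pgmap.get("read_bytes_sec", 0),
        "write_bytes_sec": pgmap.get("write_bytes_sec", 0),
        "ops_sec": pgmap.get("op_per_sec",
                             pgmap.get("read_op_per_sec", 0)
                             + pgmap.get("write_op_per_sec", 0)),
    }
    print(sample)  # replace with a push to whatever graphing backend you use
    time.sleep(10)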
1/3 of your raw data on the OSDs will be deleted, and then it will move a bunch
around. I haven't done it personally, but I would guess somewhere in the range
of 50-70% data movement. It will depend on how many pgs you have, failure
domains (hosts by default), etc.
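As a rough illustration of the 1/3 figure (numbers are made up):

  logical data in the pool:        4 TB
  raw footprint at size 3:   3 x 4 = 12 TB
  raw footprint at size 2:   2 x 4 =  8 TB
  deleted replicas:         12 - 8 =  4 TB  (one third of the raw footprint)

plus whatever additional movement the remapping causes on top of that.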