On Sun, Oct 23, 2016 at 10:45 PM, David Turner <david.tur...@storagecraft.com> wrote:

> 1/3 of your raw data on the OSDs will be deleted, and then it will move a
> bunch around. I haven't done it personally, but I would guess somewhere in
> the range of 50-70% data movement. It will depend on how many PGs you have,
> failure domains (hosts by default), etc.
>
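The 1/3 figure is just the replica arithmetic: with size 3 every object is
stored three times, so dropping to size 2 frees (3 - 2) / 3, roughly a third,
of that pool's raw usage. As a rough check (exact output columns vary by
release), the pool's current raw footprint can be seen with:

    ceph df detail    # per-pool usage; about 1/3 of this pool's raw
                      # (replicated) footprint should be freed going 3 -> 2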

Maybe I'm seriously misremembering how this works, but I would not expect
any data movement solely as the result of a pool reducing its size (replica
count). Everything will have to re-peer but otherwise it should just remove
OSDs from the up/acting sets.
Obviously *increasing* the size means you'll have to re-replicate and that
will do all kinds of things, but it still won't be a rebalance — just lots
and lots of data being sent across the network to up the replica counts.
-Greg
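
If you want to convince yourself on a test pool first, something like the
following should make it visible (the pool name 'rbd' and PG id '0.1f' are
only placeholders for your own):

    ceph osd pool get rbd size     # confirm the current replica count
    ceph pg map 0.1f               # note the up/acting OSD sets
    ceph osd pool set rbd size 2   # drop from 3 replicas to 2
    ceph -s                        # expect peering, but no backfill/recovery
    ceph pg map 0.1f               # acting set should simply be one OSD shorter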


>
> Sent from my iPhone
>
>
> > On Oct 23, 2016, at 9:56 AM, Sebastian Köhler <s...@tyrion.de> wrote:
> >
> > Thanks for the help. Is there any information available on how much data
> > movement will happen when I reduce the size from 3 to 2? The min_size is
> > already at 1.
> >
> >> On 10/23/2016 05:43 PM, David Turner wrote:
> >> Make sure to also adjust your min_size. Having those be the same number
> >> can cause issues if and when you lose an OSD from your cluster.
> >>
> >> Like Wido said, you can change the size of a replica pool at any time;
> >> it will just cause a lot of data to move.
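> >>
> >> For example, with a hypothetical pool named 'rbd':
> >>
> >>     ceph osd pool get rbd min_size    # check the current value
> >>     ceph osd pool set rbd min_size 1  # keep it below 'size' so I/O can
> >>                                       # continue after losing an OSD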
> >>
> >> Sent from my iPhone
> >>
> >>> On Oct 23, 2016, at 4:30 AM, Wido den Hollander <w...@42on.com> wrote:
> >>>
> >>>
> >>>> On 23 October 2016 at 10:04, Sebastian Köhler <s...@tyrion.de> wrote:
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>> Is it possible to reduce the replica count of a pool that already
> >>>> contains data? If it is, how much load will a change in the replica
> >>>> size cause? I am guessing it will do a rebalance.
> >>>>
> >>>
> >>> Yes, just change the 'size' parameter of the pool. Data will indeed
> >>> rebalance if you increase the number.
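> >>>
> >>> That is, something along these lines (substitute your pool name):
> >>>
> >>>     ceph osd pool set <poolname> size <replica-count>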
> >>>
> >>> Wido
> >>>
> >>>> Thanks
> >>>>
> >>>> Sebastian
> >>>>
> >>>>
> >>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
