Thanks Christian, the concept is clear to me now. Thanks very much :)

On Tue, Nov 11, 2014 at 5:47 PM, Loic Dachary <l...@dachary.org> wrote:

> Hi Christian,
>
> On 11/11/2014 13:09, Christian Balzer wrote:
> > On Tue, 11 Nov 2014 17:14:49 +0530 Mallikarjun Biradar wrote:
> >
> >> Hi all
> >>
> >> When I issue ceph osd dump it displays the weight for that OSD as 1,
> >> and when I issue ceph osd tree it displays 0.35.
> >>
> >
> > There are many threads about this, google is your friend. For example:
> > https://www.mail-archive.com/ceph-users@lists.ceph.com/msg11010.html
> >
> > In short, one is the CRUSH weight (usually based on the capacity of the
> > OSD), while the other is the OSD weight (shown as "reweight" in the tree
> > display).
> >
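> > Roughly, these are the commands that control each value (osd.20 is just
> > taken from your output and the numbers are only examples; check the help
> > output on your release for the exact syntax):
> >
> >   ceph osd crush reweight osd.20 0.35  # CRUSH weight: "weight" column in ceph osd tree
> >   ceph osd reweight 20 1.0             # OSD weight, 0.0-1.0: "reweight" in the tree, "weight" in ceph osd dump
> >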
> > For example, think about a cluster with 100 2TB OSDs that you're planning
> > to replace (bit by bit) with 4TB OSDs. The hard disks are the same speed,
> > so if you simply replaced them, more and more data would migrate to your
> > bigger OSDs, making the whole cluster actually slower. Setting the OSD
> > weight (reweight) to 0.5 for the 4TB OSDs (until the replacement is
> > complete) will result in them getting the same allocation as the 2TB
> > ones, keeping things even.
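> >
> > In command form that scenario would look something like this (osd.100 is
> > hypothetical and the numbers are only illustrative, assuming the CRUSH
> > weight is set in proportion to capacity):
> >
> >   ceph osd crush reweight osd.100 4.0  # new 4TB disk gets a capacity-based CRUSH weight
> >   ceph osd reweight 100 0.5            # temporary override: 4.0 * 0.5 = 2.0, the same share as a 2TB OSD
> >   ceph osd reweight 100 1.0            # once every disk has been replaced, drop the override again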
>
> It is a great example. Would you like to add it to
> http://ceph.com/docs/giant/rados/operations/control/#osd-subsystem ? If
> you do not have time, I volunteer to do it :-)
>
> Cheers
>
> >
> > Christian
> >
> >> output from osd dump:
> >>         { "osd": 20,
> >>           "uuid": "b2a97a29-1b8a-43e4-a4b0-fd9ee351086e",
> >>           "up": 1,
> >>           "in": 1,
> >>           "weight": "1.000000",
> >>           "primary_affinity": "1.000000",
> >>           "last_clean_begin": 0,
> >>           "last_clean_end": 0,
> >>           "up_from": 103,
> >>           "up_thru": 106,
> >>           "down_at": 0,
> >>           "lost_at": 0,
> >>           "public_addr": "10.242.43.116:6820\/27623",
> >>           "cluster_addr": "10.242.43.116:6821\/27623",
> >>           "heartbeat_back_addr": "10.242.43.116:6822\/27623",
> >>           "heartbeat_front_addr": "10.242.43.116:6823\/27623",
> >>           "state": [
> >>                 "exists",
> >>                 "up"]}],
> >>
> >> output from osd tree:
> >> # id    weight  type name       up/down reweight
> >> -1      7.35    root default
> >> -2      2.8             host rack6-storage-5
> >> 0       0.35                    osd.0   up      1
> >> 1       0.35                    osd.1   up      1
> >> 2       0.35                    osd.2   up      1
> >> 3       0.35                    osd.3   up      1
> >> 4       0.35                    osd.4   up      1
> >> 5       0.35                    osd.5   up      1
> >> 6       0.35                    osd.6   up      1
> >> 7       0.35                    osd.7   up      1
> >> -3      2.8             host rack6-storage-4
> >> 8       0.35                    osd.8   up      1
> >> 9       0.35                    osd.9   up      1
> >> 10      0.35                    osd.10  up      1
> >> 11      0.35                    osd.11  up      1
> >> 12      0.35                    osd.12  up      1
> >> 13      0.35                    osd.13  up      1
> >> 14      0.35                    osd.14  up      1
> >> 15      0.35                    osd.15  up      1
> >> -4      1.75            host rack6-storage-6
> >> 16      0.35                    osd.16  up      1
> >> 17      0.35                    osd.17  up      1
> >> 18      0.35                    osd.18  up      1
> >> 19      0.35                    osd.19  up      1
> >> 20      0.35                    osd.20  up      1
> >>
> >> Please help me to understand this.
> >>
> >> -regards,
> >> Mallikarjun Biradar
> >
> >
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
