Paul, thank you!

Do you mean that value?
total_space      75.3TiB

Could you tell me where I can read about the algorithm used to calculate the
worst-case scenario?
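
In the meantime, this is how I currently picture the estimate, as a rough
sketch in Python (only my assumption based on your "worst case" remark; the
function, the 0.95 full ratio and the numbers are made up for illustration
and are not taken from the Ceph source):

# Sketch of my understanding of the per-pool MAX AVAIL estimate: the pool
# is assumed to keep filling until the *fullest* OSD in its CRUSH rule hits
# the full ratio, so a single unbalanced OSD drags the estimate down.
def estimate_max_avail(osds, replica_size, full_ratio=0.95):
    """osds: list of dicts with 'total' and 'used' in bytes and
    'weight_fraction' = this OSD's share of the rule's total CRUSH weight."""
    projected = []
    for osd in osds:
        # space this OSD can still take before reaching full_ratio ...
        free = osd['total'] * full_ratio - osd['used']
        # ... scaled by how much of every new write lands on this OSD
        projected.append(free / osd['weight_fraction'])
    # the first OSD to fill up caps the whole pool (the "worst case")
    return min(projected) / replica_size

# Toy example: three equally weighted OSDs, one noticeably fuller
TiB = 1024 ** 4
osds = [
    {'total': 3.669 * TiB, 'used': 1.0 * TiB, 'weight_fraction': 1 / 3},
    {'total': 3.669 * TiB, 'used': 2.5 * TiB, 'weight_fraction': 1 / 3},
    {'total': 3.669 * TiB, 'used': 1.2 * TiB, 'weight_fraction': 1 / 3},
]
print(estimate_max_avail(osds, replica_size=2) / TiB)  # ~1.48, capped by the fullest OSD

If that picture is roughly right, a low MAX AVAIL would simply mean that one
of our OSDs is noticeably fuller than the average.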

*rados df output:*
POOL_NAME     USED     OBJECTS  CLONES  COPIES   MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS       RD       WR_OPS       WR
ala01vf01p01  16.7TiB  4411305  58090   8822610  0                   0        0         2819034334   421TiB   931369992    1.43PiB
ala01vf01p02  1.35TiB  358796   0       717592   0                   0        0         1109215060   57.5TiB  1286147031   38.5TiB
cachepool     1.60TiB  420974   5       841948   0                   0        0         21677037783  2.61PiB  16048037966  2.06PiB

total_objects    5191075
total_used       39.3TiB
total_avail      36.0TiB
*total_space      75.3TiB*
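
Sanity-checking the raw numbers myself (assuming the crush weights below are
TiB per OSD and replication size 2, as in our pool):

# Quick arithmetic check (assumptions: crush weights are TiB, pool size is 2)
hdd_raw = 18 * 3.669              # 18 HDD OSDs from the crush map, ~66.0 TiB raw
usable_if_balanced = hdd_raw / 2  # ~33.0 TiB if data were perfectly balanced
print(hdd_raw, usable_if_balanced)

# rados df totals reconcile: total_used + total_avail == total_space
print(39.3 + 36.0)                # 75.3 TiB

ceph df, on the other hand, shows USED 16.7 TiB + MAX AVAIL 5.70 TiB, about
22.4 TiB for the pool, so if I read your answer correctly the missing ~10 TiB
is not lost space but the effect of the worst-case projection over the current
data distribution.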

Tue, 1 Oct 2019 at 01:46, Paul Emmerich <paul.emmer...@croit.io>:

> ceph df shows a worst-case estimate based on current data
> distribution, check "rados df" for more "raw" counts
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Mon, Sep 30, 2019 at 7:49 PM Ilmir Mulyukov <ilmir.mulyu...@gmail.com>
> wrote:
> >
> > Hello!
> > We have a small Proxmox farm with Ceph consisting of three nodes.
> > Each node has 6 disks, each with a capacity of 4 TB.
> > Only one pool has been created on these disks.
> > Size 2/1 (size 2, min_size 1).
> > In theory, this pool should have a capacity of 32.74 TB.
> > But the ceph df command returns only 22.4 TB (USED + MAX AVAIL = 16.7 + 5.7).
> >
> > How can this difference be explained?
> >
> > ceph version is: 12.2.12-pve1
> > ceph df command output:
> > POOLS:
> >     NAME          ID  QUOTA OBJECTS  QUOTA BYTES  USED     %USED  MAX AVAIL  OBJECTS  DIRTY  READ     WRITE   RAW USED
> >     ala01vf01p01  7   N/A            N/A          16.7TiB  74.53  5.70TiB    4411119  4.41M  2.62GiB  887MiB  33.4TiB
> >
> > crush map:
> > host n01vf01 {
> >     id -3   # do not change unnecessarily
> >     id -4 class hdd   # do not change unnecessarily
> >     id -18 class nvme   # do not change unnecessarily
> >     # weight 22.014
> >     alg straw2
> >     hash 0   # rjenkins1
> >     item osd.0 weight 3.669
> >     item osd.13 weight 3.669
> >     item osd.14 weight 3.669
> >     item osd.15 weight 3.669
> >     item osd.16 weight 3.669
> >     item osd.17 weight 3.669
> > }
> > host n02vf01 {
> >     id -5   # do not change unnecessarily
> >     id -6 class hdd   # do not change unnecessarily
> >     id -19 class nvme   # do not change unnecessarily
> >     # weight 22.014
> >     alg straw2
> >     hash 0   # rjenkins1
> >     item osd.1 weight 3.669
> >     item osd.8 weight 3.669
> >     item osd.9 weight 3.669
> >     item osd.10 weight 3.669
> >     item osd.11 weight 3.669
> >     item osd.12 weight 3.669
> > }
> > host n04vf01 {
> >     id -34   # do not change unnecessarily
> >     id -35 class hdd   # do not change unnecessarily
> >     id -36 class nvme   # do not change unnecessarily
> >     # weight 22.014
> >     alg straw2
> >     hash 0   # rjenkins1
> >     item osd.7 weight 3.669
> >     item osd.27 weight 3.669
> >     item osd.24 weight 3.669
> >     item osd.25 weight 3.669
> >     item osd.26 weight 3.669
> >     item osd.28 weight 3.669
> > }
> > root default {
> >     id -1   # do not change unnecessarily
> >     id -2 class hdd   # do not change unnecessarily
> >     id -21 class nvme   # do not change unnecessarily
> >     # weight 66.042
> >     alg straw2
> >     hash 0   # rjenkins1
> >     item n01vf01 weight 22.014
> >     item n02vf01 weight 22.014
> >     item n04vf01 weight 22.014
> > }
> >
> > rule replicated_rule {
> >     id 0
> >     type replicated
> >     min_size 1
> >     max_size 10
> >     step take default
> >     step chooseleaf firstn 0 type host
> >     step emit
> > }
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
