On Thu, Jan 10, 2019 at 4:07 PM Scottix <scot...@gmail.com> wrote:

> I just had this question as well.
>
> I am interested in what you mean by fullest: is it percentage-wise or raw
> space? If I have an uneven distribution and adjust it, would that
> potentially make more space available?
>

Yes - I'd recommend using pg-upmap if all your clients are Luminous+. I
"reclaimed" about 5TB of usable space recently by balancing my PGs.

@Yoann, you've got a fair bit of variance, so you would likely benefit from
pg-upmap (or other rebalancing).
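
As a rough sanity check against Wido's rule of thumb below: 65.5 TiB raw / 3
~= 21.8 TiB, and 21.8 TiB * 0.85 ~= 18.6 TiB, which is right around the
~19 TiB your df is reporting. Better balancing mostly buys back the gap
between your fullest OSD (~83% used) and the average (~76%).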


> Thanks
> Scott
> On Thu, Jan 10, 2019 at 12:05 AM Wido den Hollander <w...@42on.com> wrote:
>
>>
>>
>> On 1/9/19 2:33 PM, Yoann Moulin wrote:
>> > Hello,
>> >
>> > I have a Ceph cluster running Luminous 12.2.10, dedicated to CephFS.
>> >
>> > The raw size is 65.5 TiB; with replica 3, I should have ~21.8 TiB
>> > usable.
>> >
>> > But the size of the CephFS filesystem as seen by df is *only* 19 TiB.
>> > Is that normal?
>> >
>>
>> Yes. Ceph calculates this based on the fullest OSD (the one with the
>> highest percentage used). As data distribution is never 100% perfect,
>> you will get such numbers.
>>
>> To go from raw to usable I use this calculation:
>>
>> (RAW / 3) * 0.85
>>
>> So yes, I keep a buffer of at least 15%, sometimes even 20-30%.
>>
>> Wido
>>
>> > Best regards,
>> >
>> > Here is some hopefully useful information:
>> >
>> >> apollo@icadmin004:~$ ceph -s
>> >>   cluster:
>> >>     id:     fc76846a-d0f0-4866-ae6d-d442fc885469
>> >>     health: HEALTH_OK
>> >>
>> >>   services:
>> >>     mon: 3 daemons, quorum icadmin006,icadmin007,icadmin008
>> >>     mgr: icadmin006(active), standbys: icadmin007, icadmin008
>> >>     mds: cephfs-3/3/3 up {0=icadmin008=up:active,1=icadmin007=up:active,2=icadmin006=up:active}
>> >>     osd: 40 osds: 40 up, 40 in
>> >>
>> >>   data:
>> >>     pools:   2 pools, 2560 pgs
>> >>     objects: 26.12M objects, 15.6TiB
>> >>     usage:   49.7TiB used, 15.8TiB / 65.5TiB avail
>> >>     pgs:     2560 active+clean
>> >>
>> >>   io:
>> >>     client:   510B/s rd, 24.1MiB/s wr, 0op/s rd, 35op/s wr
>> >
>> >> apollo@icadmin004:~$ ceph df
>> >> GLOBAL:
>> >>     SIZE        AVAIL       RAW USED     %RAW USED
>> >>     65.5TiB     15.8TiB      49.7TiB         75.94
>> >> POOLS:
>> >>     NAME                ID     USED        %USED     MAX AVAIL     OBJECTS
>> >>     cephfs_data         1      15.6TiB     85.62       2.63TiB     25874848
>> >>     cephfs_metadata     2       571MiB      0.02       2.63TiB       245778
>> >
>> >> apollo@icadmin004:~$ rados df
>> >> POOL_NAME       USED    OBJECTS  CLONES COPIES   MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS     RD      WR_OPS   WR
>> >> cephfs_data     15.6TiB 25874848      0 77624544                  0       0        0  324156851 25.9TiB 20114360 9.64TiB
>> >> cephfs_metadata  571MiB   245778      0   737334                  0       0        0 1802713236 87.7TiB 75729412 16.0TiB
>> >>
>> >> total_objects    26120626
>> >> total_used       49.7TiB
>> >> total_avail      15.8TiB
>> >> total_space      65.5TiB
>> >
>> >> apollo@icadmin004:~$ ceph osd pool ls detail
>> >> pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 6197 lfor 0/3885 flags hashpspool stripe_width 0 application cephfs
>> >> pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 6197 lfor 0/703 flags hashpspool stripe_width 0 application cephfs
>> >
>> >> apollo@icadmin004:~$ df -h /apollo/
>> >> Filesystem                             Size  Used Avail Use% Mounted on
>> >> 10.90.36.16,10.90.36.17,10.90.36.18:/   19T   16T  2.7T  86% /apollo
>> >
>> >> apollo@icadmin004:~$ ceph fs get cephfs
>> >> Filesystem 'cephfs' (1)
>> >> fs_name      cephfs
>> >> epoch        49277
>> >> flags        c
>> >> created      2018-01-23 14:06:43.460773
>> >> modified     2019-01-09 14:17:08.520888
>> >> tableserver  0
>> >> root 0
>> >> session_timeout      60
>> >> session_autoclose    300
>> >> max_file_size        1099511627776
>> >> last_failure 0
>> >> last_failure_osd_epoch       6216
>> >> compat       compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2}
>> >> max_mds      3
>> >> in   0,1,2
>> >> up   {0=424203,1=424158,2=424146}
>> >> failed
>> >> damaged
>> >> stopped
>> >> data_pools   [1]
>> >> metadata_pool        2
>> >> inline_data  disabled
>> >> balancer
>> >> standby_count_wanted 0
>> >> 424203:      10.90.36.18:6800/3885954695 'icadmin008' mds.0.49202 up:active seq 6 export_targets=1,2
>> >> 424158:      10.90.36.17:6800/152758094 'icadmin007' mds.1.49198 up:active seq 16 export_targets=0,2
>> >> 424146:      10.90.36.16:6801/1771587593 'icadmin006' mds.2.49195 up:active seq 19 export_targets=0
>> >
>> >> apollo@icadmin004:~$ ceph osd tree
>> >> ID  CLASS WEIGHT   TYPE NAME             STATUS REWEIGHT PRI-AFF
>> >>  -1       65.49561 root default
>> >>  -7        3.27478     host iccluster150
>> >> 160   hdd  1.63739         osd.160           up  1.00000 1.00000
>> >> 165   hdd  1.63739         osd.165           up  1.00000 1.00000
>> >> -11        3.27478     host iccluster151
>> >> 163   hdd  1.63739         osd.163           up  1.00000 1.00000
>> >> 168   hdd  1.63739         osd.168           up  1.00000 1.00000
>> >>  -5        3.27478     host iccluster152
>> >> 164   hdd  1.63739         osd.164           up  1.00000 1.00000
>> >> 169   hdd  1.63739         osd.169           up  1.00000 1.00000
>> >>  -9        3.27478     host iccluster153
>> >> 162   hdd  1.63739         osd.162           up  1.00000 1.00000
>> >> 167   hdd  1.63739         osd.167           up  1.00000 1.00000
>> >>  -3        3.27478     host iccluster154
>> >> 161   hdd  1.63739         osd.161           up  1.00000 1.00000
>> >> 166   hdd  1.63739         osd.166           up  1.00000 1.00000
>> >> -17        3.27478     host iccluster155
>> >> 170   hdd  1.63739         osd.170           up  1.00000 1.00000
>> >> 176   hdd  1.63739         osd.176           up  1.00000 1.00000
>> >> -21        3.27478     host iccluster156
>> >> 171   hdd  1.63739         osd.171           up  1.00000 1.00000
>> >> 177   hdd  1.63739         osd.177           up  1.00000 1.00000
>> >> -13        3.27478     host iccluster157
>> >> 172   hdd  1.63739         osd.172           up  1.00000 1.00000
>> >> 178   hdd  1.63739         osd.178           up  1.00000 1.00000
>> >> -15        3.27478     host iccluster158
>> >> 173   hdd  1.63739         osd.173           up  1.00000 1.00000
>> >> 179   hdd  1.63739         osd.179           up  0.90002 1.00000
>> >> -19        3.27478     host iccluster159
>> >> 174   hdd  1.63739         osd.174           up  0.95001 1.00000
>> >> 175   hdd  1.63739         osd.175           up  1.00000 1.00000
>> >> -23        3.27478     host iccluster160
>> >> 180   hdd  1.63739         osd.180           up  1.00000 1.00000
>> >> 185   hdd  1.63739         osd.185           up  1.00000 1.00000
>> >> -25        3.27478     host iccluster161
>> >> 181   hdd  1.63739         osd.181           up  1.00000 1.00000
>> >> 186   hdd  1.63739         osd.186           up  1.00000 1.00000
>> >> -27        3.27478     host iccluster162
>> >> 182   hdd  1.63739         osd.182           up  1.00000 1.00000
>> >> 187   hdd  1.63739         osd.187           up  1.00000 1.00000
>> >> -29        3.27478     host iccluster163
>> >> 183   hdd  1.63739         osd.183           up  1.00000 1.00000
>> >> 189   hdd  1.63739         osd.189           up  1.00000 1.00000
>> >> -31        3.27478     host iccluster164
>> >> 184   hdd  1.63739         osd.184           up  1.00000 1.00000
>> >> 188   hdd  1.63739         osd.188           up  1.00000 1.00000
>> >> -33        3.27478     host iccluster165
>> >> 190   hdd  1.63739         osd.190           up  1.00000 1.00000
>> >> 195   hdd  1.63739         osd.195           up  1.00000 1.00000
>> >> -35        3.27478     host iccluster166
>> >> 191   hdd  1.63739         osd.191           up  1.00000 1.00000
>> >> 197   hdd  1.63739         osd.197           up  1.00000 1.00000
>> >> -39        3.27478     host iccluster167
>> >> 192   hdd  1.63739         osd.192           up  1.00000 1.00000
>> >> 196   hdd  1.63739         osd.196           up  1.00000 1.00000
>> >> -37        3.27478     host iccluster168
>> >> 193   hdd  1.63739         osd.193           up  1.00000 1.00000
>> >> 198   hdd  1.63739         osd.198           up  1.00000 1.00000
>> >> -41        3.27478     host iccluster169
>> >> 194   hdd  1.63739         osd.194           up  1.00000 1.00000
>> >> 199   hdd  1.63739         osd.199           up  1.00000 1.00000
>> >
>> >> apollo@icadmin004:~$ ceph osd df
>> >> ID  CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
>> >> 160   hdd 1.63739  1.00000 1.64TiB 1.25TiB  393GiB 76.59 1.01 193
>> >> 165   hdd 1.63739  1.00000 1.64TiB 1.22TiB  431GiB 74.27 0.98 191
>> >> 163   hdd 1.63739  1.00000 1.64TiB 1.21TiB  440GiB 73.77 0.97 201
>> >> 168   hdd 1.63739  1.00000 1.64TiB 1.24TiB  409GiB 75.63 1.00 193
>> >> 164   hdd 1.63739  1.00000 1.64TiB 1.29TiB  357GiB 78.72 1.04 193
>> >> 169   hdd 1.63739  1.00000 1.64TiB 1.18TiB  471GiB 71.94 0.95 187
>> >> 162   hdd 1.63739  1.00000 1.64TiB 1.22TiB  432GiB 74.21 0.98 180
>> >> 167   hdd 1.63739  1.00000 1.64TiB 1.14TiB  514GiB 69.34 0.91 186
>> >> 161   hdd 1.63739  1.00000 1.64TiB 1.19TiB  460GiB 72.57 0.96 184
>> >> 166   hdd 1.63739  1.00000 1.64TiB 1.21TiB  439GiB 73.82 0.97 187
>> >> 170   hdd 1.63739  1.00000 1.64TiB 1.29TiB  354GiB 78.92 1.04 192
>> >> 176   hdd 1.63739  1.00000 1.64TiB 1.24TiB  408GiB 75.69 1.00 190
>> >> 171   hdd 1.63739  1.00000 1.64TiB 1.15TiB  496GiB 70.43 0.93 182
>> >> 177   hdd 1.63739  1.00000 1.64TiB 1.32TiB  325GiB 80.62 1.06 205
>> >> 172   hdd 1.63739  1.00000 1.64TiB 1.20TiB  451GiB 73.13 0.96 186
>> >> 178   hdd 1.63739  1.00000 1.64TiB 1.25TiB  400GiB 76.14 1.00 188
>> >> 173   hdd 1.63739  1.00000 1.64TiB 1.36TiB  285GiB 82.98 1.09 201
>> >> 179   hdd 1.63739  0.90002 1.64TiB 1.32TiB  327GiB 80.51 1.06 204
>> >> 174   hdd 1.63739  0.95001 1.64TiB 1.31TiB  332GiB 80.19 1.06 197
>> >> 175   hdd 1.63739  1.00000 1.64TiB 1.36TiB  286GiB 82.96 1.09 198
>> >> 180   hdd 1.63739  1.00000 1.64TiB 1.21TiB  433GiB 74.16 0.98 177
>> >> 185   hdd 1.63739  1.00000 1.64TiB 1.26TiB  391GiB 76.70 1.01 198
>> >> 181   hdd 1.63739  1.00000 1.64TiB 1.27TiB  380GiB 77.33 1.02 186
>> >> 186   hdd 1.63739  1.00000 1.64TiB 1.20TiB  451GiB 73.10 0.96 190
>> >> 182   hdd 1.63739  1.00000 1.64TiB 1.31TiB  332GiB 80.20 1.06 204
>> >> 187   hdd 1.63739  1.00000 1.64TiB 1.22TiB  424GiB 74.72 0.98 189
>> >> 183   hdd 1.63739  1.00000 1.64TiB 1.33TiB  318GiB 81.05 1.07 206
>> >> 189   hdd 1.63739  1.00000 1.64TiB 1.08TiB  576GiB 65.66 0.86 169
>> >> 184   hdd 1.63739  1.00000 1.64TiB 1.21TiB  441GiB 73.70 0.97 183
>> >> 188   hdd 1.63739  1.00000 1.64TiB 1.17TiB  474GiB 71.70 0.94 182
>> >> 190   hdd 1.63739  1.00000 1.64TiB 1.27TiB  373GiB 77.75 1.02 195
>> >> 195   hdd 1.63739  1.00000 1.64TiB 1.32TiB  327GiB 80.47 1.06 198
>> >> 191   hdd 1.63739  1.00000 1.64TiB 1.16TiB  484GiB 71.15 0.94 183
>> >> 197   hdd 1.63739  1.00000 1.64TiB 1.28TiB  370GiB 77.94 1.03 197
>> >> 192   hdd 1.63739  1.00000 1.64TiB 1.26TiB  382GiB 77.24 1.02 200
>> >> 196   hdd 1.63739  1.00000 1.64TiB 1.24TiB  402GiB 76.02 1.00 201
>> >> 193   hdd 1.63739  1.00000 1.64TiB 1.24TiB  409GiB 75.59 1.00 186
>> >> 198   hdd 1.63739  1.00000 1.64TiB 1.15TiB  501GiB 70.13 0.92 175
>> >> 194   hdd 1.63739  1.00000 1.64TiB 1.29TiB  353GiB 78.98 1.04 202
>> >> 199   hdd 1.63739  1.00000 1.64TiB 1.34TiB  309GiB 81.58 1.07 221
>> >>                      TOTAL 65.5TiB 49.7TiB 15.8TiB 75.94
>> >> MIN/MAX VAR: 0.86/1.09  STDDEV: 3.92
>> >
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
