Hello, 
I’m sorry for the late response, here is the output:

ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       1.0 PiB     797 TiB     267 TiB      272 TiB         25.44
    TOTAL     1.0 PiB     797 TiB     267 TiB      272 TiB         25.44

POOLS:
    POOL                                       ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    .rgw.root                                   1     7.3 KiB          52     4.5 MiB         0       222 TiB
    default.rgw.control                         2         0 B          25         0 B         0       222 TiB
    default.rgw.meta                            3      15 KiB         152      10 MiB         0       222 TiB
    default.rgw.log                             4     1.8 KiB         631     386 KiB         0       222 TiB
    default.rgw.buckets.index                   6      71 MiB          37      71 MiB         0       222 TiB
    default.rgw.buckets.data                    7     3.3 TiB       1.59M     9.9 TiB      1.46       222 TiB
    default.rgw.buckets.non-ec                  9       962 B          10     385 KiB         0       222 TiB
    CephFS                                     10         0 B           0         0 B         0       445 TiB
    production-repo-old.rgw.buckets.non-ec     15         0 B           0         0 B         0       222 TiB
    production-repo-old.rgw.buckets.index      21      54 GiB           2      54 GiB         0       222 TiB
    production-repo-old.rgw.buckets.data       22      75 TiB     266.03M     257 TiB     27.82       400 TiB
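
A quick back-of-the-envelope check on pool 22 using only the figures above, assuming the data pool really is k=6 m=4 (plain shell arithmetic, nothing here touches the cluster):

stored=75      # TiB STORED for production-repo-old.rgw.buckets.data (from ceph df)
used=257       # TiB USED for the same pool
k=6; m=4       # assumed EC profile for this pool

# observed space amplification vs. the nominal EC overhead (k+m)/k
awk -v s="$stored" -v u="$used" -v k="$k" -v m="$m" \
    'BEGIN { printf "observed: %.2fx  expected: %.2fx\n", u/s, (k+m)/k }'

That prints roughly "observed: 3.43x  expected: 1.67x", the same kind of gap as the ~4.5x USED vs WR from my first mail.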


ceph osd pool ls detail
pool 1 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 19535 lfor 0/19535/19533 flags hashpspool stripe_width 0 pg_num_min 8 application rgw
pool 2 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 19427 lfor 0/19427/19423 flags hashpspool stripe_width 0 pg_num_min 32 application rgw
pool 3 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 19647 lfor 0/19647/19645 flags hashpspool stripe_width 0 pg_num_min 8 application rgw
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 15480 lfor 0/15480/15449 flags hashpspool stripe_width 0 pg_num_min 8 application rgw
pool 6 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 15508 lfor 0/15508/15449 flags hashpspool stripe_width 0 pg_num_min 8 application rgw
pool 7 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode on last_change 20549 lfor 0/19147/19152 flags hashpspool max_bytes 10995116277760 stripe_width 0 pg_num_min 256 application rgw
pool 9 'default.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 16099 lfor 0/16099/16097 flags hashpspool stripe_width 0 pg_num_min 8 application rgw
pool 10 'CephFS' erasure size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 15718 lfor 0/15718/15716 flags hashpspool stripe_width 8192 pg_num_min 8 application cephfs,rbd,rgw
pool 15 'production-repo-old.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 20404 flags hashpspool stripe_width 0 application rgw
pool 21 'production-repo-old.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 4 pgp_num 4 autoscale_mode on last_change 20547 flags hashpspool stripe_width 0 application rgw
pool 22 'production-repo-old.rgw.buckets.data' erasure size 10 min_size 7 crush_rule 2 object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode on last_change 20546 lfor 0/20540/20542 flags hashpspool stripe_width 24576 pg_num_min 512 application rgw
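
If it helps, the exact EC profile behind pool 22 can be double-checked with the two commands below (the argument to the second command is just a placeholder for whatever profile name the first command prints):

ceph osd pool get production-repo-old.rgw.buckets.data erasure_code_profile
ceph osd erasure-code-profile get <profile-name-from-previous-command>

That shows k and m directly, rather than inferring them from "erasure size 10 min_size 7" above.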

Best regards
Mateusz Skała


> On 18.07.2020, at 11:46, Eric Smith <eric.sm...@vecima.com> wrote:
> 
> Can you post the output of a couple of commands:
> 
> ceph df
> ceph osd pool ls detail
> 
> Then we can probably explain the utilization you're seeing.
> 
> -----Original Message-----
> From: Mateusz Skała <mateusz.sk...@gmail.com> 
> Sent: Saturday, July 18, 2020 1:35 AM
> To: ceph-users@ceph.io
> Subject: [ceph-users] EC profile datastore usage - question
> 
> Hello Community,
> I would like to ask for help in explaining a situation.
> There is a Rados Gateway with an EC pool, profile k=6 m=4, so if I'm correct 
> it should use roughly 1.4-2.0x more space than the raw data.
> rados df shows me:
> 116 TiB used and WR 26 TiB
> Can you explain this? That is about 4.5x the WR data. Why?
> Regards
> Mateusz Skała