I'm taking a look at erasure-coded pools in Ceph (0.78-336-gb9e29ca).
I'm doing a simple test where I use 'rados put' to load a 1G file into
an erasure-coded pool and then 'rados rm' to remove it later. Checking
with 'rados df' afterwards shows no objects and no KB in the pool, but
the space is still allocated on the OSDs ('ceph -s' and df on the hosts
both show this). Is this expected? Is there a garbage-collection step I
need to run?
Regards
Mark
Details:
$ ceph -v
ceph version 0.78-336-gb9e29ca (b9e29caff37e9ce791bdda8ecd5623d66225c7f6)
$ uname -a # 4 vms running this version
Linux ceph2 3.11.0-19-generic #33-Ubuntu SMP Tue Mar 11 18:48:34 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ ceph osd tree
# id   weight     type name           up/down  reweight
-1     0.03998    root default
-2     0.009995       host ceph1
 0     0.009995           osd.0       up       1
-3     0.009995       host ceph2
 1     0.009995           osd.1       up       1
-4     0.009995       host ceph3
 2     0.009995           osd.2       up       1
-5     0.009995       host ceph4
 3     0.009995           osd.3       up       1
$ ceph osd erasure-code-profile set profile1 \
k=2 m=2 ruleset-failure-domain=osd
$ ceph osd erasure-code-profile get profile1
directory=/usr/lib/ceph/erasure-code
k=2
m=2
plugin=jerasure
ruleset-failure-domain=osd
technique=reed_sol_van
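With k=2 and m=2 each object should be split into 2 data chunks plus 2
coding chunks, one chunk per OSD given the rule at the end of this mail,
so raw usage ought to be roughly twice the logical size. A quick
back-of-the-envelope check for the 1G test file (ignoring filestore
overhead and whatever else is already on the OSDs):

$ echo $(( 1024 / 2 ))            # MB per chunk = object size / k
512
$ echo $(( 1024 / 2 * (2 + 2) ))  # raw MB across all k+m chunks
2048

That roughly lines up with the ~550 MB per OSD reported further down.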
$ ceph osd pool create ecpool 64 64 erasure \
profile1 ecrulset
$ ceph osd dump|grep ecpool
pool 5 'ecpool' erasure size 4 min_size 2 crush_ruleset 3 object_hash rjenkins pg_num 64 pgp_num 64 last_change 67 owner 0 flags hashpspool stripe_width 4096
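The crush_ruleset 3 in that dump is the ecrulset rule shown at the end
of this mail; it can also be checked without decompiling the whole map
(the grep context width below is just a guess at how much of the JSON
to show):

$ ceph osd crush rule ls                        # ecrulset should be listed here
$ ceph osd crush rule dump | grep -A3 ecrulset  # peek at its steps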
$ rados put -p ecpool file file # 1G file
$ rados df|grep ecpool
ecpool - 1048576 1 0 0 0 0
[ df stats per osd ]
osd | mbytes
-----+--------
0 | 552
1 | 552
2 | 550
3 | 550
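The per-OSD mbytes here and below come from df on each host; for
reference, a minimal sketch of gathering them in one pass, assuming the
default OSD mount points under /var/lib/ceph/osd and passwordless ssh
between the VMs (both assumptions on my part):

$ for h in ceph1 ceph2 ceph3 ceph4; do echo "== $h"; ssh $h 'df -m /var/lib/ceph/osd/ceph-*'; done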
$ rados rm -p ecpool file
$ rados df|grep ecpool
ecpool - 0 0 0 0 0 0 0 257 1048576
[ df stats per osd ]
osd | mbytes
-----+--------
0 | 552
1 | 552
2 | 550
3 | 550
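A way to check whether this is just deferred cleanup rather than a
leak: confirm the pool really is empty, then poll logical and raw usage
for a while. A sketch, run on one of the OSD hosts and assuming the
default filestore data path under /var/lib/ceph/osd (my assumption):

$ rados -p ecpool ls                   # should print nothing once the object is gone
$ while true; do
>   date
>   ceph df | grep ecpool              # logical usage for the pool
>   df -m /var/lib/ceph/osd/ceph-1     # raw usage for the local osd
>   sleep 60
> done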
$ tail crush.txt
rule ecrulset {
        ruleset 3
        type erasure
        min_size 3
        max_size 20
        step set_chooseleaf_tries 5
        step take default
        step choose indep 0 type osd
        step emit
}
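For completeness, crush.txt above is just the decompiled crush map,
presumably produced along these lines (the standard getcrushmap /
crushtool round trip):

$ ceph osd getcrushmap -o crush.bin    # grab the compiled map from the monitors
$ crushtool -d crush.bin -o crush.txt  # decompile to the text tailed above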