On 10/06/2013 02:53 PM, Ирек Фасихов wrote:
http://ceph.com/docs/master/rados/operations/placement-groups/
I have read this page before, and I just read it again... but I am still missing your point; a hint maybe? :)
However, the ceph df output now looks like this:
# ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED
    11178G     3311G     7444G        66.60
POOLS:
    NAME         ID     USED       %USED     OBJECTS
    data         0      0          0         0
    metadata     1      40122K     0         30
    rbd          2      3704G      33.14     478583
The raw usage is **66.6%** and the rbd pool usage is still 3704GB (we did not delete anything, except for some files inside the rbd image). That confirms my calculation (3704GB x 2 replicas = ~7408GB, which roughly matches the 7444GB RAW USED), and it also confirms that something had leaked, and that releasing it was triggered either by the change of the nearfull_ratio or by the restart of the monitor and/or OSDs.
Does anyone have an idea how to track this leak if it happens again, or maybe find some traces of it?
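The only idea I have so far (just a rough sketch; the log file, the exact commands and the hourly interval below are arbitrary choices on my part) is to snapshot both the cluster-level and the OSD-level numbers periodically, so that next time we can see exactly when RAW USED starts drifting away from 2x the pool usage:

# ceph df >> /var/log/ceph-usage.log
# rados df >> /var/log/ceph-usage.log
# df -h /var/lib/ceph/osd/ceph-* >> /var/log/ceph-usage.log
# du -sh /var/lib/ceph/osd/ceph-*/current >> /var/log/ceph-usage.log

Running those from cron every hour and diffing the snapshots around a nearfull_ratio change or a daemon restart should at least show whether the extra space sits in the object store itself (the du of current/) or somewhere else on the OSD filesystems. If there is a better way, I would love to hear it.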
2013/10/5 Linux Chips <linux.ch...@gmail.com>:
Hi everyone,
We have a small testing cluster: one node with 4 OSDs of 3TB each. I created one RBD image of 4TB, and now the cluster is nearly full:
# ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED
    11178G     1783G     8986G        80.39
POOLS:
    NAME         ID     USED       %USED     OBJECTS
    data         0      0          0         0
    metadata     1      40100K     0         30
    rbd          2      3703G      33.13     478583
# df -h
Filesystem                  Size  Used  Avail  Use%  Mounted on
/dev/mapper/cephtest1-root  181G  19G   153G   11%   /
udev                        48G   4.0K  48G    1%    /dev
tmpfs                       19G   592K  19G    1%    /run
none                        5.0M  0     5.0M   0%    /run/lock
none                        48G   0     48G    0%    /run/shm
/dev/sde1                   228M  27M   189M   13%   /boot
/dev/sda                    2.8T  2.1T  566G   79%   /var/lib/ceph/osd/ceph-0
/dev/sdb                    2.8T  2.4T  316G   89%   /var/lib/ceph/osd/ceph-1
/dev/sdc                    2.8T  2.2T  457G   84%   /var/lib/ceph/osd/ceph-2
/dev/sdd                    2.8T  2.2T  447G   84%   /var/lib/ceph/osd/ceph-3
# rbd list -l
NAME    SIZE   PARENT  FMT  PROT  LOCK
share2  3906G          1
# rbd info share2
rbd image 'share2':
        size 3906 GB in 500000 objects
        order 23 (8192 KB objects)
        block_name_prefix: rb.0.1056.2ae8944a
        format: 1
# ceph osd pool get rbd min_size
min_size: 1
# ceph osd pool get rbd size
size: 2
4 disks at 3TB should give me 12TB, and 4TB x 2 (replication) should be 8TB. That is 66%, not the 80% that ceph df shows as %RAW USED.
Where is this space leaking, and how can I fix it?
Or is this normal behavior, and the difference is just overhead?
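Just to spell out the arithmetic, using only the figures from the output above:

    3703G (rbd pool) x 2 replicas  = ~7406G expected raw usage
    7406G / 11178G                 = ~66%
    reported RAW USED              = 8986G (80.39%)
    difference                     = ~1580G unaccounted for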
thanks
Ali