Hello all,
First-time poster to ceph-users here. :)
Ceph version is: ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
Can someone explain the following? I have this replication rule:
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
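
If I read that rule right, a toy sketch (not real CRUSH, just my mental model of "chooseleaf firstn 0 type host") would place replicas like this:

```python
import random

# Toy model, not real CRUSH: "step chooseleaf firstn 0 type host" puts each
# replica on a distinct host. With a 2-replica pool and only two hosts,
# every PG necessarily lands on BOTH hosts, so each host holds one full
# copy of the data regardless of its weight.
hosts = {
    "storage1": {"osd.0": 1.99, "osd.1": 1.99, "osd.2": 1.81},
    "storage2": {"osd.3": 3.63, "osd.5": 3.63},
}

def place_pg(replicas=2):
    # Pick `replicas` distinct hosts, then a weight-proportional OSD in each.
    chosen = random.sample(list(hosts), replicas)
    return [(h, random.choices(list(hosts[h]),
                               weights=list(hosts[h].values()))[0])
            for h in chosen]

pg_per_host = {h: 0 for h in hosts}
for _ in range(1000):
    for host, _osd in place_pg():
        pg_per_host[host] += 1
print(pg_per_host)  # {'storage1': 1000, 'storage2': 1000}
```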

And here is the weight:
# id    weight  type name       up/down reweight
-1      16.68   root default
-2      5.79            host storage1
0       1.99                    osd.0   up      1
1       1.99                    osd.1   up      1
2       1.81                    osd.2   up      1
-3      10.89           host storage2
3       3.63                    osd.3   up      1
5       3.63                    osd.5   up      1
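
As a quick sanity check, within a single host I'd expect the data to split in proportion to OSD weight; for storage1 that works out to roughly:

```python
# Expected data share per OSD inside storage1, proportional to CRUSH weight
# (weights taken from the osd tree above).
weights = {"osd.0": 1.99, "osd.1": 1.99, "osd.2": 1.81}
total = sum(weights.values())                     # 5.79
shares = {osd: round(w / total, 3) for osd, w in weights.items()}
print(shares)  # {'osd.0': 0.344, 'osd.1': 0.344, 'osd.2': 0.313}
```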


I expect to see an equal spread of the data across storage1 and storage2, with
the primary PGs on storage2...
But I see:

Storage1:
/dev/sda1       1.9T  1.3T  599G  68% /var/lib/ceph/osd/ceph-2
/dev/sdc1       2.0T  1.6T  448G  79% /var/lib/ceph/osd/ceph-0
/dev/sdd1       2.0T  1.8T  278G  87% /var/lib/ceph/osd/ceph-1

Storage2:
/dev/sdc1       3.7T  1.8T  1.9T  50% /var/lib/ceph/osd/ceph-3
/dev/sde1       3.7T  1.8T  2.0T  48% /var/lib/ceph/osd/ceph-5
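
Summing the Used column per host:

```python
# Used space per host, read straight off the df output above (values in TB).
storage1_used = 1.3 + 1.6 + 1.8   # ceph-2 + ceph-0 + ceph-1
storage2_used = 1.8 + 1.8         # ceph-3 + ceph-5
print(round(storage1_used, 1), round(storage2_used, 1))  # 4.7 3.6
```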


That is ~4.7 TB used on storage1 vs ~3.6 TB on storage2...
What am I misunderstanding?

Keep in mind that this is in the middle of rebalancing, though it is near the end:
4338 GB data, 8216 GB used, 5161 GB / 13377 GB avail; 809 kB/s rd, 609 kB/s wr, 
97 op/s; 121935/2231204 objects degraded (5.465%)
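
That degraded figure checks out against the object counts in the status line:

```python
# Degraded percentage from the ceph status line above.
degraded, total = 121935, 2231204
print(round(100 * degraded / total, 3))  # 5.465
```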


Will it rebalance the data later, after those 5.5% are done?


Regards.

Dimitar Boichev
SysAdmin Team Lead
AXSMarine Sofia
Phone: +359 889 22 55 42
Skype: dimitar.boichev.axsmarine
E-mail: dimitar.boic...@axsmarine.com

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
