Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-03 Thread Marc Roos
ewly added node has finished.

-Original Message-
From: Jack [mailto:c...@jack.fr.eu.org]
Sent: Sunday, 2 September 2018 15:53
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

Well, you have more than one pool here, pg_num =
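As an aside on the "more than one pool" point above: every pool's PGs land on the same OSDs, so each pool's replica count and pg_num contribute to the per-OSD usage. A sketch of how to list them (not commands from the original thread; "rbd" below is a placeholder pool name):

    # List all pools with their size (replica count) and pg_num
    ceph osd pool ls detail

    # pg_num of a single pool
    ceph osd pool get rbd pg_num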

Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-02 Thread Jack
l Message-
> From: Jack [mailto:c...@jack.fr.eu.org]
> Sent: Sunday, 2 September 2018 14:06
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across
> 4 osd's
>
> ceph osd df will get you more information: variation

Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-02 Thread Marc Roos
Original Message-
From: Jack [mailto:c...@jack.fr.eu.org]
Sent: Sunday, 2 September 2018 14:06
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

ceph osd df will get you more information: variation & pg number for each OSD Cep

Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-02 Thread Jack
ceph osd df will get you more information: the variation and the PG count for each OSD. Ceph does not spread data on a per-object basis, but on a per-PG basis, so the data distribution is never perfect. You may increase your pg_num, and/or use the mgr balancer module (http://docs.ceph.com/docs/mimic/mgr/balancer/).
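A rough sketch of the two suggestions above (inspect the spread, then let the balancer even it out). These are not commands from the original thread and assume a mimic-era cluster where the balancer module is available:

    # Per-OSD utilisation, variance and PG count
    ceph osd df

    # Turn on the mgr balancer in upmap mode; upmap requires all
    # clients to speak at least the luminous protocol
    ceph osd set-require-min-compat-client luminous
    ceph mgr module enable balancer
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status

If older clients are still connected, "ceph balancer mode crush-compat" is the fallback mode.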

[ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-02 Thread Marc Roos
If I have only one rbd ssd pool, 3x replicated, and 4 ssd osd's, why are the objects so unevenly spread across the four osd's? Should they not all have 162G?

[@c01 ]# ceph osd status 2>&1
+----+------+------+-------+--------+---------+--------+---------+-------+
| id | host | used | avail | wr ops | wr data | rd ops | rd data | state |
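For context on the 162G figure: if each of the 4 OSDs is expected to hold 162G, the pool stores roughly 216G (162G x 4 / 3), since three replicas of the pool's data are spread over four OSDs. The actual per-OSD figures and how far they deviate from that even split can be read with the commands below (a sketch, not output from the thread):

    # Cluster-wide and per-pool usage
    ceph df

    # Per-OSD raw use, utilisation %, variance and PG count
    ceph osd df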