Re: [ceph-users] handling different disk sizes

2017-06-06 Thread Félix Barbeira
Hi, Thanks to your answers I now understand this part of Ceph better. I made the change to the crushmap that Maxime suggested, and after that the results are what I expected from the beginning:
# ceph osd df
ID WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE VAR PGS
 0 7.27100  1.0     7445G 1830G 5614G 2…
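A quick way to double-check that the edited map is really what the cluster is using (standard commands, shown only as a sketch; rule names vary per cluster):
ceph osd crush rule dump
ceph -s
The rule dump should now show the chooseleaf step with "type": "osd" instead of "type": "host", and ceph -s lets you watch the backfill the change kicks off.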

Re: [ceph-users] handling different disk sizes

2017-06-06 Thread Maxime Guyot
Hi Félix, Changing the failure domain to OSD is probably the easiest option if this is a test cluster. I think the commands would go like:
- ceph osd getcrushmap -o map.bin
- crushtool -d map.bin -o map.txt
- sed -i 's/step chooseleaf firstn 0 type host/step chooseleaf firstn 0 type osd/' map.txt
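The snippet is cut off here; a likely completion, assuming the standard crushtool workflow (file names below are placeholders), is to recompile the edited map and inject it back into the cluster:
- crushtool -c map.txt -o map-new.bin
- ceph osd setcrushmap -i map-new.bin
Once the new map is in place, CRUSH will pick any three OSDs rather than three distinct hosts for each PG, and the cluster will start rebalancing accordingly.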

Re: [ceph-users] handling different disk sizes

2017-06-05 Thread Christian Wuerdig
Yet another option is to change the failure domain to OSD instead of host (this avoids having to move disks around and will probably meet your initial expectations). It does mean your cluster will become unavailable when you lose a host, until you fix it, though. OTOH you probably don't have too much leeway an…
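For reference, a sketch of doing this without hand-editing the decompiled map — the rule name replicated_osd is made up here, the root is assumed to be the usual "default", and the pool is the default.rgw.buckets.data pool named elsewhere in the thread:
ceph osd crush rule create-simple replicated_osd default osd
ceph osd pool set default.rgw.buckets.data crush_ruleset <rule-id>
(On Luminous and later the pool setting is crush_rule and takes the rule name instead of the numeric id.) The availability caveat is the same either way: with an osd failure domain, all replicas of a PG may land on a single host.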

Re: [ceph-users] handling different disk sizes

2017-06-05 Thread David Turner
If you want to resolve your issue without purchasing another node, you should move one disk of each size into each server. This process will be quite painful as you'll need to actually move the disks in the crush map to be under a different host, and then all of your data will move around, but then…
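A rough sketch of the CRUSH side of the moves David describes — the OSD ids and weights below are placeholders derived only from the node layout in the original post, so take them from your own ceph osd tree output, and remember the physical relocation of the disk still has to happen on the hardware:
ceph osd crush set osd.1 7.27100 root=default host=node02
ceph osd crush set osd.3 1.81929 root=default host=node01
Each such move triggers backfill immediately, so doing them one at a time keeps the rebalance load manageable.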

Re: [ceph-users] handling different disk sizes

2017-06-05 Thread Loic Dachary
On 06/05/2017 02:48 PM, Christian Balzer wrote:
> Hello,
>
> On Mon, 5 Jun 2017 13:54:02 +0200 Félix Barbeira wrote:
>> Hi,
>>
>> We have a small cluster for radosgw use only. It has three nodes, witch 3
>     ^     ^
>> osds each. Each node

Re: [ceph-users] handling different disk sizes

2017-06-05 Thread Loic Dachary
Hi Félix, Could you please send me the output of the "ceph report" command (privately, the output is likely too big for the list)? I suspect what you're seeing happens because the smaller disks have more PGs than they should for the default.rgw.buckets.data pool. With the output of "ceph report" an…
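A quick way to see the imbalance Loic suspects without waiting for a full report is the PGS column that ceph osd df already prints:
ceph osd df
Assuming the usual size=3 replication and a host failure domain, each of the three hosts stores one replica of every PG, so the 2TB OSDs end up carrying roughly as many PGs — and therefore as much data — as the 8TB ones, which matches the %USE skew Félix reported.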

Re: [ceph-users] handling different disk sizes

2017-06-05 Thread Christian Balzer
Hello,
On Mon, 5 Jun 2017 13:54:02 +0200 Félix Barbeira wrote:
> Hi,
>
> We have a small cluster for radosgw use only. It has three nodes, witch 3
      ^     ^
> osds each. Each node has different disk sizes:
>
There's your answer, staring you r…
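Spelling out the point Christian appears to be making (his message is cut off here, and size=3 replication is an assumption, though it is the default for rgw data pools): with replication across hosts, each of the three nodes holds one full copy of the pool, so usable capacity is capped by the smallest node.
node01: 3 x 8TB = 24TB   node02: 3 x 2TB = 6TB   node03: 3 x 3TB = 9TB
usable ≈ min(24, 6, 9) ≈ 6TB
The extra space on the 8TB disks cannot be used while the failure domain is host and there are only three hosts.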

[ceph-users] handling different disk sizes

2017-06-05 Thread Félix Barbeira
Hi, We have a small cluster for radosgw use only. It has three nodes, with 3 osds each. Each node has different disk sizes:
node01 : 3x8TB
node02 : 3x2TB
node03 : 3x3TB
I thought that the weight handled the amount of data that every osd receives. In this case, for example, the node with the 8TB dis…
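For context on the weights that show up later in the thread (this is the standard derivation, not something stated in the post): the default CRUSH weight is the disk size in TiB, so
8 TB ≈ 8e12 / 2^40 ≈ 7.27  (the 7.27100 visible in ceph osd df)
2 TB ≈ 1.82, 3 TB ≈ 2.73
The weights themselves are therefore correct; the surprise comes from the host failure domain discussed in the replies, not from the weights.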