Hi,
Thanks to your answers I now understand this part of Ceph better. I made the
change to the crushmap that Maxime suggested, and after that the results are
what I expected from the beginning:
# ceph osd df
ID WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 7.27100 1.0 7445G 1830G 5614G 2
Hi Félix,
Changing the failure domain to OSD is probably the easiest option if this
is a test cluster. I think the commands would go like:
- ceph osd getcrushmap -o map.bin
- crushtool -d map.bin -o map.txt
- sed -i 's/step chooseleaf firstn 0 type host/step chooseleaf firstn 0 type osd/' map.txt
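The list looks cut off here; if I remember right, the remaining steps are to
recompile the edited map and inject it back, roughly like this (map-new.bin is
just a placeholder name):
- crushtool -c map.txt -o map-new.bin
- ceph osd setcrushmap -i map-new.bin
Expect a fair amount of data movement once the new map is in.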
Yet another option is to change the failure domain to OSD instead of host
(this avoids having to move disks around and will probably meet your initial
expectations).
Means your cluster will become unavailable when you lose a host until you
fix it though. OTOH you probably don't have too much leeway an
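If you go that route, you can double-check what a pool actually uses with
something like this (crush_ruleset being the pre-Luminous name of the pool option):
- ceph osd pool get default.rgw.buckets.data crush_ruleset
- ceph osd crush rule dump
The first shows which ruleset id the pool points at, the second dumps the rules
so you can see whether the chooseleaf step uses type host or type osd.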
If you want to resolve your issue without purchasing another node, you
should move one disk of each size into each server. This process will be
quite painful as you'll need to actually move the disks in the crush map to
be under a different host and then all of your data will move around, but
then
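For what it's worth, once a disk is physically in its new server, relocating it
in the crush map is a one-liner; I believe it goes something like this (the osd
id, weight and target host below are made-up examples):
- ceph osd crush set osd.0 7.27100 root=default host=node02
After that CRUSH treats osd.0 as sitting under node02 and the rebalancing
starts.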
On 06/05/2017 02:48 PM, Christian Balzer wrote:
>
> Hello,
>
> On Mon, 5 Jun 2017 13:54:02 +0200 Félix Barbeira wrote:
>
>> Hi,
>>
>> We have a small cluster for radosgw use only. It has three nodes, witch 3
> ^ ^
>> osds each. Each node
Hi Félix,
Could you please send me the output of the "ceph report" command (privately,
the output is likely too big for the list)? I suspect what you're seeing is
because the smaller disks have more PGs than they should for the
default.rgw.buckets.data pool. With the output of "ceph report" an
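If you want a rough look yourself in the meantime: the PGS column of "ceph osd
df" gives the per-OSD total, and for a single pool something along these lines
should work (this assumes "ceph pg dump pgs_brief" prints the acting set in the
fifth column, and <pool-id> is the pool's id from "ceph osd pool ls detail"):
  # count, per OSD, the PGs of pool <pool-id> (based on the acting set)
  ceph pg dump pgs_brief 2>/dev/null | awk -v p=<pool-id> '
    $1 ~ ("^" p "\\.") { gsub(/[][]/, "", $5); n = split($5, a, ",");
                         for (i = 1; i <= n; i++) c[a[i]]++ }
    END { for (o in c) print "osd." o, c[o] }'
It just counts how many of that pool's PGs land on each OSD.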
Hello,
On Mon, 5 Jun 2017 13:54:02 +0200 Félix Barbeira wrote:
> Hi,
>
> We have a small cluster for radosgw use only. It has three nodes, witch 3
^ ^
> osds each. Each node has different disk sizes:
>
There's your answer, staring you right in the face.
Hi,
We have a small cluster for radosgw use only. It has three nodes, witch 3
osds each. Each node has different disk sizes:
node01 : 3x8TB
node02 : 3x2TB
node03 : 3x3TB
I thought that the weight handles the amount of data that every osd receives.
In this case for example the node with the 8TB dis
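To spell out the arithmetic behind the replies above (assuming the pool uses
the default size of 3): with a failure domain of host, CRUSH places one copy of
every object on each of the three nodes, so the smallest node (node02, roughly
3x2TB = 6TB raw) caps how much data the pool can hold, no matter how big
node01's 8TB disks are. The per-host weights CRUSH works with show up in:
- ceph osd tree
which is why adjusting OSD weights alone does not produce the distribution you
expected.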