Hello Ceph Users,
We added more SSD storage to our Ceph cluster last night: 4 x 1TB drives.
Afterwards, the available space reported by `ceph df` for the SSD pool went
from 1.6TB to 0.6TB.
I would assume that the weight needs to be changed, but I didn't think I would
need to do that. Should I change the weight?
Thanks for the reply! It ended up being that the HDD pool on this server is
larger than on the other servers. This increases the server's CRUSH weight,
and therefore the SSD pool on this server is affected.
I will add more SSDs to this server to keep the ratio of HDDs to SSDs the same
across all hosts.
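For anyone hitting the same thing, one way to picture the effect Glen describes is a few lines of Python. The host names and weights below are made up for illustration; the point is only that when placement follows a host's total weight (HDD + SSD combined) rather than its per-class weight, a host with a larger HDD pool also attracts a larger share of the SSD pool:

```python
# Hypothetical host bucket weights (sums of OSD CRUSH weights).
# host-c carries extra HDD weight, like the server in the thread.
hosts = {
    "host-a": {"hdd": 7.2, "ssd": 1.8},
    "host-b": {"hdd": 7.2, "ssd": 1.8},
    "host-c": {"hdd": 10.9, "ssd": 1.8},  # larger HDD pool on this server
}

def ssd_share(hosts, by_class=True):
    """Fraction of SSD-pool data each host receives.

    by_class=True models placement that follows each host's SSD weight
    only; by_class=False models placement driven by the host's total
    weight (HDD + SSD combined), which skews the SSD pool.
    """
    weight = {
        h: (w["ssd"] if by_class else w["hdd"] + w["ssd"])
        for h, w in hosts.items()
    }
    total = sum(weight.values())
    return {h: w / total for h, w in weight.items()}
```

With `by_class=False`, host-c's share of the SSD pool rises above 1/3 even though its SSD weight is identical to the other hosts, which is exactly why keeping the HDD:SSD ratio uniform across hosts (or using strictly class-based rules) restores balance.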
Kind regards,
Something funny is going on with your new disks:
ID  CLASS WEIGHT  REWEIGHT SIZE USE  AVAIL %USE  VAR  PGS
138 ssd   0.90970 1.0      931G 820G 111G  88.08 2.71 216 Added
139 ssd   0.90970 1.0      931G 771G 159G  82.85 2.55 207 Added
140 ssd   0.90970 1.0      931G 709G 222G  76.12 2.34 197 Added
141 ssd   0.90970 1.0      931G 664G 267G  71.31 2
osd.78 up 1.0 1.0
79 ssd 0.54579 osd.79 up 1.0 1.0
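As a sanity check on the listing above: the CRUSH weights do already track the disk sizes, since Ceph's default convention is weight = capacity in TiB. A quick check of the values from the output:

```python
def weight_to_gib(crush_weight):
    # Ceph's default CRUSH weight is the OSD's capacity in TiB,
    # so multiplying by 1024 recovers the size in GiB.
    return crush_weight * 1024

print(round(weight_to_gib(0.90970), 1))  # ~931.5, matching the 931G OSDs
print(round(weight_to_gib(0.54579), 1))  # ~558.9, matching the smaller OSDs
```

So the per-OSD weights look correct; the high %USE and VAR on the new disks points at a bucket-level imbalance rather than a wrong per-disk weight.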
Kind regards,
Glen Baars
From: Shawn Iverson
Sent: Saturday, 21 July 2018 9:21 PM
To: Glen Baars
Cc: ceph-users
Subject: Re: [ceph-users] 12.2.7 - Available space decreasing
Glen,
Correction: I was looking at the wrong column for the weights, my bad. You do
have varying weights, but the process is still the same: balance your buckets
(hosts) in your CRUSH map, and balance your OSDs within each bucket (host).
On Sat, Jul 21, 2018 at 9
Glen,
It appears you have 447G, 931G, and 558G disks in your cluster, all with a
weight of 1.0. This means that although the new disks are bigger, they are
not going to be utilized by PGs any more than any other disk.
I would suggest reweighting your other disks (they are smaller), so that you
balance data placement in proportion to each disk's capacity.
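A quick sketch of that suggestion, using the disk sizes mentioned above (the reweight command shown in the comment is the standard `ceph osd crush reweight`; the osd IDs it would target are not in the thread, so they are left as placeholders):

```python
# Derive CRUSH weights proportional to capacity instead of a flat 1.0.
# Ceph convention: weight = capacity in TiB, i.e. GiB / 1024.
sizes_gib = {"447G": 447, "931G": 931, "558G": 558}

def proportional_weight(size_gib):
    return round(size_gib / 1024, 5)

for name, size in sizes_gib.items():
    w = proportional_weight(size)
    # Each would then be applied with: ceph osd crush reweight osd.<id> <w>
    print(name, "->", w)
```

With those weights in place, PG placement follows capacity, so the 931G disks take roughly twice the data of the 447G disks instead of an equal share.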