On 6.12.19 17:01, Aleksey Gutikov wrote:
On 6.12.19 14:57, Jochen Schulz wrote:
Hi!
Thank you!
The output of both commands are below.
I still don't understand why there is 21T of used data (since 5.5T * 3 =
16.5T != 21T), and why there seems to be only 4.5T MAX AVAIL while the
OSD output says we have 25T of free space.
As far as I know, MAX AVAIL
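
Roughly, as far as I understand it, MAX AVAIL is not "total free space divided by the replica count": it is limited by the OSD that would fill up first relative to its CRUSH weight share, and only then divided by the number of replicas, so a single nearly-full OSD on an unevenly filled cluster drags MAX AVAIL far below free/3. A rough sketch of that arithmetic with made-up numbers (Python, not Ceph's actual code):

def estimate_max_avail(osds, replicas=3):
    """osds: list of (crush_weight, free_bytes) for the OSDs the pool maps to."""
    total_weight = sum(w for w, _ in osds)
    # The pool can only grow until the OSD with the least headroom,
    # relative to its share of the data, runs out of space.
    raw_limit = min(free / (w / total_weight) for w, free in osds if w > 0)
    return raw_limit / replicas

# Hypothetical: 42 equally weighted OSDs with ~25T free in total,
# but one OSD has only ~330G left.
osds = [(1.0, 600e9)] * 41 + [(1.0, 330e9)]
print(sum(f for _, f in osds) / 1e12)    # ~24.9 TB total free
print(estimate_max_avail(osds) / 1e12)   # ~4.6 TB estimated MAX AVAIL

The 21T raw used vs 5.5T * 3 = 16.5T gap is a separate question; BlueStore allocation overhead (min_alloc_size with many small files, typical for home directories) is one commonly cited cause, but the per-pool STORED vs USED columns in the ceph df detail output would have to confirm that.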
On 6.12.19 13:29, Jochen Schulz wrote:
Hi!
We have a Ceph cluster with 42 OSDs in production, mainly serving
users' home directories. Ceph is 14.2.4 Nautilus.
We have 3 pools: one images pool (for RBD images), a cephfs_metadata
pool and a cephfs_data pool.
Our raw data is about 5.6T. All po
According to my understanding, an OSD's heartbeat partners only come from
those OSDs that share the same PGs.
Hello,
That was my initial assumption too.
But in my experience, the set of heartbeat peers includes the PG peers
and some other OSDs.
Actually it contains:
- PG peers
- next and previo
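
A minimal sketch of how such a peer set could be assembled (Python, just to illustrate the idea; the real logic is C++ inside the OSD, so the exact rules and the min_peers default here are my assumptions):

import random

def heartbeat_peers(my_id, up_osds, pgs, min_peers=10):
    """
    my_id:     this OSD's id
    up_osds:   sorted list of up OSD ids
    pgs:       acting sets of the PGs this OSD participates in, e.g. [[0, 3, 7], ...]
    min_peers: minimum peer count (cf. osd_heartbeat_min_peers)
    """
    peers = set()
    # 1. every OSD we share a PG with
    for acting in pgs:
        peers.update(o for o in acting if o != my_id)
    # 2. the neighbouring up OSDs in the OSD map, which keeps the
    #    heartbeat graph connected even for OSDs carrying few PGs
    idx = up_osds.index(my_id)
    peers.add(up_osds[(idx + 1) % len(up_osds)])
    peers.add(up_osds[(idx - 1) % len(up_osds)])
    peers.discard(my_id)
    # 3. top up with random OSDs until a minimum count is reached
    extra = [o for o in up_osds if o != my_id and o not in peers]
    random.shuffle(extra)
    while len(peers) < min_peers and extra:
        peers.add(extra.pop())
    return peers

# OSD 2 shares PGs only with OSDs 0 and 1, yet ends up with 5 heartbeat peers.
print(sorted(heartbeat_peers(2, list(range(12)), [[0, 1, 2]], min_peers=5)))

That would explain why the observed set is larger than the PG peers alone.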
On 22.11.19 23:45, Paul Emmerich wrote:
tools), it means no mapping could be found; check your crush map and
crush rule
The simplest way to get into this state is to change OSD reweights on a
small cluster where the number of OSDs equals EC n+k.
I do not know exactly, but it seems that straw2 crush a
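
To make that concrete, here is a toy simulation (Python; this is not real CRUSH/straw2, and the retry limit is only loosely analogous to choose_total_tries) of an EC pool whose width equals the number of OSDs. Each slot gets a bounded number of draws; an already-chosen OSD counts as a collision, and a reweighted-down OSD is rejected with probability (1 - reweight). When the tries run out, the slot stays unmapped:

import random

def toy_crush_map(num_osds, ec_size, reweight, tries=50):
    """Return the OSDs chosen for one PG; None marks an unfilled slot."""
    chosen = []
    for _ in range(ec_size):
        slot = None
        for _ in range(tries):
            osd = random.randrange(num_osds)      # uniform toy "straw2" draw
            if osd in chosen:                     # collision -> retry
                continue
            if random.random() >= reweight[osd]:  # reweight rejection test
                continue
            slot = osd
            break
        chosen.append(slot)
    return chosen

def incomplete_rate(reweight, num_osds=6, ec_size=6, pgs=10000):
    return sum(None in toy_crush_map(num_osds, ec_size, reweight)
               for _ in range(pgs)) / pgs

print(incomplete_rate([1.0] * 6))          # all reweights at 1.0: almost never
print(incomplete_rate([1.0] * 5 + [0.2]))  # one OSD at 0.2: a visible fraction

With no spare OSDs to fall back on, every rejection of the reweighted OSD burns retries, so some PGs end up with a NONE in their mapping, which is exactly the "no mapping could be found" situation quoted above.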