Hello,
> Sorry for the late reply.
> I have pasted the crush map at the URL below: https://pastebin.com/ASPpY2VB
> and this is my osd tree output. This issue only occurs when I use it with
> a file layout.
Could you send the output of "ceph osd pool ls detail", please?
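For reference, both of the outputs being asked for in this thread come from standard commands; something along these lines should be enough:

   # per-pool settings: size, min_size, crush_rule, pg_num, ...
   ceph osd pool ls detail

   # which metadata/data pools back the filesystem, plus MDS state
   ceph fs status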
Yoann
Hi,
you didn't really clear things up, so I'll just summarize what I
understood so far. Please also share 'ceph osd pool ls detail' and
'ceph fs status'.
One of the pools is configured with min_size 2 and size 2; this will
pause IO if one node goes down, as it's very likely that this node
holds one of the two replicas of the affected PGs.
> Sorry for the late reply.
> I have pasted the crush map at the URL below: https://pastebin.com/ASPpY2VB
> and this is my osd tree output. This issue only occurs when I use it with
> a file layout.
> ID  CLASS  WEIGHT     TYPE NAME       STATUS  REWEIGHT  PRI-AFF
> -1         327.48047  root default
> -3         109.16016      host
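If one of the pools really does have size 2 and min_size 2, that would explain the pause. A rough sketch for checking and (only as a temporary workaround) lowering min_size; 'cephfs_data' below is just a placeholder for the actual pool name:

   # check the current replication settings of the pool
   ceph osd pool get cephfs_data size
   ceph osd pool get cephfs_data min_size

   # temporary workaround: let IO continue with a single surviving replica
   # (risky; revert to min_size 2 once recovery has finished)
   ceph osd pool set cephfs_data min_size 1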
Was that a typo and do you mean you changed min_size to 1? I/O pausing with
min_size 1 and size 2 is unexpected; can you share more details, like
your crush map and your osd tree?
Quoting Amudhan P:
> Behaviour is the same even after setting min_size 2.
> On Mon 18 May, 2020, 12:34 PM Eugen Block wrote:
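In case it helps, the crush map and osd tree can be gathered with something like the following; the file names are arbitrary:

   # plain-text osd tree
   ceph osd tree

   # export the compiled crush map and decompile it to readable text
   ceph osd getcrushmap -o crushmap.bin
   crushtool -d crushmap.bin -o crushmap.txt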
Behaviour is the same even after setting min_size 2.
On Mon 18 May, 2020, 12:34 PM Eugen Block wrote:
> If your pool has a min_size 2 and size 2 (always a bad idea) it will
> pause IO in case of a failure until the recovery has finished. So the
> described behaviour is expected.
>
>
> Quoting Amudhan P:
If your pool has a min_size 2 and size 2 (always a bad idea) it will
pause IO in case of a failure until the recovery has finished. So the
described behaviour is expected.
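To confirm that this is what happens, the degraded PGs and the recovery progress can be watched while a node is down, roughly like this:

   # overall cluster state, including undersized/degraded PGs and recovery progress
   ceph -s

   # per-warning details, e.g. which PGs are inactive because they dropped below min_size
   ceph health detail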
Quoting Amudhan P:
> Hi,
> The crush rule is "replicated" and min_size is 2, actually. I am trying to test
> multiple volume configs in a single filesystem using file layouts.
Hi,
The crush rule is "replicated" and min_size is 2, actually. I am trying to test
multiple volume configs in a single filesystem using file layouts.
I have created a metadata pool with rep 3 (min_size 2 and a replicated crush
rule) and a data pool with rep 3 (min_size 2 and a replicated crush rule), and
also I
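For reference, the usual way to point part of the filesystem at a separate data pool via file layouts is roughly the following; the filesystem name 'cephfs', the pool 'cephfs_data2' and the path '/mnt/cephfs/volume2' are only placeholders, not necessarily the names used here:

   # allow the extra pool to be used as a CephFS data pool
   ceph fs add_data_pool cephfs cephfs_data2

   # point a directory at that pool; files created under it inherit the layout
   setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/volume2

   # verify the layout
   getfattr -n ceph.dir.layout /mnt/cephfs/volume2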
What’s your pool configuration wrt min_size and crush rules?
Quoting Amudhan P:
> Hi,
> I am using a Ceph Nautilus cluster with the configuration below.
> 3 nodes (Ubuntu 18.04), each with 12 OSDs; mds, mon and mgr are running
> in shared mode.
> The client is mounted through the Ceph kernel client.
> I was
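For completeness, a typical kernel-client mount in such a setup looks something like the line below; the monitor address, secret file and mount point are placeholders:

   # mount CephFS with the kernel client
   mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret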