Hi,

Please attach the full log as well. Also, make sure that you don't use any
node filters or backup filters when starting a cache. Pay special attention
to this if you are using custom affinity function settings, or if you have
enabled "excludeNeighbors" while running several nodes on the same host
machine.
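
For reference, here is a minimal Java sketch of the settings in question
(the cache name "myCache" is just a placeholder; backups=1 matches your
setup, and excludeNeighbors=false is the default):

import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheConfigSketch {
    public static CacheConfiguration<Integer, String> cacheConfig() {
        // One backup copy per partition, as in your setup.
        CacheConfiguration<Integer, String> ccfg =
            new CacheConfiguration<>("myCache");
        ccfg.setBackups(1);

        // With excludeNeighbors=true, primaries and backups are never placed
        // on nodes that share the same host, so running several nodes on one
        // machine can leave partitions with fewer copies than configured.
        RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
        aff.setExcludeNeighbors(false); // default is false
        ccfg.setAffinity(aff);

        // No node filter or backup filter is set here -- that is exactly
        // what you want to verify in your own configuration.
        return ccfg;
    }
}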

Running 'control.sh --cache distribution null' and checking the output for
the given cache (the one that you are experiencing issues with) should also
shed some light on why lost partitions are being detected -- the whole
partition distribution map is printed, so you can see which node is primary
for the Nth partition and which one is the backup.
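
For example (the cache name is a placeholder; 'null' takes the place of a
node ID and means "print the distribution for all nodes"):

./control.sh --cache distribution null your_cache_name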

Alternatively, you may also grep your logs for the message "Local node
affinity assignment distribution is not ideal". Its presence immediately
tells you that the topology is not properly configured to hold all the
necessary backups.
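
For example, assuming the default log directory (adjust the path and file
pattern to your setup):

grep "Local node affinity assignment distribution is not ideal" \
    $IGNITE_HOME/work/log/*.log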

Best regards,
Anton

Fri, 8 Oct 2021 at 12:32, Stephen Darlington <stephen.darling...@gridgain.com>:

> Can you share your configuration?
>
> On 8 Oct 2021, at 09:51, 常鑫 <xin.ch...@intotech.com.cn> wrote:
>
> Hi All,
>    We are using Ignite 2.10.0, and we have a question about partition
> backups.
>
> I configured 1 backup copy for each cache, and I started 3 nodes. But when
> I stop one node of the cluster, some partitions are lost. Why does this
> happen?
> Here is the log:
>  <temp4cj.png>
> By the way, the baseline cannot be changed, so a single node cannot be
> restarted after all three nodes are stopped.
> ---------
>
> Thanks & Regards,
> Xin Chang
>
