Ivan,
Thanks for the pointer to the discussion.
It doesn't actually address my point about the need for
'resetLostPartitions()'. It does point to a ticket that would fix the method's
logic when BLT (baseline topology) is used, but my concern is that Ignite
relies on the user to call this method at all.
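For context, here is a minimal sketch of the manual step in question, assuming a partitioned cache named "myCache" (the cache name is made up); recovery does not complete until user code makes this call:

```
import java.util.Collection;
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

public class ResetLostPartitionsSketch {
    public static void sketch(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.cache("myCache");

        // Partitions that were declared lost after node failures.
        Collection<Integer> lost = cache.lostPartitions();

        // Even after the owning nodes return (or data is restored from
        // persistence), the partitions stay in the LOST state until the
        // user explicitly acknowledges the loss:
        if (!lost.isEmpty())
            ignite.resetLostPartitions(Collections.singleton("myCache"));
    }
}
```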
Original message
Looks like this issue has already been filed:
https://issues.apache.org/jira/browse/IGNITE-9181
The actual failing code:
`assert rmtFilterFactory != null;`
Looks like the filter factory is not propagated to the remote node.
Note: When I use setRemoteFilter() (which is now
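For reference, a minimal sketch of attaching a remote filter via the factory method; the key/value types and the filter logic are invented for illustration. The failing assertion above suggests this factory does not reach a node that joins (or rejoins) while the query is active:

```
import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEventFilter;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class RemoteFilterFactorySketch {
    public static QueryCursor<?> sketch(IgniteCache<Integer, String> cache) {
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        // The factory is shipped to the nodes that own cache data; each
        // node calls create() to build its local filter instance. This is
        // the rmtFilterFactory the failing assertion checks.
        qry.setRemoteFilterFactory(
            (Factory<CacheEntryEventFilter<Integer, String>>)
                () -> evt -> evt.getValue() != null);

        // The local listener receives only events that passed the remote filter.
        qry.setLocalListener(evts ->
            evts.forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue())));

        return cache.query(qry);
    }
}
```

Note that with the factory form it is the Factory, not a filter instance, that gets serialized, so each node builds its own filter locally via create().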
Resending this to bubble it up to the top of the inbox. It would be good to
hear opinions on the suggested functionality change.
Thanks,
Roman
GitHub user novicr opened a pull request:
https://github.com/apache/ignite/pull/5528
Continuous query node restart
Add a test showing there is a problem with setting a remote filter factory on
a continuous query.
Steps to reproduce:
1. Start 4 node cluster
2. Create
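The steps are cut off above. As a rough sketch of the kind of scenario the PR title points at (the node names and restart choice here are my assumptions, not the PR's actual test):

```
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NodeRestartSketch {
    public static void main(String[] args) {
        // 1. Start a 4-node cluster (unique instance names, same JVM).
        Ignite[] nodes = new Ignite[4];
        for (int i = 0; i < nodes.length; i++)
            nodes[i] = Ignition.start(
                new IgniteConfiguration().setIgniteInstanceName("node-" + i));

        // 2. Create a cache and register a continuous query with a remote
        //    filter factory (see the sketch earlier in this thread).

        // 3. Restart one node while the query is active; on rejoin the
        //    `assert rmtFilterFactory != null;` reportedly fails.
        Ignition.stop("node-3", true);
        Ignition.start(new IgniteConfiguration().setIgniteInstanceName("node-3"));
    }
}
```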
I was going over failure recovery scenarios, trying to understand the logic
behind the lost partitions functionality. With native persistence, Ignite
fully manages data persistence and availability. If enough nodes in the
cluster become unavailable, resulting in partitions being marked lost, Ignite
ke
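For concreteness, a minimal sketch of the configuration this behavior hinges on, assuming a cache named "myCache" (made up); the loss policy controls how reads and writes against lost partitions behave until they are reset:

```
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class LossPolicySketch {
    public static CacheConfiguration<Integer, String> sketch() {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

        ccfg.setBackups(1);

        // READ_WRITE_SAFE: reads and writes that touch a lost partition fail
        // with an exception instead of silently serving missing data; the
        // rest of the cache keeps working until resetLostPartitions() is called.
        ccfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);

        return ccfg;
    }
}
```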