Thanks, Denis.

I read the docs and also followed this short video demo/tutorial:

https://www.youtube.com/watch?v=FKS8A86h-VY

It worked as described, so I then extended the exercise by adding an
additional (3rd) node, using the persistent-store configuration on all
nodes, and activating the cluster before transacting, as per the docs.

Again it all worked as in the original demo, and this time the persistence
worked as well.
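
For reference, here is roughly how each node was configured and started.
This is a minimal sketch from memory rather than my exact code, and the
class name is just a placeholder:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Enable native persistence on the default data region,
        // per the persistent-store docs.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);

        // With persistence enabled, the cluster starts inactive, so it
        // must be activated (once all three nodes are up) before transacting.
        ignite.cluster().active(true);
    }
}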

My concern and confusion are the following:

Assuming we start 3 nodes...

- If we bring down the first node (the primary?), availability of the data
is lost, even though there are another two active nodes in the cluster.
Doesn't the system automatically elect a new primary if there are enough
active nodes to maintain a replicated backup? How, then, do we achieve
'fault-tolerance' and 'high-availability'? (See the first sketch below
this list.)

- If we bring down all but the first node (the primary?), data access
continues to be available for review and manipulation. But surely this
should now fail, because there is no active secondary node to back up or
replicate any changes to? Doesn't this expose a risk to data consistency,
as there is now no backup of the changes? (See the second sketch below.)
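
On the first point, my guess is that the demo cache was created with no
backup copies ('backups' defaults to 0), which would explain why the data
becomes unavailable when its primary node dies. If I've read the docs
correctly, something like the following should keep a replica on another
node so it can be promoted when the primary fails. The cache name and
values here are just illustrative:

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class DemoCacheConfig {
    public static CacheConfiguration<Integer, String> create() {
        CacheConfiguration<Integer, String> cacheCfg =
            new CacheConfiguration<>("demoCache");
        cacheCfg.setCacheMode(CacheMode.PARTITIONED);

        // Keep one backup copy of every partition on another node, so
        // losing a single node should not lose access to any data.
        cacheCfg.setBackups(1);

        // Fail operations on lost partitions instead of silently
        // serving partial data.
        cacheCfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);

        return cacheCfg;
    }
}

Is that the right reading, or should a new primary be elected even with
backups = 0?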

Or is there a way to configure things so that the system behaves as
expected in both cases above?
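
For the second scenario, the closest thing I have found in the docs is a
TopologyValidator, which can reject cache updates when the cluster drops
below a minimum size. Is something like this the intended way to stop a
lone primary from accepting writes with no backup alive? (The threshold of
two nodes is just my guess.)

import java.util.Collection;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.TopologyValidator;

public class SafeCacheConfig {
    public static CacheConfiguration<Integer, String> create() {
        CacheConfiguration<Integer, String> cacheCfg =
            new CacheConfiguration<>("demoCache");

        // Reject updates while fewer than two server nodes are alive,
        // so changes cannot be made that have nowhere to replicate.
        cacheCfg.setTopologyValidator(new TopologyValidator() {
            @Override public boolean validate(Collection<ClusterNode> nodes) {
                return nodes.size() >= 2;
            }
        });

        return cacheCfg;
    }
}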

Jose


