Raymond,

It sounds like you want to run a certain computation on every data update in the cache, is that right?
To achieve that you can use continuous queries, but note:

- The remote filter is executed on both primary and backup nodes, so the computation would run more than once.
- You can filter by the primary flag so that the computation runs only once, but then there is a chance that some computations NEVER execute if the primary node fails before processing the event.

One way to work around this is to add a 'status' field to the cache object that indicates whether the computation for that object has completed. Then, if one of the nodes dies, you can run a query that selects all unfinished jobs and resubmits them. With this approach, duplicate computation can happen only when the topology changes; on a stable topology the computation runs only on primary nodes.

-Val
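The approach above might look roughly like the following sketch (untested, requires a running Ignite cluster). The cache name "jobs", the Job class with its status field, and the jobsCache variable are hypothetical placeholders, not part of any real example:

```java
// Continuous query whose remote filter passes events only on the
// node that is primary for the key, so the computation runs once
// on a stable topology.
ContinuousQuery<Long, Job> qry = new ContinuousQuery<>();

qry.setRemoteFilterFactory(() -> event -> {
    Ignite ignite = Ignition.localIgnite();
    return ignite.affinity("jobs")
        .isPrimary(ignite.cluster().localNode(), event.getKey());
});

qry.setLocalListener(events -> {
    for (CacheEntryEvent<? extends Long, ? extends Job> e : events) {
        // ... run the computation here, then mark the job as done,
        // e.g. by writing the entry back with status = DONE.
    }
});

try (QueryCursor<Cache.Entry<Long, Job>> cur = jobsCache.query(qry)) {
    // Keep the cursor open for as long as you want to receive updates.
}

// Recovery after a topology change: select unfinished jobs
// (status field still not DONE) and resubmit them.
ScanQuery<Long, Job> unfinished =
    new ScanQuery<>((key, val) -> val.getStatus() != Job.Status.DONE);
```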
