Hi Gregory,

> So I would need a function that returns a node id from a cache key, then a 
> function returning a node of the cluster given its id
I don’t quite get how getting a node for a key fits here (you’d need to know 
some key first), but ignite.affinity("cacheName").mapKeyToNode(key) does this.
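For reference, a minimal sketch of that key-to-node lookup. This assumes an 
embedded node started via Ignition.start() and a cache named "myCache" (both 
placeholders, not anything from your setup):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterNode;

public class AffinityLookup {
    public static void main(String[] args) {
        // Start an embedded Ignite node just for this demo.
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");

            // Map a cache key to the primary node owning its partition.
            ClusterNode node = ignite.affinity("myCache").mapKeyToNode(42);
            System.out.println("Key 42 is primary on node " + node.id());
        }
    }
}
```

On a single embedded node this naturally resolves to the local node; in a real 
cluster it returns whichever server is primary for the key’s partition.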

How about something like this:
        ClusterGroup neighborGroup =
            ignite.cluster().forServers().forHost(ignite.cluster().localNode());
        Collection<?> data = ignite.compute(neighborGroup).call(
            () -> {
                QueryCursor<?> cursor = Ignition.localIgnite()
                    .cache("foo")
                    .query(new ScanQuery<>().setLocal(true));
                return collectData(cursor);
            }
        );

Here you find your neighbor via the cluster API, then send a job that executes 
a local scan query and returns all the data.
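The collectData call above is a placeholder; one possible implementation is 
simply to drain the cursor into a list (QueryCursor.getAll() does essentially 
the same in one call):

```java
import java.util.ArrayList;
import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.cache.query.QueryCursor;

public class CursorUtils {
    // Drain a query cursor into a list, closing the cursor when done.
    static <K, V> List<Cache.Entry<K, V>> collectData(
            QueryCursor<Cache.Entry<K, V>> cursor) {
        try (QueryCursor<Cache.Entry<K, V>> c = cursor) {
            return new ArrayList<>(c.getAll());
        }
    }
}
```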

Thanks,
Stan

From: Grégory Jevardat de Fombelle
Sent: 10 July 2018 10:42
To: [email protected]
Subject: Physical colocation of Ignite nodes in different JVM's

Hello

On one hand I have a cluster of Ignite server nodes intended to store a 
partitioned cache.
On the other hand I have some "legacy" compute code running in its own JVM on 
the same machines as the Ignite nodes.

I would like to integrate into these compute JVMs an Ignite client joining the 
cluster, with the constraint that each client only gets cached data from the 
physically colocated server node, i.e. the client Ignite JVM only fetches 
cached data from the server JVM on the same machine.


So I would need a function that returns a node id from a cache key, then a 
function returning a node of the cluster given its id. And then I would call 
something like clusterNode_X.getCache("myCache").fetchAllData()

Note that as a first approach I won't handle backups and failures, but if you 
have an easy solution for that, I'll take it.

I would really appreciate some links or examples on how to do this. Using 
affinityRun and affinityCall is not realistic right now; we will migrate to 
that solution later.

Thanks for any help
Gregory 
