Hi,

As per my understanding, if a node joins the cluster after the loadCache
method has finished on the first node, then only partitions are moved to the
new node as part of rebalancing and loadCache is not executed on that node.
But if the node joins before or during the execution of loadCache on the
first node, then loadCache is executed on the second node too: it fetches all
the data, and while pushing it into the cache (closure.apply), only the
entries relevant to that node are actually stored and everything else is
discarded.
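
For reference, this is roughly the shape of the loadCache implementation I
mean (a simplified sketch, not my exact loader; the SQL, the connection
details and the key/value constructors are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;

import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.lang.IgniteBiInClosure;

public class IPV4RangeCacheDataLoader extends
        CacheStoreAdapter<IPRangeDataKey, IPV4RangeData> {

    @Override
    public void loadCache(IgniteBiInClosure<IPRangeDataKey, IPV4RangeData> clo,
                          Object... args) {
        // Runs the full query and hands every row to the closure. My
        // understanding is that Ignite then stores only the entries whose
        // partitions are mapped to the local node and discards the rest.
        try (Connection conn = DriverManager.getConnection("jdbc:..."); // details omitted
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT start_ip, end_ip, data FROM ipv4_range");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next())
                clo.apply(new IPRangeDataKey(rs.getLong(1), rs.getLong(2)),
                          new IPV4RangeData(rs.getString(3)));
        }
        catch (SQLException e) {
            throw new CacheLoaderException(e);
        }
    }

    @Override
    public IPV4RangeData load(IPRangeDataKey key) {
        return null; // single-key read-through omitted in this sketch
    }

    @Override
    public void write(Cache.Entry<? extends IPRangeDataKey,
                                  ? extends IPV4RangeData> entry) {
        // write-through is disabled in my configuration
    }

    @Override
    public void delete(Object key) {
        // write-through is disabled in my configuration
    }
}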

I am starting the second node after loadCache has completed on the first
node (both nodes run on the same machine in IntelliJ). When I start the
second node I see in the log file that rebalancing starts and completes, but
after that the loadCache method is executed on the second node anyway and it
runs all the SQL queries and fetches all the data again.
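
The way I start a node and trigger the initial load is roughly this (a
simplified fragment; getIgniteConfiguration() is the method shown further
down, and the exact startup sequence here is illustrative):

    Ignite ignite = Ignition.start(getIgniteConfiguration());

    IgniteCache<IPRangeDataKey, IPV4RangeData> cache =
            ignite.cache("IPV4RangeCache");

    // Done only on the first node; the second node is started from a separate
    // IntelliJ run configuration after this call has returned.
    cache.loadCache(null); // null predicate, no extra args passed to the store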

Can someone please advise? Also, I have set the rebalance mode to ASYNC, so
why do I see both mode=SYNC and mode=ASYNC in the log file? It looks like
rebalancing is happening twice. Can someone please explain this?


My configuration is as follows.

private IgniteConfiguration getIgniteConfiguration(){

    String HOST = "127.0.0.1:47500..47509";
    TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
    ipFinder.setAddresses(Collections.singletonList(HOST));

    TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
    discoSpi.setIpFinder(ipFinder);

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setDiscoverySpi(discoSpi);
    cfg.setIgniteInstanceName("springDataNode");
    cfg.setPeerClassLoadingEnabled(false);
    cfg.setRebalanceThreadPoolSize(4);

    CacheConfiguration<IPRangeDataKey, IPV4RangeData> ipv4RangeCacheCfg =
            new CacheConfiguration<>("IPV4RangeCache");
    ipv4RangeCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    ipv4RangeCacheCfg.setWriteThrough(false);
    ipv4RangeCacheCfg.setReadThrough(true);
    ipv4RangeCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
    ipv4RangeCacheCfg.setBackups(1);
    Factory<IPV4RangeCacheDataLoader> storeFactory =
            FactoryBuilder.factoryOf(IPV4RangeCacheDataLoader.class);
    ipv4RangeCacheCfg.setCacheStoreFactory(storeFactory);

    cfg.setCacheConfiguration(ipv4RangeCacheCfg);
    return cfg;
}


*Log File:*

12:50:20,464 10661 [exchange-worker-#42%springDataNode%] INFO
o.a.i.i.p.c.GridCachePartitionExchangeManager - Rebalancing started
[top=AffinityTopologyVersion [topVer=2, minorTopVer=0], evt=NODE_JOINED,
node=9762f200-2e9e-44bb-948d-84bea83f7327]
12:50:20,464 10661 [exchange-worker-#42%springDataNode%] INFO
o.a.i.i.p.c.d.d.p.GridDhtPartitionDemander - Starting rebalancing [
*mode=SYNC,* fromNode=b368b9f4-716d-4e4b-8966-b9432c11f4f3,
partitionsCount=100, topology=AffinityTopologyVersion [topVer=2,
minorTopVer=0], updateSeq=1]
12:50:20,487 10684 [utility-#51%springDataNode%] INFO
o.a.i.i.p.c.d.d.p.GridDhtPartitionDemander - Completed (final) rebalancing
[fromNode=b368b9f4-716d-4e4b-8966-b9432c11f4f3,
topology=AffinityTopologyVersion [topVer=2, minorTopVer=0], time=31 ms]
12:50:20,488 10685 [utility-#51%springDataNode%] INFO
o.a.i.i.p.c.d.d.p.GridDhtPartitionDemander - Starting rebalancing [
*mode=ASYNC*, fromNode=b368b9f4-716d-4e4b-8966-b9432c11f4f3,
partitionsCount=1024, topology=AffinityTopologyVersion [topVer=2,
minorTopVer=0], updateSeq=1]
12:50:21,109 11306 [sys-#44%springDataNode%] INFO
o.a.i.i.p.c.d.d.p.GridDhtPartitionDemander - Completed (final) rebalancing
[fromNode=b368b9f4-716d-4e4b-8966-b9432c11f4f3,
topology=AffinityTopologyVersion [topVer=2, minorTopVer=0], time=619 ms]
12:50:21,109 11306 [sys-#44%springDataNode%] INFO
o.a.i.i.p.c.d.d.p.GridDhtPartitionDemander - Starting rebalancing
[mode=SYNC, fromNode=b368b9f4-716d-4e4b-8966-b9432c11f4f3,
partitionsCount=509, topology=AffinityTopologyVersion [topVer=2,
minorTopVer=0], updateSeq=1]
12:50:21,119 11316 [sys-#44%springDataNode%] INFO
o.a.i.i.p.c.d.d.p.GridDhtPartitionDemander - Completed (final) rebalancing
[fromNode=b368b9f4-716d-4e4b-8966-b9432c11f4f3,
topology=AffinityTopologyVersion [topVer=2, minorTopVer=0], time=10 ms]
