Re: NullPointerException in GridDhtPartitionDemander

2018-01-09 Thread ilya.kasnacheev
Hello! I think you should seek ways to avoid creating a client for each request. This is slow and dangerous, since clients are members of the topology and the join process is non-trivial. For example, rebalancing can be postponed by joining clients. Regards, -- Sent from: http://apache-ignite-
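The advice above, reusing one long-lived client node instead of starting and stopping a client per request, can be sketched as a lazily initialized singleton. This is a minimal sketch, not code from the thread; the class and field names are hypothetical, and it assumes the standard Ignite 2.x `Ignition`/`IgniteConfiguration` API:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

// Hypothetical holder class: one client node per JVM, shared by all requests.
public final class SharedIgniteClient {

    // Lazily initialized, shared across all requests (double-checked locking).
    private static volatile Ignite client;

    private SharedIgniteClient() {
    }

    public static Ignite get() {
        if (client == null) {
            synchronized (SharedIgniteClient.class) {
                if (client == null) {
                    IgniteConfiguration cfg = new IgniteConfiguration();

                    // Join the cluster as a client node, not a server node.
                    cfg.setClientMode(true);

                    client = Ignition.start(cfg);
                }
            }
        }
        return client;
    }
}
```

With this lifecycle, a request would call something like `SharedIgniteClient.get().cache("myCache").get(key)` (cache name hypothetical): every request reuses the same topology member, so the topology version no longer increments on each request and rebalancing is not repeatedly delayed by join/leave events.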

Re: NullPointerException in GridDhtPartitionDemander

2017-12-20 Thread aMark
Hi, A client gets created for each request, and the client connects to the cluster to read the data. Once reading is done, the client exits. This explains the high topology version. Server nodes, however, are not created often. We can try Ignite 2.3 in the next release, but we are close to our release date he

Re: NullPointerException in GridDhtPartitionDemander

2017-12-20 Thread ilya.kasnacheev
Hello! Can you please elaborate on why you have such a high number of topology versions (69 in this case)? Can you describe the life cycle of your topology? Which nodes join, and when? Which nodes leave, and when? Do you, by any chance, create client nodes for every request or small batch, or even re

Re: NullPointerException in GridDhtPartitionDemander

2017-12-19 Thread aMark
Thanks for the response. It is happening very frequently. It looks like an issue with exchanging the partition map in the cluster. We are getting the failure below on all the nodes: 2017-12-19 13:59:43,327 WARN [main] {} org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager - Still waiting

Re: NullPointerException in GridDhtPartitionDemander

2017-12-19 Thread ilya.kasnacheev
Hello! This is a problem not reported previously, and it looks non-trivial. Does it happen every time with your configuration, or is it random? What causes this rebalancing? A node added? A node removed? Something else? I've noticed you have topVer=613, which is unusually high. Why did you have so man

NullPointerException in GridDhtPartitionDemander

2017-12-19 Thread aMark
Hi, We are using Ignite version 2.1 as a persistent cache in PARTITIONED mode, with 4 cluster nodes running. Atomicity mode is ATOMIC and rebalance mode is ASYNC, while CacheWriteSynchronizationMode is FULL_SYNC. But it looks like that when nodes are getting rebalanced then we
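For reference, the setup described above (PARTITIONED mode, ATOMIC atomicity, ASYNC rebalancing, FULL_SYNC write synchronization, native persistence on Ignite 2.1) would correspond to roughly the following configuration fragment. This is a sketch assuming the Ignite 2.1 API, not the poster's actual config; the cache name is hypothetical, and `PersistentStoreConfiguration` was later replaced by `DataStorageConfiguration` in Ignite 2.3:

```java
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

// Enable native persistence (Ignite 2.1 API).
IgniteConfiguration igniteCfg = new IgniteConfiguration()
    .setPersistentStoreConfiguration(new PersistentStoreConfiguration());

// Cache configuration matching the modes described in the message
// ("myCache" is a hypothetical name).
CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("myCache")
    .setCacheMode(CacheMode.PARTITIONED)
    .setAtomicityMode(CacheAtomicityMode.ATOMIC)
    .setRebalanceMode(CacheRebalanceMode.ASYNC)
    .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

igniteCfg.setCacheConfiguration(cacheCfg);
```

With ASYNC rebalancing, partition transfer to joining nodes happens in the background, which is exactly the code path (GridDhtPartitionDemander) where the reported NullPointerException occurs.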