Hello!
I think you should look for ways to avoid creating a client for each request.
This is slow and dangerous, since clients are members of the topology and the
join process is non-trivial. Your rebalancing can be postponed by joining
clients, for example.
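For illustration, here is a minimal Java sketch of the reuse pattern I mean.
The class and method names (SharedClientHolder, read) and the bare client
configuration are illustrative assumptions, not taken from your code:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class SharedClientHolder {
    // One long-lived client node, started once and shared by all requests.
    private static final Ignite CLIENT = Ignition.start(
        new IgniteConfiguration().setClientMode(true));

    public static Object read(String cacheName, Object key) {
        // Reuses the already-joined client, so no topology change per request.
        IgniteCache<Object, Object> cache = CLIENT.cache(cacheName);
        return cache.get(key);
    }
}

This way the client joins the topology once at startup instead of on every read.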
Regards,
Hi,
A client gets created for each request, and the client connects to the cluster
to read the data. Once reading is done, the client exits. This explains the
high topology version.
Server nodes, though, are not created often.
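For reference, the request path is roughly the following simplified Java
sketch (the class, cache, and key names are illustrative, not our actual
code); each request adds one join and one leave to the topology history:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PerRequestRead {
    public static Object read(String cacheName, Object key) {
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);
        // The client joins the topology here...
        try (Ignite client = Ignition.start(cfg)) {
            return client.cache(cacheName).get(key);
        }
        // ...and leaves it again when the try block closes the node.
    }
}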
We can try Ignite 2.3 in the next release, but we are close to our release
date he...
Hello!
Can you please elaborate on why you have such a high number of topology
versions (69 in this case)? Can you please describe the life cycle of your
topology? Which nodes join, and when? Which nodes leave, and when?
Don't you, by any chance, create client nodes for every request or small
batch, or even re...
Thanks for the response.
It is happening very frequently. It looks like there is some issue with the
partition map exchange in the cluster.
We are getting the warning below on all the nodes:
2017-12-19 13:59:43,327 WARN [main] {} org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager - Still waiting ...
Hello!
This is a problem that has not been reported previously, and it looks
non-trivial.
Does it happen every time with your configuration, or is it random?
What causes this rebalancing? A node added? A node removed? Something else?
I've noticed you have topVer=613, which is unusually high. Why did you have
so many?
Hi,
We are using Ignite version 2.1 as a persistent cache in PARTITIONED mode,
with 4 cluster nodes running. The atomicity mode is ATOMIC, the rebalance
mode is ASYNC, and the CacheWriteSynchronizationMode is FULL_SYNC.
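For reference, a minimal sketch of that configuration in Java, assuming the
Ignite 2.1 API (PersistentStoreConfiguration was the persistence API before
2.3); the cache name is illustrative, not our actual one:

import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class ClusterConfig {
    public static IgniteConfiguration create() {
        CacheConfiguration<Object, Object> cacheCfg =
            new CacheConfiguration<Object, Object>("myCache") // illustrative name
                .setCacheMode(CacheMode.PARTITIONED)
                .setAtomicityMode(CacheAtomicityMode.ATOMIC)
                .setRebalanceMode(CacheRebalanceMode.ASYNC)
                .setWriteSynchronizationMode(
                    CacheWriteSynchronizationMode.FULL_SYNC);

        return new IgniteConfiguration()
            // Enables native persistence on the 2.1/2.2 API.
            .setPersistentStoreConfiguration(new PersistentStoreConfiguration())
            .setCacheConfiguration(cacheCfg);
    }
}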
But it looks like that when the nodes are getting rebalanced, we...