Also, I suppose another option would be to handle the event by telling our load balancer to take the node out of its pool and then have Ignite attempt to reconnect, but I have no idea how to tell Ignite to do that.
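For the listener half of that, something like the sketch below might work. This is a minimal sketch only: the class name, the register() method, and the HEALTHY flag are hypothetical wiring for the health check, and depending on the Ignite version you may also need to enable EVT_NODE_SEGMENTED via IgniteConfiguration.setIncludeEventTypes().

    import java.util.concurrent.atomic.AtomicBoolean;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.events.Event;
    import org.apache.ignite.events.EventType;
    import org.apache.ignite.lang.IgnitePredicate;

    public class SegmentationHealthHook {

        /** Flag a health-check servlet could poll (hypothetical wiring). */
        public static final AtomicBoolean HEALTHY = new AtomicBoolean(true);

        public static void register(Ignite ignite) {
            ignite.events().localListen((IgnitePredicate<Event>) evt -> {
                // This node has been segmented from the cluster; start
                // failing health checks so the load balancer drains it,
                // after which Tomcat can be restarted.
                HEALTHY.set(false);
                return true; // keep the listener registered
            }, EventType.EVT_NODE_SEGMENTED);
        }
    }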
Ralph

> On Jan 18, 2018, at 9:10 AM, Ralph Goers <ralph.go...@dslextreme.com> wrote:
>
> Thanks for the info.
>
> RESTART_JVM won’t work for us because this is a web application running in
> Tomcat that uses Ignite for a distributed cache. RESTART_JVM is documented
> as working only in a command-line application.
>
> The problem we experienced was bad since our load balancer didn’t realize
> one of the nodes was effectively dead, because the health checks didn’t
> detect that Ignite had stopped. The documentation for SegmentationPolicy
> mentions that events are sent to a listener. If I understand correctly, we
> need to call ignite.events().localListen() to listen for the shutdown and
> then cause our health detection to respond with a failure so we can restart
> Tomcat. Is this correct?
>
> Ralph
>
>> On Jan 18, 2018, at 7:15 AM, ilya.kasnacheev <ilya.kasnach...@gmail.com> wrote:
>>
>> Hello!
>>
>> Maybe it's a network problem and not a full GC. As you can see from the
>> logs, there's a failure to acknowledge a discovery message.
>>
>> As for "re-discovery" - as far as my understanding goes, that's not how
>> Ignite works. Client nodes will indeed try to re-discover the server, but
>> server nodes consider themselves "segmented". After that they *could*
>> proceed working solo, but by default they won't. This is governed by the
>> segmentation policy in the Ignite configuration, and by default it is
>> "stop", i.e., shut down the node.
>>
>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/plugin/segmentation/SegmentationPolicy.html
>>
>> There's a RESTART_JVM policy, and that's probably the one you should set
>> for the behaviour you desire.
>>
>> Regards,
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
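For reference, the segmentation policy Ilya mentions is set on the node configuration. A minimal sketch, assuming a programmatic (non-Spring-XML) node start; the class name is illustrative only:

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.plugin.segmentation.SegmentationPolicy;

    public class SegmentationPolicyConfig {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration();

            // STOP is the default: a segmented node shuts itself down.
            // RESTART_JVM works only for nodes started via the
            // ignite.sh/ignite.bat scripts, so it does not apply to a
            // node embedded in Tomcat. NOOP would leave the segmented
            // node running solo.
            cfg.setSegmentationPolicy(SegmentationPolicy.STOP);

            Ignition.start(cfg);
        }
    }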