Rick,

Looks OK to me.
You ran two nodes, then you killed one, and the other node reported that the
killed node was dropped from the grid.

What is the issue?
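In case it helps, here is a minimal sketch of configuring the multicast ipFinder
through the API so that it matches the example-cache.xml settings. The multicast
group 228.1.2.4 and the address range 127.0.0.1:47500..47509 are taken from the
XML quoted below; the class name StartNode is just a placeholder, not part of
your project:

```java
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;

public class StartNode {
    public static void main(String[] args) {
        // Mirror the XML config: same multicast group and the same static
        // address range, so both nodes can discover each other.
        TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
        ipFinder.setMulticastGroup("228.1.2.4");
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(false);
        cfg.setDiscoverySpi(spi);

        Ignite ignite = Ignition.start(cfg);
        System.out.println("Servers in topology: " + ignite.cluster().nodes().size());
    }
}
```

If the discovery settings match on both sides, starting this alongside the node
launched with ./bin/ignite.sh config/example-cache.xml should produce a topology
snapshot with servers=2.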

On Thu, Jan 25, 2018 at 12:38 PM, <linr...@itri.org.tw> wrote:

> Hi Andrey,
>
>
>
> 1. There were no other running nodes when I started the two nodes.
>
>
>
> 2. First I started one node (shell script), and then I started the other
> node (Maven project, Java).
>
> I closed the other node (Maven project) and *the first node was still
> running*. The program output of the first node shows:
>
>
>
> [25-Jan-2018 17:32:25][WARN ][tcp-disco-msg-worker-#2%null%][TcpDiscoverySpi]
> Local node has detected failed nodes and started cluster-wide procedure. To
> speed up failure detection please see 'Failure Detection' section under
> javadoc for 'TcpDiscoverySpi'
>
>
>
> [25-01-2018 17:32:25][INFO 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Added new node to topology: TcpDiscoveryNode 
> [id=664c870e-6b93-4328-a95b-9e04d5b4f59c,
> addrs=[0:0:0:0:0:0:0:1%lo,  127.0.0.1], sockAddrs=[ubuntu/ 127.0.0.1:47501,
> /0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], discPort=47501, order=10,
> intOrder=6, lastExchangeTime=1516872738417, loc=false,
> ver=1.9.0#20170302-sha1:a8169d0a, isClient=false]
>
>
>
> [25-01-2018 17:32:25][INFO 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> **Topology snapshot [ver=10, servers=2, clients=0, CPUs=4, heap=4.5GB]**
>
>
>
> [25-Jan-2018 17:32:25][WARN 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Node FAILED: TcpDiscoveryNode [id=664c870e-6b93-4328-a95b-9e04d5b4f59c,
> addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1], sockAddrs=[ubuntu/127.0.0.1:47501,
> /0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], discPort=47501, order=10,
> intOrder=6, lastExchangeTime=1516872738417, loc=false,
> ver=1.9.0#20170302-sha1:a8169d0a, isClient=false]
>
>
>
> [25-01-2018 17:32:25][INFO 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Topology snapshot *[ver=11, servers=1, clients=0, CPUs=4, heap=1.0GB]*
>
>
>
> Rick
>
>
>
> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
> *Sent:* Thursday, January 25, 2018 5:10 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: One problem about Cluster Configuration(cfg)
>
>
>
> Hi Rick,
>
>
>
> Did you have any luck resolving this?
>
> Or do you still observe the issue when configuring the ipFinder via the API?
>
>
>
> On Thu, Jan 25, 2018 at 11:29 AM, <linr...@itri.org.tw> wrote:
>
> Hi all,
>
>
>
> By the way, I run two nodes on localhost, and the multicastGroup IP and
> port are the default settings in example-cache.xml, as:
>
> ============================================================
> ===================================================
>
> <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.
> TcpDiscoveryMulticastIpFinder">
>
>                   <property name ="multicastGroup" value="228.1.2.4"/>
>
>                   <property name="addresses">
>
>                       <list>
>
>                         <!--In distributed environment, replace with
> actual host IP address. -->
>
>                         <value>127.0.0.1:47500..47509</value>
>
>                       </list>
>
>                   </property>
>
>                 </bean>
>
> ============================================================
> ===================================================
>
>
>
> Rick
>
>
>
> *From:* linr...@itri.org.tw [mailto:linr...@itri.org.tw]
> *Sent:* Thursday, January 25, 2018 3:51 PM
> *To:* user@ignite.apache.org
> *Subject:* One problem about Cluster Configuration(cfg)
>
>
>
> Hi all,
>
>
>
> I have tried to construct a cluster with two nodes.
>
>
>
> my run environment ==============================
> ================================================================
>
> OS: Ubuntu 14.04.5 LTS
>
> Java version: 1.7
>
> Ignite version: 1.9.0
>
> ============================================================
> ===================================================
>
>
>
> One node with an “example-cache.xml” was started by the shell script with
> the following command: ./bin/ignite.sh config/example-cache.xml
>
> The program output is as follows:
>
> shell script result ==============================
> ==================================================================
>
> Local node [ID=D411C309-E56A-4773-ABD1-132ADE62C325, order=1,
> clientMode=false]
>
> *Local node addresses: [ubuntu/0:0:0:0:0:0:0:1%lo, /127.0.0.1
> <http://127.0.0.1>]*
>
> *Local ports: TCP:8080 TCP:11211 TCP:47100 UDP:47400 TCP:47500*
>
>
>
> [25-01-2018 15:23:44][INFO ][main][GridDiscoveryManager] Topology snapshot
> [*ver=1, servers=1, clients=0, CPUs=4, heap=1.0GB*]
>
> [25-01-2018 15:23:48][INFO ][Thread-23][G] Invoking shutdown hook...
>
> [25-01-2018 15:23:48][INFO ][Thread-23][GridTcpRestProtocol] Command
> protocol successfully stopped: TCP binary
>
> [25-01-2018 15:23:48][INFO ][Thread-23][GridJettyRestProtocol] Command
> protocol successfully stopped: Jetty REST
>
> [25-01-2018 15:23:48][INFO ][Thread-23][GridCacheProcessor] Stopped
> cache: *oneCache*
>
> ============================================================
> ===================================================
>
>
>
> The other node was started from the Maven project (Java 1.7) with the
> following command: mvn compile exec:java -Dexec.mainClass=…
>
> In addition, my Java code is:
>
> Java code ============================================================
> ==========================================
>
> TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
>
> TcpDiscoverySpi spi = new TcpDiscoverySpi();
> spi.setIpFinder(ipFinder);
>
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setClientMode(false);
> cfg.setDiscoverySpi(spi);
>
> Ignite igniteVar = Ignition.getOrStart(cfg);
>
> CacheConfiguration cacheConf = new CacheConfiguration();
> cacheConf.setName("oneCache");
> cacheConf.setIndexedTypes(String.class, String.class);
> IgniteCache cache = igniteVar.getOrCreateCache(cacheConf);
>
> ============================================================
> ===================================================
>
>
>
> The output of the Java program is as follows:
>
> Maven project(java) result ==============================
> ===========================================================
>
> SLF4J: Class path contains multiple SLF4J bindings.
>
> SLF4J: Found binding in [jar:file:/root/.m2/repository/org/slf4j/slf4j-
> log4j12/1.7.25/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/
> StaticLoggerBinder.class]
>
> SLF4J: Found binding in [jar:file:/root/.m2/repository/org/slf4j/slf4j-
> jdk14/1.7.25/slf4j-jdk14-1.7.25.jar!/org/slf4j/impl/
> StaticLoggerBinder.class]
>
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
>
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>
> *Program execution then blocks…*
>
> ============================================================
> ===================================================
>
>
>
> And if I closed the first node (shell script), the Maven project program
> started running:
>
> ============================================================
> ===================================================
>
> [15:32:13] Performance suggestions for grid  (fix if possible)
>
> [15:32:13] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
>
> [15:32:13]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
> options)
>
> [15:32:13]   ^-- Specify JVM heap max size (add '-Xmx<size>[g|G|m|M|k|K]'
> to JVM options)
>
> [15:32:13]   ^-- Set max direct memory size if getting 'OOME: Direct
> buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM
> options)
>
> [15:32:13]   ^-- Disable processing of calls to System.gc() (add
> '-XX:+DisableExplicitGC' to JVM options)
>
> [15:32:13] Refer to this page for more performance suggestions:
> https://apacheignite.readme.io/docs/jvm-and-system-tuning
>
> [15:32:13]
>
> [15:32:13] To start Console Management & Monitoring run
> ignitevisorcmd.{sh|bat}
>
> [15:32:13]
>
> [15:32:13] Ignite node started OK (id=753b6c7e)
>
> [15:32:13] Topology snapshot [*ver=1, servers=1, clients=0, CPUs=4,
> heap=3.5GB*]
>
> ============================================================
> ===================================================
>
>
>
> I have no idea how to connect both nodes and share the same oneCache in
> the above situation.
>
>
>
> If any further information is needed, please let me know and I will
> provide it as soon as possible.
>
>
>
> I am looking forward to hearing from you.
>
>
>
> Rick
>
>
>
>
>
>
>
> --
> 本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內容,並請銷毀此信件。 This email may contain
> confidential information. Please do not use or disclose it in any way and
> delete it if you are not the intended recipient.
>
>
>
>
>
>
>
>
> --
>
> Best regards,
> Andrey V. Mashenkov
>
>
>



-- 
Best regards,
Andrey V. Mashenkov
