Hi, I am running Ignite server nodes on a YARN cluster; the cluster properties are attached as ignite-cluster.properties. All server nodes run on a single host, say "host1".
The servers run with persistent storage enabled; the configuration is attached as ignite-config.xml. From another host, say "host2", I start a main program with the same configuration but in client mode. The program runs a simple SELECT query against the Ignite cluster, and it fails with:

Caused by: javax.cache.CacheException: Failed to execute map query on the node: a2cf190a-6a44-4b94-baea-c9b88a16922e, class org.apache.ignite.IgniteCheckedException: Failed to execute SQL query.

The full stack trace is attached as "DriverStackTrace.txt" and reproduced below. However, if I first run an INSERT into the same table from the same main program and then fire the SELECT, the results come back. This is reproducible every time: I have to insert at least one record before a SELECT will work.
I can also see the failing node a2cf190a (@n9) in ignitevisor; the topology output is attached as "top.txt".

Regards,
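For reference, the client-side read path shown in the stack trace (IgniteSqlRDD via the Spark integration) corresponds roughly to the sketch below. The cache and table names here are hypothetical placeholders, and it assumes Ignite 2.3's Spark module with clientMode set to true in the XML config on host2:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.ignite.spark.IgniteContext

object SelectFromIgnite {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ignite-select"))

    // Points at the same Spring XML config; on this host it should
    // start Ignite in client mode.
    val ic = new IgniteContext(sc, "ignite-config.xml")

    // "PersonCache" and "Person" are hypothetical names for illustration.
    val rdd = ic.fromCache[Long, Object]("PersonCache")

    // This is the call that fails with the CacheException above until
    // at least one row has been inserted from the same program.
    val df = rdd.sql("select * from Person")
    df.show()
  }
}
```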
visor> top

Hosts: 2

Host 1 (Linux amd64 4.4.0-109-generic)
  IPs: 0:0:0:0:0:0:0:1%lo, 10.10.13.36, 127.0.0.1 | CPUs: 64 | MAC: 00:A2:EE:E8:B5:76 | CPU load: 0.01 %
  Nodes:
     1: C626F82C(@n0)   Server
     2: 4CC62A12(@n1)   Server
     3: 15E5CD4F(@n2)   Server
     4: 7C43286B(@n3)   Server
     5: 72A737BF(@n4)   Server
     6: 4CE19A41(@n5)   Server
     7: 2AE6B4D3(@n6)   Server
     8: 0117773D(@n7)   Server
     9: 3AC87367(@n8)   Server
    10: A2CF190A(@n9)   Server
    11: F600A88E(@n10)  Client
    12: 10D711D3(@n11)  Server
    13: E6C55B89(@n12)  Server

Host 2 (Windows 8.1 amd64 6.3)
  IPs: 0:0:0:0:0:0:0:1, 10.10.12.119, 127.0.0.1 | CPUs: 4 | MACs: 00:00:00:00:00:00:00:E0, 64:00:6A:76:4C:6D | CPU load: 0.00 %
  Nodes:
     1: 9F43A223(@n13)  Client

Summary:
  Active        | true
  Total hosts   | 2
  Total nodes   | 14
  Total CPUs    | 68
  Avg. CPU load | 0.01 %
  Avg. free heap| 83.00 %
  Avg. up time  | 00:47:01
  Snapshot time | 01/25/18, 17:19:26
DriverStackTrace.txt (excerpt):

    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1499)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1487)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1486)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1486)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1714)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:336)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2853)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2153)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2153)
    at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2837)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2836)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2153)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2366)
    at org.apache.spark.sql.Dataset.takeAsList(Dataset.scala:2377)
    at com.pragmatix.platform.cubix.service.SparkQueryExceutorImpl.executeSql(SparkQueryExceutorImpl.scala:7720)
    at com.pragmatix.platform.cubix.service.SparkJdbcImpl.executeSql(SparkJdbcImpl.scala:12)
    at com.pragmatix.platform.driver.SparkDriverServiceImpl.executeSql(SparkDriverServiceImpl.java:79)
    ... 48 more
Caused by: javax.cache.CacheException: Failed to run map query remotely.
    at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:748)
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1212)
    at org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
    at org.apache.ignite.spark.impl.IgniteSqlRDD.compute(IgniteSqlRDD.scala:40)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
Caused by: javax.cache.CacheException: Failed to execute map query on the node: a2cf190a-6a44-4b94-baea-c9b88a16922e, class org.apache.ignite.IgniteCheckedException: Failed to execute SQL query.
    at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.fail(GridReduceQueryExecutor.java:274)
    at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onFail(GridReduceQueryExecutor.java:264)
    at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onMessage(GridReduceQueryExecutor.java:243)
    at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor$2.onMessage(GridReduceQueryExecutor.java:187)
    at org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:2332)
    at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
    at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
    at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
    at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
    ... 3 more

ignite-config.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="peerClassLoadingEnabled" value="true"/>

        <!-- Enabling Apache Ignite Persistent Store. -->
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="storagePath" value="/pragmatix/apache-ignite/apache-ignite-fabric-2.3.0-bin/work/"/>
                <property name="walPath" value="/pragmatix/apache-ignite/apache-ignite-fabric-2.3.0-bin/work/wal"/>
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="persistenceEnabled" value="true"/>
                    </bean>
                </property>
            </bean>
        </property>

        <!-- Explicitly configure TCP discovery SPI to provide a list of initial nodes. -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
                    <!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                        <property name="addresses">
                            <list>
                                <!-- In distributed environment, replace with actual host IP address. -->
                                <value>127.0.0.1:47500..47550</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
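Note that the discovery section in the config above lists only 127.0.0.1, which can only locate nodes on the same machine. For a two-host setup like the one described (servers on host1, client on host2), the static IP finder pointing at the server host is the usual alternative. A sketch of that fragment, assuming "host1" is a resolvable placeholder for the actual server address and the default discovery port range:

```xml
<property name="ipFinder">
    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
            <list>
                <!-- "host1" is a placeholder; substitute the real server host IP. -->
                <value>host1:47500..47550</value>
            </list>
        </property>
    </bean>
</property>
```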
[Attachment: ignite-cluster.properties (binary data)]