[ 
https://issues.apache.org/jira/browse/HIVE-12810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089142#comment-15089142
 ] 

Matjaz Skerjanec commented on HIVE-12810:
-----------------------------------------

I'm concerned about this procedure, since we are just at the beginning of 
loading a few million entries per day into this table... it should work as 
expected, not only with workarounds.

Why do you think a select works with 16 million entries but fails with 34 
million entries?
Is there perhaps some limiting parameter that needs to be set?
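
For what it's worth, here is a hedged sketch of ORC split-generation settings I could experiment with as a diagnostic (property names are from the Hive 1.2+ configuration docs; I have not verified they change the behavior here, and this is not meant as the actual fix):

```sql
-- Assumption: these properties exist in our Hive build (1.2+).
-- BI skips reading ORC footers during split generation, ETL reads them,
-- HYBRID (the default) decides per query based on file sizes.
SET hive.exec.orc.split.strategy=BI;
-- Number of threads used to read ORC file footers while computing splits;
-- lowering it may change the interrupted-RPC pattern seen in the log.
SET hive.orc.compute.splits.num.threads=10;
```

If the select succeeds with strategy BI but fails with ETL/HYBRID, that would at least point at footer reading during split computation as the problem area.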

If I look into the application log, I first see a few warnings:
WARN [ORC_GET_SPLITS #0] ipc.Client: interrupted waiting to send rpc request to 
server

and later on an error:

ERROR [Dispatcher thread: Central] impl.VertexImpl: Vertex Input: tablename 
initializer failed, vertex=vertex_1452091205505_0030_1_00 [Map 1]
org.apache.tez.dag.app.dag.impl.AMUserCodeException: 
java.lang.RuntimeException: serious problem

and a Java exception:

java.lang.IndexOutOfBoundsException: Index:

----------
#hdfs dfs -ls /apps/hive/warehouse/archive.db.db/tablename/part-m-00035

Container: container_e09_1452091205505_0030_01_000001 on 
hdp-node3.something.com_45454
===============================================================================
LogType:dag_1452091205505_0030_1.dot
Log Upload Time:Fri Jan 08 09:37:16 +0100 2016
LogLength:1102
Log Contents:
digraph hive_20160108092603_9b21f8a6_0a4a_45aa_9006_2e88328cd361_32 {
graph [ label="hive_20160108092603_9b21f8a6_0a4a_45aa_9006_2e88328cd361_32", 
fontsize=24, fontname=Helvetica];
node [fontsize=12, fontname=Helvetica];
edge [fontsize=9, fontcolor=blue, fontname=Arial];
"hive_20160108092603_9b21f8a6_0a4a_45aa_9006_2e88328cd361_32.Map_1" [ label = 
"Map_1[MapTezProcessor]" ];
"hive_20160108092603_9b21f8a6_0a4a_45aa_9006_2e88328cd361_32.Map_1" -> 
"hive_20160108092603_9b21f8a6_0a4a_45aa_9006_2e88328cd361_32.Map_1_out_Map_1" [ 
label = "Output [outputClass=MROutput,\n committer=]" ];
"hive_20160108092603_9b21f8a6_0a4a_45aa_9006_2e88328cd361_32.Map_1_tablename" [ 
label = "Map_1[tablename]", shape = "box" ];
"hive_20160108092603_9b21f8a6_0a4a_45aa_9006_2e88328cd361_32.Map_1_tablename" 
-> "hive_20160108092603_9b21f8a6_0a4a_45aa_9006_2e88328cd361_32.Map_1" [ label 
= "Input [inputClass=MRInputLegacy,\n initializer=HiveSplitGenerator]" ];
"hive_20160108092603_9b21f8a6_0a4a_45aa_9006_2e88328cd361_32.Map_1_out_Map_1" [ 
label = "Map_1[out_Map_1]", shape = "box" ];
}
End of LogType:dag_1452091205505_0030_1.dot

LogType:stderr
Log Upload Time:Fri Jan 08 09:37:16 +0100 2016
LogLength:118
Log Contents:
2016-01-08 09:26:10 Running Dag: dag_1452091205505_0030_1
2016-01-08 09:26:12 Completed Dag: dag_1452091205505_0030_1
End of LogType:stderr

LogType:stdout
Log Upload Time:Fri Jan 08 09:37:16 +0100 2016
LogLength:8503
Log Contents:
0.839: [GC pause (G1 Evacuation Pause) (young), 0.0122097 secs]
   [Parallel Time: 7.9 ms, GC Workers: 4]
      [GC Worker Start (ms): Min: 838.9, Avg: 839.0, Max: 839.1, Diff: 0.2]
      [Ext Root Scanning (ms): Min: 0.0, Avg: 2.2, Max: 4.1, Diff: 4.1, Sum: 
8.7]
      [Update RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Processed Buffers: Min: 0, Avg: 0.0, Max: 0, Diff: 0, Sum: 0]
      [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.2]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.2, Max: 0.7, Diff: 0.7, Sum: 
0.7]
      [Object Copy (ms): Min: 0.0, Avg: 2.8, Max: 4.4, Diff: 4.4, Sum: 11.4]
      [Termination (ms): Min: 0.0, Avg: 0.3, Max: 0.4, Diff: 0.4, Sum: 1.0]
      [GC Worker Other (ms): Min: 0.0, Avg: 2.1, Max: 7.5, Diff: 7.5, Sum: 8.6]
      [GC Worker Total (ms): Min: 7.5, Avg: 7.6, Max: 7.8, Diff: 0.2, Sum: 30.6]
      [GC Worker End (ms): Min: 846.7, Avg: 846.7, Max: 846.7, Diff: 0.0]
   [Code Root Fixup: 0.2 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.4 ms]
   [Other: 3.6 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 2.6 ms]
      [Ref Enq: 0.0 ms]
      [Redirty Cards: 0.6 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.1 ms]
   [Eden: 18.0M(18.0M)->0.0B(23.0M) Survivors: 0.0B->3072.0K Heap: 
18.0M(368.0M)->2914.3K(368.0M)]
 [Times: user=0.02 sys=0.00, real=0.01 secs]
1.503: [GC pause (G1 Evacuation Pause) (young), 0.0107880 secs]
   [Parallel Time: 6.9 ms, GC Workers: 4]
      [GC Worker Start (ms): Min: 1502.7, Avg: 1502.8, Max: 1502.9, Diff: 0.2]
      [Ext Root Scanning (ms): Min: 0.5, Avg: 1.3, Max: 1.8, Diff: 1.3, Sum: 
5.4]
      [Update RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Processed Buffers: Min: 0, Avg: 0.0, Max: 0, Diff: 0, Sum: 0]
      [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
      [Code Root Scanning (ms): Min: 0.0, Avg: 1.2, Max: 3.1, Diff: 3.1, Sum: 
4.8]
      [Object Copy (ms): Min: 2.1, Avg: 4.0, Max: 4.8, Diff: 2.8, Sum: 16.2]
      [Termination (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 0.6]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
      [GC Worker Total (ms): Min: 6.7, Avg: 6.8, Max: 6.9, Diff: 0.2, Sum: 27.1]
      [GC Worker End (ms): Min: 1509.6, Avg: 1509.6, Max: 1509.6, Diff: 0.0]
   [Code Root Fixup: 0.4 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.4 ms]
   [Other: 3.0 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 2.0 ms]
      [Ref Enq: 0.1 ms]
      [Redirty Cards: 0.5 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.1 ms]
   [Eden: 23.0M(23.0M)->0.0B(216.0M) Survivors: 3072.0K->4096.0K Heap: 
25.8M(368.0M)->5120.0K(368.0M)]
 [Times: user=0.03 sys=0.00, real=0.01 secs]
3.070: [GC pause (Metadata GC Threshold) (young) (initial-mark), 0.0228356 secs]
   [Parallel Time: 10.6 ms, GC Workers: 4]
      [GC Worker Start (ms): Min: 3070.7, Avg: 3071.2, Max: 3072.8, Diff: 2.1]
      [Ext Root Scanning (ms): Min: 0.1, Avg: 1.7, Max: 2.5, Diff: 2.4, Sum: 
6.9]
      [Update RS (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 0.3]
         [Processed Buffers: Min: 0, Avg: 0.2, Max: 1, Diff: 1, Sum: 1]
      [Scan RS (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.2]
      [Code Root Scanning (ms): Min: 0.0, Avg: 1.1, Max: 2.2, Diff: 2.2, Sum: 
4.4]
      [Object Copy (ms): Min: 5.7, Avg: 6.9, Max: 7.8, Diff: 2.1, Sum: 27.5]
      [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
      [GC Worker Total (ms): Min: 8.3, Avg: 9.9, Max: 10.4, Diff: 2.1, Sum: 
39.5]
      [GC Worker End (ms): Min: 3081.1, Avg: 3081.1, Max: 3081.1, Diff: 0.0]
   [Code Root Fixup: 0.8 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.6 ms]
   [Other: 10.9 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 9.4 ms]
      [Ref Enq: 0.1 ms]
      [Redirty Cards: 0.5 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.4 ms]
   [Eden: 97.0M(216.0M)->0.0B(204.0M) Survivors: 4096.0K->16.0M Heap: 
102.0M(368.0M)->16.5M(368.0M)]
 [Times: user=0.04 sys=0.01, real=0.03 secs]
3.094: [GC concurrent-root-region-scan-start]
3.106: [GC concurrent-root-region-scan-end, 0.0122353 secs]
3.106: [GC concurrent-mark-start]
3.107: [GC concurrent-mark-end, 0.0008803 secs]
3.107: [GC remark 3.108: [Finalize Marking, 0.0010414 secs] 3.109: [GC 
ref-proc, 0.0001344 secs] 3.109: [Unloading, 0.0099256 secs], 0.0116719 secs]
 [Times: user=0.02 sys=0.00, real=0.01 secs]
3.120: [GC cleanup 17M->15M(368M), 0.0033051 secs]
 [Times: user=0.00 sys=0.00, real=0.01 secs]
3.123: [GC concurrent-cleanup-start]
3.123: [GC concurrent-cleanup-end, 0.0000432 secs]
5.768: [GC pause (G1 Evacuation Pause) (young), 0.0419872 secs]
   [Parallel Time: 32.1 ms, GC Workers: 4]
      [GC Worker Start (ms): Min: 5768.7, Avg: 5774.6, Max: 5776.9, Diff: 8.2]
      [Ext Root Scanning (ms): Min: 2.2, Avg: 7.9, Max: 17.6, Diff: 15.4, Sum: 
31.5]
      [Update RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Processed Buffers: Min: 0, Avg: 0.2, Max: 1, Diff: 1, Sum: 1]
      [Scan RS (ms): Min: 0.0, Avg: 0.2, Max: 0.4, Diff: 0.4, Sum: 0.7]
      [Code Root Scanning (ms): Min: 0.0, Avg: 1.3, Max: 2.6, Diff: 2.6, Sum: 
5.0]
      [Object Copy (ms): Min: 14.2, Avg: 16.6, Max: 18.9, Diff: 4.7, Sum: 66.4]
      [Termination (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 0.5]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.0, Sum: 0.1]
      [GC Worker Total (ms): Min: 23.8, Avg: 26.1, Max: 32.0, Diff: 8.2, Sum: 
104.2]
      [GC Worker End (ms): Min: 5800.7, Avg: 5800.7, Max: 5800.7, Diff: 0.0]
   [Code Root Fixup: 0.9 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.2 ms]
   [Other: 8.8 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 7.8 ms]
      [Ref Enq: 0.1 ms]
      [Redirty Cards: 0.2 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.3 ms]
   [Eden: 204.0M(204.0M)->0.0B(200.0M) Survivors: 16.0M->20.0M Heap: 
219.5M(368.0M)->20.0M(368.0M)]
 [Times: user=0.11 sys=0.01, real=0.04 secs]
2016-01-08 09:26:10 Running Dag: dag_1452091205505_0030_1
6.426: [GC pause (Metadata GC Threshold) (young) (initial-mark), 0.0310678 secs]
   [Parallel Time: 24.8 ms, GC Workers: 4]
      [GC Worker Start (ms): Min: 6426.8, Avg: 6428.0, Max: 6431.4, Diff: 4.6]
      [Ext Root Scanning (ms): Min: 0.1, Avg: 3.4, Max: 5.7, Diff: 5.6, Sum: 
13.7]
      [Update RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Processed Buffers: Min: 0, Avg: 0.0, Max: 0, Diff: 0, Sum: 0]
      [Scan RS (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.2]
      [Code Root Scanning (ms): Min: 0.0, Avg: 1.1, Max: 2.1, Diff: 2.1, Sum: 
4.4]
      [Object Copy (ms): Min: 17.9, Avg: 18.8, Max: 19.5, Diff: 1.6, Sum: 75.1]
      [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.0, Sum: 0.2]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.0, Sum: 0.2]
      [GC Worker Total (ms): Min: 20.1, Avg: 23.5, Max: 24.6, Diff: 4.6, Sum: 
93.8]
      [GC Worker End (ms): Min: 6451.5, Avg: 6451.5, Max: 6451.5, Diff: 0.0]
   [Code Root Fixup: 1.2 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.6 ms]
   [Other: 4.5 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 3.0 ms]
      [Ref Enq: 0.0 ms]
      [Redirty Cards: 0.6 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.2 ms]
   [Eden: 46.0M(200.0M)->0.0B(206.0M) Survivors: 20.0M->14.0M Heap: 
65.9M(368.0M)->13.5M(368.0M)]
 [Times: user=0.09 sys=0.01, real=0.03 secs]
6.458: [GC concurrent-root-region-scan-start]
6.486: [GC concurrent-root-region-scan-end, 0.0282857 secs]
6.486: [GC concurrent-mark-start]
6.487: [GC concurrent-mark-end, 0.0004830 secs]
6.488: [GC remark 6.488: [Finalize Marking, 0.0004329 secs] 6.488: [GC 
ref-proc, 0.0002030 secs] 6.488: [Unloading, 0.0107436 secs], 0.0122303 secs]
 [Times: user=0.04 sys=0.00, real=0.01 secs]
6.500: [GC cleanup 14M->14M(368M), 0.0027615 secs]
 [Times: user=0.00 sys=0.01, real=0.00 secs]
6.503: [GC concurrent-cleanup-start]
6.503: [GC concurrent-cleanup-end, 0.0000442 secs]
2016-01-08 09:26:12 Completed Dag: dag_1452091205505_0030_1
Heap
 garbage-first heap   total 376832K, used 140799K [0x00000006f3400000, 
0x00000006f3500b80, 0x00000007c0000000)
  region size 1024K, 139 young (142336K), 14 survivors (14336K)
 Metaspace       used 42234K, capacity 42538K, committed 43008K, reserved 
1087488K
  class space    used 4993K, capacity 5067K, committed 5120K, reserved 1048576K
End of LogType:stdout

LogType:syslog
Log Upload Time:Fri Jan 08 09:37:16 +0100 2016
LogLength:10261
Log Contents:
2016-01-08 09:26:07,147 INFO [main] app.DAGAppMaster: Created DAGAppMaster for 
application appattempt_1452091205505_0030_000001, versionInfo=[ 
component=tez-dag, version=0.7.0.2.3.0.0-2557, 
revision=ab9a0295b8de3f8711cf2bd4d837cd0b66be7cfa, 
SCM-URL=scm:git:https://git-wip-us.apache.org/repos/asf/tez.git, 
buildTime=20150714-0943 ]
2016-01-08 09:26:07,181 INFO [main] app.DAGAppMaster: Comparing client version 
with AM version, clientVersion=0.7.0.2.3.0.0-2557, AMVersion=0.7.0.2.3.0.0-2557
2016-01-08 09:26:08,093 INFO [main] app.DAGAppMaster: Adding session token to 
jobTokenSecretManager for application
2016-01-08 09:26:08,101 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.rm.container.AMContainerEventType for class 
org.apache.tez.dag.app.rm.container.AMContainerMap
2016-01-08 09:26:08,102 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.rm.node.AMNodeEventType for class 
org.apache.tez.dag.app.rm.node.AMNodeTracker
2016-01-08 09:26:08,104 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.dag.event.DAGAppMasterEventType for class 
org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterEventHandler
2016-01-08 09:26:08,105 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.dag.event.DAGEventType for class 
org.apache.tez.dag.app.DAGAppMaster$DagEventDispatcher
2016-01-08 09:26:08,105 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.dag.event.VertexEventType for class 
org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher
2016-01-08 09:26:08,106 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.dag.event.TaskEventType for class 
org.apache.tez.dag.app.DAGAppMaster$TaskEventDispatcher
2016-01-08 09:26:08,107 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.dag.event.TaskAttemptEventType for class 
org.apache.tez.dag.app.DAGAppMaster$TaskAttemptEventDispatcher
2016-01-08 09:26:08,109 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.dag.event.SpeculatorEventType for independent dispatch 
using: class org.apache.tez.dag.app.DAGAppMaster$SpeculatorEventHandler
2016-01-08 09:26:08,109 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.dag.event.SpeculatorEventType for class 
org.apache.tez.dag.app.DAGAppMaster$SpeculatorEventHandler
2016-01-08 09:26:08,184 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.rm.AMSchedulerEventType for class 
org.apache.tez.dag.app.rm.TaskSchedulerEventHandler
2016-01-08 09:26:08,187 INFO [main] common.AsyncDispatcher: Registering class 
org.apache.tez.dag.app.rm.NMCommunicatorEventType for class 
org.apache.tez.dag.app.launcher.ContainerLauncherImpl
2016-01-08 09:26:08,344 INFO [main] node.AMNodeTracker: blacklistDisablePercent 
is 33, blacklistingEnabled: true, maxTaskFailuresPerNode: 10
2016-01-08 09:26:08,347 INFO [main] web.WebUIService: Tez UI History URL: 
http://hdp-master1.something.com:8080/#/main/views/TEZ/0.7.0.2.3.0.0-236/TEZ_CLUSTER_INSTANCE?viewPath=%2F%23%2Ftez-app%2Fapplication_1452091205505_0030
2016-01-08 09:26:08,349 INFO [main] launcher.ContainerLauncherImpl: Upper limit 
on the thread pool size is 500
2016-01-08 09:26:08,349 INFO [main] history.HistoryEventHandler: Initializing 
HistoryEventHandler
2016-01-08 09:26:08,369 INFO [main] ats.ATSHistoryLoggingService: Initializing 
ATSService
2016-01-08 09:26:08,932 INFO [main] impl.TimelineClientImpl: Timeline service 
address: http://hdp-master1.something.com:8188/ws/v1/timeline/
2016-01-08 09:26:08,932 INFO [main] ats.ATSHistoryLoggingService: Using 
org.apache.tez.dag.history.ats.acls.ATSHistoryACLPolicyManager to manage 
Timeline ACLs
2016-01-08 09:26:09,126 INFO [main] impl.TimelineClientImpl: Timeline service 
address: http://hdp-master1.something.com:8188/ws/v1/timeline/
2016-01-08 09:26:09,126 INFO [main] recovery.RecoveryService: Initializing 
RecoveryService
2016-01-08 09:26:09,130 INFO [main] history.HistoryEventHandler: 
[HISTORY][DAG:N/A][Event:APP_LAUNCHED]: 
applicationId=application_1452091205505_0030, appSubmitTime=1452241564250, 
launchTime=1452241567142
2016-01-08 09:26:09,131 INFO [main] history.HistoryEventHandler: 
[HISTORY][DAG:N/A][Event:AM_LAUNCHED]: 
appAttemptId=appattempt_1452091205505_0030_000001, appSubmitTime=1452241564250, 
launchTime=1452241567142
2016-01-08 09:26:09,141 INFO 
[ServiceThread:org.apache.tez.dag.history.HistoryEventHandler] 
history.HistoryEventHandler: Starting HistoryEventHandler
2016-01-08 09:26:09,142 INFO 
[ServiceThread:org.apache.tez.dag.history.HistoryEventHandler] 
ats.ATSHistoryLoggingService: Starting ATSService
2016-01-08 09:26:09,152 INFO 
[ServiceThread:org.apache.tez.dag.history.HistoryEventHandler] 
recovery.RecoveryService: Starting RecoveryService
2016-01-08 09:26:09,153 INFO 
[ServiceThread:org.apache.tez.dag.app.launcher.ContainerLauncherImpl] 
impl.ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
2016-01-08 09:26:09,156 INFO 
[ServiceThread:org.apache.tez.dag.app.TaskAttemptListenerImpTezDag] 
ipc.CallQueueManager: Using callQueue class 
java.util.concurrent.LinkedBlockingQueue
2016-01-08 09:26:09,156 INFO [ServiceThread:DAGClientRPCServer] 
ipc.CallQueueManager: Using callQueue class 
java.util.concurrent.LinkedBlockingQueue
2016-01-08 09:26:09,179 INFO [Socket Reader #1 for port 55777] ipc.Server: 
Starting Socket Reader #1 for port 55777
2016-01-08 09:26:09,186 INFO [Socket Reader #1 for port 48707] ipc.Server: 
Starting Socket Reader #1 for port 48707
2016-01-08 09:26:09,228 INFO [IPC Server Responder] ipc.Server: IPC Server 
Responder: starting
2016-01-08 09:26:09,232 INFO [IPC Server listener on 55777] ipc.Server: IPC 
Server listener on 55777: starting
2016-01-08 09:26:09,253 INFO [IPC Server Responder] ipc.Server: IPC Server 
Responder: starting
2016-01-08 09:26:09,254 INFO [ServiceThread:DAGClientRPCServer] 
client.DAGClientServer: Instantiated DAGClientRPCServer at 
hdp-node3.something.com/10.4.31.59:48707
2016-01-08 09:26:09,254 INFO [IPC Server listener on 48707] ipc.Server: IPC 
Server listener on 48707: starting
2016-01-08 09:26:09,354 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-01-08 09:26:09,369 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] 
server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, 
falling back to use random secrets.
2016-01-08 09:26:09,377 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] http.HttpRequestLog: 
Http request log for http.requests. is not defined
2016-01-08 09:26:09,387 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] http.HttpServer2: Added 
global filter 'safety' 
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-08 09:26:09,394 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] http.HttpServer2: Added 
filter AM_PROXY_FILTER 
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context
2016-01-08 09:26:09,394 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] http.HttpServer2: Added 
filter AM_PROXY_FILTER 
(class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context 
static
2016-01-08 09:26:09,402 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] http.HttpServer2: 
adding path spec: /*
2016-01-08 09:26:09,418 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] http.HttpServer2: Jetty 
bound to port 52136
2016-01-08 09:26:09,418 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] mortbay.log: 
jetty-6.1.26.hwx
2016-01-08 09:26:09,489 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] mortbay.log: Extract 
jar:file:/hadoop/hadoop/yarn/local/filecache/11/tez.tar.gz/lib/hadoop-yarn-common-2.7.1.2.3.0.0-2557.jar!/webapps/
 to /tmp/Jetty_0_0_0_0_52136_webapps____hhonjy/webapp
2016-01-08 09:26:09,880 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] mortbay.log: Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:52136
2016-01-08 09:26:09,881 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] webapp.WebApps: Web app 
/ started at 52136
2016-01-08 09:26:10,217 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] webapp.WebApps: 
Registered webapp guice modules
2016-01-08 09:26:10,219 INFO 
[ServiceThread:org.apache.tez.dag.app.web.WebUIService] web.WebUIService: 
Instantiated WebUIService at http://hdp-node3.something.com:52136/ui/
2016-01-08 09:26:10,261 INFO 
[ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerEventHandler] 
rm.YarnTaskSchedulerService: TaskScheduler initialized with configuration: 
maxRMHeartbeatInterval: 250, containerReuseEnabled: true, reuseRackLocal: true, 
reuseNonLocal: false, localitySchedulingDelay: 250, preemptionPercentage: 10, 
numHeartbeatsBetweenPreemptions: 3, idleContainerMinTimeout: 10000, 
idleContainerMaxTimeout: 20000, sessionMinHeldContainers: 0
2016-01-08 09:26:10,317 INFO 
[ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerEventHandler] 
client.RMProxy: Connecting to ResourceManager at 
hdp-master1.something.com/10.4.31.56:8030
2016-01-08 09:26:10,401 INFO [main] history.HistoryEventHandler: 
[HISTORY][DAG:N/A][Event:AM_STARTED]: 
appAttemptId=appattempt_1452091205505_0030_000001, startTime=1452241570399
2016-01-08 09:26:10,401 INFO [main] app.DAGAppMaster: In Session mode. Waiting 
for DAG over RPC
2016-01-08 09:26:10,451 INFO [AMRM Callback Handler Thread] 
rm.YarnTaskSchedulerService: App total resource memory: 65536 cpu: 1 
taskAllocations: 0
2016-01-08 09:26:10,457 INFO [Dispatcher thread: Central] node.AMNodeTracker: 
Num cluster nodes = 3
2016-01-08 09:26:10,696 INFO [IPC Server handler 0 on 48707] app.DAGAppMaster: 
Starting DAG submitted via RPC: 
hive_20160108092603_9b21f8a6-0a4a-45aa-9006-2e88328cd361:32
2016-01-08 09:26:10,924 INFO [IPC Server handler 0 on 48707] app.DAGAppMaster: 
Generating DAG graphviz file, dagId=dag_1452091205505_0030_1, 
filePath=/hadoop/hadoop/yarn/log/application_1452091205505_0030/container_e09_1452091205505_0030_01_000001/dag_1452091205505_0030_1.dot
2016-01-08 09:26:10,929 INFO [IPC Server handler 0 on 48707] 
common.TezUtilsInternal: Redirecting log file based on addend: 
dag_1452091205505_0030_1
End of LogType:syslog

LogType:syslog_dag_1452091205505_0030_1
Log Upload Time:Fri Jan 08 09:37:16 +0100 2016
LogLength:30256
Log Contents:
2016-01-08 09:26:10,930 INFO [IPC Server handler 0 on 48707] app.DAGAppMaster: 
Running DAG: hive_20160108092603_9b21f8a6-0a4a-45aa-9006-2e88328cd361:32
2016-01-08 09:26:11,261 INFO [IPC Server handler 0 on 48707] 
history.HistoryEventHandler: 
[HISTORY][DAG:dag_1452091205505_0030_1][Event:DAG_SUBMITTED]: 
dagID=dag_1452091205505_0030_1, submitTime=1452241570697
2016-01-08 09:26:11,332 WARN [HistoryEventHandlingThread] 
ats.ATSHistoryLoggingService: Could not post history event to ATS, 
atsPutError=6, entityId=dag_1452091205505_0030_1
2016-01-08 09:26:11,338 WARN [HistoryEventHandlingThread] 
ats.ATSHistoryLoggingService: Could not post history event to ATS, 
atsPutError=6, entityId=dag_1452091205505_0030_1
2016-01-08 09:26:11,368 INFO [IPC Server handler 0 on 48707] impl.VertexImpl: 
setting additional outputs for vertex Map 1
2016-01-08 09:26:11,370 INFO [IPC Server handler 0 on 48707] impl.DAGImpl: 
Using DAG Scheduler: org.apache.tez.dag.app.dag.impl.DAGSchedulerNaturalOrder
2016-01-08 09:26:11,373 INFO [IPC Server handler 0 on 48707] 
history.HistoryEventHandler: 
[HISTORY][DAG:dag_1452091205505_0030_1][Event:DAG_INITIALIZED]: 
dagID=dag_1452091205505_0030_1, initTime=1452241571309
2016-01-08 09:26:11,374 INFO [IPC Server handler 0 on 48707] impl.DAGImpl: 
dag_1452091205505_0030_1 transitioned from NEW to INITED
2016-01-08 09:26:11,378 INFO [Dispatcher thread: Central] 
history.HistoryEventHandler: 
[HISTORY][DAG:dag_1452091205505_0030_1][Event:DAG_STARTED]: 
dagID=dag_1452091205505_0030_1, startTime=1452241571377
2016-01-08 09:26:11,379 INFO [Dispatcher thread: Central] impl.DAGImpl: Added 
additional resources : [[]] to classpath
2016-01-08 09:26:11,384 INFO [Dispatcher thread: Central] impl.DAGImpl: 
dag_1452091205505_0030_1 transitioned from INITED to RUNNING
2016-01-08 09:26:11,385 INFO [Dispatcher thread: Central] impl.VertexImpl: Root 
Inputs exist for Vertex: Map 1 : {tablename={InputName=tablename}, 
{Descriptor=ClassName=org.apache.tez.mapreduce.input.MRInputLegacy, 
hasPayload=true}, 
{ControllerDescriptor=ClassName=org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator,
 hasPayload=false}}
2016-01-08 09:26:11,386 INFO [Dispatcher thread: Central] impl.VertexImpl: 
Starting root input initializer for input: tablename, with class: 
[org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator]
2016-01-08 09:26:11,386 INFO [Dispatcher thread: Central] impl.VertexImpl: 
Setting vertexManager to RootInputVertexManager for 
vertex_1452091205505_0030_1_00 [Map 1]
2016-01-08 09:26:11,402 INFO [Dispatcher thread: Central] impl.VertexImpl: Num 
tasks is -1. Expecting VertexManager/InputInitializers/1-1 split to set #tasks 
for the vertex vertex_1452091205505_0030_1_00 [Map 1]
2016-01-08 09:26:11,402 INFO [Dispatcher thread: Central] impl.VertexImpl: 
Vertex will initialize from input initializer. vertex_1452091205505_0030_1_00 
[Map 1]
2016-01-08 09:26:11,405 INFO [Dispatcher thread: Central] impl.VertexImpl: 
Vertex will initialize via inputInitializers vertex_1452091205505_0030_1_00 
[Map 1]. Starting root input initializers: 1
2016-01-08 09:26:11,454 INFO [Dispatcher thread: Central] 
Configuration.deprecation: mapred.committer.job.setup.cleanup.needed is 
deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed
2016-01-08 09:26:11,456 INFO [Dispatcher thread: Central] 
Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, 
use mapreduce.input.fileinputformat.input.dir.recursive
2016-01-08 09:26:11,674 INFO [Dispatcher thread: Central] exec.Utilities: PLAN 
PATH = 
hdfs://hdp-master1.something.com:8020/tmp/hive/hive/ce1315c6-a203-459a-a5d2-116a46a1c14e/hive_2016-01-08_09-26-03_418_3119952734722954293-34/hive/_tez_scratch_dir/919e0d70-60c0-4f67-9199-99b02a9c67e2/map.xml
2016-01-08 09:26:11,674 INFO [Dispatcher thread: Central] exec.Utilities: 
***************non-local mode***************
2016-01-08 09:26:11,675 INFO [Dispatcher thread: Central] exec.Utilities: local 
path = 
hdfs://hdp-master1.something.com:8020/tmp/hive/hive/ce1315c6-a203-459a-a5d2-116a46a1c14e/hive_2016-01-08_09-26-03_418_3119952734722954293-34/hive/_tez_scratch_dir/919e0d70-60c0-4f67-9199-99b02a9c67e2/map.xml
2016-01-08 09:26:11,691 INFO [Dispatcher thread: Central] log.PerfLogger: 
<PERFLOG method=deserializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-01-08 09:26:11,691 INFO [Dispatcher thread: Central] exec.Utilities: 
Deserializing MapWork via kryo
2016-01-08 09:26:12,120 INFO [Dispatcher thread: Central] log.PerfLogger: 
</PERFLOG method=deserializePlan start=1452241571690 end=1452241572120 
duration=430 from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-01-08 09:26:12,134 INFO [InputInitializer [Map 1] #0] 
dag.RootInputInitializerManager: Starting InputInitializer for Input: tablename 
on vertex vertex_1452091205505_0030_1_00 [Map 1]
2016-01-08 09:26:12,154 INFO [Dispatcher thread: Central] impl.VertexImpl: 
vertex_1452091205505_0030_1_00 [Map 1] transitioned from NEW to INITIALIZING 
due to event V_INIT
2016-01-08 09:26:12,167 INFO [InputInitializer [Map 1] #0] 
tez.HiveSplitGenerator: The preferred split size is 16777216
2016-01-08 09:26:12,168 INFO [InputInitializer [Map 1] #0] log.PerfLogger: 
<PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.HiveInputFormat>
2016-01-08 09:26:12,169 INFO [InputInitializer [Map 1] #0] exec.Utilities: PLAN 
PATH = 
hdfs://hdp-master1.something.com:8020/tmp/hive/hive/ce1315c6-a203-459a-a5d2-116a46a1c14e/hive_2016-01-08_09-26-03_418_3119952734722954293-34/hive/_tez_scratch_dir/919e0d70-60c0-4f67-9199-99b02a9c67e2/map.xml
2016-01-08 09:26:12,171 INFO [InputInitializer [Map 1] #0] exec.Utilities: 
Processing alias tablename
2016-01-08 09:26:12,171 INFO [InputInitializer [Map 1] #0] exec.Utilities: 
Adding input file 
hdfs://hdp-master1.something.com:8020/apps/hive/warehouse/archive.db/tablename
2016-01-08 09:26:12,199 INFO [InputInitializer [Map 1] #0] io.HiveInputFormat: 
hive.io.file.readcolumn.ids=1,6
2016-01-08 09:26:12,199 INFO [InputInitializer [Map 1] #0] io.HiveInputFormat: 
hive.io.file.readcolumn.names=messageid,originalxml
2016-01-08 09:26:12,199 INFO [InputInitializer [Map 1] #0] io.HiveInputFormat: 
Generating splits
2016-01-08 09:26:12,238 INFO [InputInitializer [Map 1] #0] log.PerfLogger: 
<PERFLOG method=OrcGetSplits from=org.apache.hadoop.hive.ql.io.orc.ReaderImpl>
2016-01-08 09:26:12,241 INFO [InputInitializer [Map 1] #0] 
Configuration.deprecation: mapred.input.dir is deprecated. Instead, use 
mapreduce.input.fileinputformat.inputdir
2016-01-08 09:26:12,605 WARN [ORC_GET_SPLITS #0] ipc.Client: interrupted 
waiting to send rpc request to server
java.lang.InterruptedException
        at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
        at java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at 
org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1057)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.Client.call(Client.java:1358)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
        at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
        at 
org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:358)
        at 
org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:314)
        at 
org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:237)
        at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:924)
        at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:836)
        at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
2016-01-08 09:26:12,615 INFO [InputInitializer [Map 1] #0] 
dag.RootInputInitializerManager: Failed InputInitializer for Input: tablename 
on vertex vertex_1452091205505_0030_1_00 [Map 1]
2016-01-08 09:26:12,613 WARN [ORC_GET_SPLITS #7] ipc.Client: interrupted 
waiting to send rpc request to server
java.lang.InterruptedException
        at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
        at java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1057)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.Client.call(Client.java:1358)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:358)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:314)
        at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:237)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:924)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:836)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
2016-01-08 09:26:12,613 WARN [ORC_GET_SPLITS #1] ipc.Client: interrupted waiting to send rpc request to server
java.lang.InterruptedException
        at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
        at java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1057)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.Client.call(Client.java:1358)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:358)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:314)
        at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:237)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:924)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:836)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
2016-01-08 09:26:12,609 WARN [ORC_GET_SPLITS #4] ipc.Client: interrupted waiting to send rpc request to server
java.lang.InterruptedException
        at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
        at java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1057)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.Client.call(Client.java:1358)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:358)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:314)
        at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:237)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:924)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:836)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
2016-01-08 09:26:12,627 ERROR [Dispatcher thread: Central] impl.VertexImpl: Vertex Input: tablename initializer failed, vertex=vertex_1452091205505_0030_1_00 [Map 1]
org.apache.tez.dag.app.dag.impl.AMUserCodeException: java.lang.RuntimeException: serious problem
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallback.onFailure(RootInputInitializerManager.java:291)
        at com.google.common.util.concurrent.Futures$4.run(Futures.java:1140)
        at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
        at com.google.common.util.concurrent.ExecutionList$RunnableExecutorPair.execute(ExecutionList.java:150)
        at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:135)
        at com.google.common.util.concurrent.ListenableFutureTask.done(ListenableFutureTask.java:91)
        at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:384)
        at java.util.concurrent.FutureTask.setException(FutureTask.java:251)
        at java.util.concurrent.FutureTask.run(FutureTask.java:271)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: serious problem
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:306)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:408)
        at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:155)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        ... 3 more
Caused by: java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException: Index: 0
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1016)
        ... 15 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0
        at java.util.Collections$EmptyList.get(Collections.java:4454)
        at org.apache.hadoop.hive.ql.io.orc.OrcProto$Type.getSubtypes(OrcProto.java:12240)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getColumnIndicesFromNames(ReaderImpl.java:649)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getRawDataSizeOfColumns(ReaderImpl.java:632)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:927)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:836)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)
        ... 4 more
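For what it's worth, the innermost frames (java.util.Collections$EmptyList.get reached via OrcProto$Type.getSubtypes) show the split generator indexing into an empty subtype list while computing raw column sizes, i.e. a type entry in some file's footer carries no subtypes. A minimal sketch of that failure mode, assuming nothing about the actual footer contents (the class and method names here are illustrative, not Hive code):

```java
import java.util.Collections;
import java.util.List;

public class EmptySubtypesDemo {
    // Mimics what getSubtypes(0) does on a type whose subtypes list is empty:
    // indexing into Collections.emptyList() throws IndexOutOfBoundsException,
    // which then surfaces as the "serious problem" RuntimeException above.
    static String failOnEmptySubtypes() {
        List<Integer> subtypes = Collections.emptyList(); // schema entry with no children
        try {
            return "subtype=" + subtypes.get(0);          // never reached
        } catch (IndexOutOfBoundsException e) {
            return e.getMessage();                        // "Index: 0"
        }
    }

    public static void main(String[] args) {
        System.out.println(failOnEmptySubtypes());
    }
}
```

If this reading is right, the trigger is the schema/footer of one of the files under the table directory rather than the row count itself, which might be why the failure appears only after more part files are loaded.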
2016-01-08 09:26:12,643 INFO [Dispatcher thread: Central] impl.VertexImpl: Invoking committer abort for vertex, vertexId=vertex_1452091205505_0030_1_00 [Map 1]
2016-01-08 09:26:12,700 INFO [Dispatcher thread: Central] history.HistoryEventHandler: [HISTORY][DAG:dag_1452091205505_0030_1][Event:VERTEX_FINISHED]: vertexName=Map 1, vertexId=vertex_1452091205505_0030_1_00, initRequestedTime=1452241571385, initedTime=0, startRequestedTime=1452241572166, startedTime=0, finishTime=1452241572645, timeTaken=1452241572645, status=FAILED, diagnostics=Vertex vertex_1452091205505_0030_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE
Vertex Input: tablename initializer failed, vertex=vertex_1452091205505_0030_1_00 [Map 1], java.lang.RuntimeException: serious problem
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:306)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:408)
        at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:155)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException: Index: 0
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1016)
        ... 15 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0
        at java.util.Collections$EmptyList.get(Collections.java:4454)
        at org.apache.hadoop.hive.ql.io.orc.OrcProto$Type.getSubtypes(OrcProto.java:12240)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getColumnIndicesFromNames(ReaderImpl.java:649)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getRawDataSizeOfColumns(ReaderImpl.java:632)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:927)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:836)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)
        ... 4 more
, counters=Counters: 0, vertexStats=firstTaskStartTime=-1, firstTasksToStart=[  ], lastTaskFinishTime=-1, lastTasksToFinish=[  ], minTaskDuration=-1, maxTaskDuration=-1, avgTaskDuration=-1.0, numSuccessfulTasks=0, shortestDurationTasks=[  ], longestDurationTasks=[  ], vertexTaskStats={numFailedTaskAttempts=0, numKilledTaskAttempts=0, numCompletedTasks=0, numSucceededTasks=0, numKilledTasks=0, numFailedTasks=0}
2016-01-08 09:26:12,701 INFO [Dispatcher thread: Central] impl.VertexImpl: vertex_1452091205505_0030_1_00 [Map 1] transitioned from INITIALIZING to FAILED due to event V_ROOT_INPUT_FAILED
2016-01-08 09:26:12,703 INFO [Dispatcher thread: Central] impl.DAGImpl: Vertex vertex_1452091205505_0030_1_00 [Map 1] completed., numCompletedVertices=1, numSuccessfulVertices=0, numFailedVertices=1, numKilledVertices=0, numVertices=1
2016-01-08 09:26:12,703 INFO [Dispatcher thread: Central] impl.DAGImpl: Checking vertices for DAG completion, numCompletedVertices=1, numSuccessfulVertices=0, numFailedVertices=1, numKilledVertices=0, numVertices=1, commitInProgress=0, terminationCause=VERTEX_FAILURE
2016-01-08 09:26:12,703 INFO [Dispatcher thread: Central] impl.DAGImpl: DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
2016-01-08 09:26:12,720 INFO [ORC_GET_SPLITS #9] orc.OrcInputFormat: ORC pushdown predicate: leaf-0 = (EQUALS messageid d0b3c872-435d-499b-a65c-619d9e732bbb)
expr = leaf-0
2016-01-08 09:26:12,720 INFO [ORC_GET_SPLITS #2] orc.OrcInputFormat: ORC pushdown predicate: leaf-0 = (EQUALS messageid d0b3c872-435d-499b-a65c-619d9e732bbb)
expr = leaf-0
2016-01-08 09:26:12,721 INFO [ORC_GET_SPLITS #6] orc.OrcInputFormat: ORC pushdown predicate: leaf-0 = (EQUALS messageid d0b3c872-435d-499b-a65c-619d9e732bbb)
expr = leaf-0
2016-01-08 09:26:12,722 INFO [ORC_GET_SPLITS #3] orc.OrcInputFormat: ORC pushdown predicate: leaf-0 = (EQUALS messageid d0b3c872-435d-499b-a65c-619d9e732bbb)
expr = leaf-0
2016-01-08 09:26:12,720 INFO [ORC_GET_SPLITS #5] orc.OrcInputFormat: ORC pushdown predicate: leaf-0 = (EQUALS messageid d0b3c872-435d-499b-a65c-619d9e732bbb)
expr = leaf-0
2016-01-08 09:26:12,723 INFO [ORC_GET_SPLITS #8] orc.OrcInputFormat: ORC pushdown predicate: leaf-0 = (EQUALS messageid d0b3c872-435d-499b-a65c-619d9e732bbb)
expr = leaf-0
2016-01-08 09:26:12,801 INFO [Dispatcher thread: Central] counters.Limits: Counter limits initialized with parameters:  GROUP_NAME_MAX=256, MAX_GROUPS=1000, COUNTER_NAME_MAX=64, MAX_COUNTERS=2000
2016-01-08 09:26:12,887 INFO [Dispatcher thread: Central] recovery.RecoveryService: DAG completed, dagId=dag_1452091205505_0030_1, queueSize=0
2016-01-08 09:26:12,905 INFO [Dispatcher thread: Central] history.HistoryEventHandler: [HISTORY][DAG:dag_1452091205505_0030_1][Event:DAG_FINISHED]: dagId=dag_1452091205505_0030_1, startTime=1452241571377, finishTime=1452241572808, timeTaken=1431, status=FAILED, diagnostics=Vertex failed, vertexName=Map 1, vertexId=vertex_1452091205505_0030_1_00, diagnostics=[Vertex vertex_1452091205505_0030_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: tablename initializer failed, vertex=vertex_1452091205505_0030_1_00 [Map 1], java.lang.RuntimeException: serious problem
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:306)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:408)
        at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:155)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException: Index: 0
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1016)
        ... 15 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0
        at java.util.Collections$EmptyList.get(Collections.java:4454)
        at org.apache.hadoop.hive.ql.io.orc.OrcProto$Type.getSubtypes(OrcProto.java:12240)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getColumnIndicesFromNames(ReaderImpl.java:649)
        at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getRawDataSizeOfColumns(ReaderImpl.java:632)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:927)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:836)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)
        ... 4 more
]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0, counters=Counters: 2, org.apache.tez.common.counters.DAGCounter, AM_CPU_MILLISECONDS=4460, AM_GC_TIME_MILLIS=32
2016-01-08 09:26:12,905 INFO [Dispatcher thread: Central] impl.VertexImpl: Ignoring multiple aborts for vertex: vertex_1452091205505_0030_1_00 [Map 1]
2016-01-08 09:26:12,906 INFO [Dispatcher thread: Central] impl.DAGImpl: DAG: dag_1452091205505_0030_1 finished with state: FAILED
2016-01-08 09:26:12,906 INFO [Dispatcher thread: Central] impl.DAGImpl: dag_1452091205505_0030_1 transitioned from RUNNING to FAILED
2016-01-08 09:26:12,907 INFO [Dispatcher thread: Central] app.DAGAppMaster: DAG completed, dagId=dag_1452091205505_0030_1, dagState=FAILED
2016-01-08 09:26:12,907 INFO [Dispatcher thread: Central] common.TezUtilsInternal: Redirecting log file based on addend: dag_1452091205505_0030_1_post
End of LogType:syslog_dag_1452091205505_0030_1

LogType:syslog_dag_1452091205505_0030_1_post
Log Upload Time:Fri Jan 08 09:37:16 +0100 2016
LogLength:4819
Log Contents:
2016-01-08 09:26:12,908 INFO [Dispatcher thread: Central] app.DAGAppMaster: Central Dispatcher queue size after DAG completion, before cleanup: 0
2016-01-08 09:26:12,908 INFO [Dispatcher thread: Central] app.DAGAppMaster: Waiting for next DAG to be submitted.
2016-01-08 09:26:12,911 INFO [Dispatcher thread: Central] app.DAGAppMaster: Cleaning up DAG: name=hive_20160108092603_9b21f8a6-0a4a-45aa-9006-2e88328cd361:32, with id=dag_1452091205505_0030_1
2016-01-08 09:26:12,919 INFO [Dispatcher thread: Central] app.DAGAppMaster: Completed cleanup for DAG: name=hive_20160108092603_9b21f8a6-0a4a-45aa-9006-2e88328cd361:32, with id=dag_1452091205505_0030_1
2016-01-08 09:37:10,402 INFO [Timer-1] app.DAGAppMaster: Session timed out, lastDAGCompletionTime=1452241572907 ms, sessionTimeoutInterval=600000 ms
2016-01-08 09:37:10,404 INFO [Timer-1] rm.TaskSchedulerEventHandler: TaskScheduler notified that it should unregister from RM
2016-01-08 09:37:10,405 INFO [Timer-1] app.DAGAppMaster: No current running DAG, shutting down the AM
2016-01-08 09:37:10,405 INFO [Timer-1] app.DAGAppMaster: DAGAppMasterShutdownHandler invoked
2016-01-08 09:37:10,405 INFO [Timer-1] app.DAGAppMaster: Handling DAGAppMaster shutdown
2016-01-08 09:37:10,407 INFO [AMShutdownThread] app.DAGAppMaster: Sleeping for 5 seconds before shutting down
2016-01-08 09:37:15,408 INFO [AMShutdownThread] app.DAGAppMaster: Calling stop for all the services
2016-01-08 09:37:15,411 INFO [AMShutdownThread] history.HistoryEventHandler: Stopping HistoryEventHandler
2016-01-08 09:37:15,411 INFO [AMShutdownThread] recovery.RecoveryService: Stopping RecoveryService
2016-01-08 09:37:15,411 INFO [AMShutdownThread] recovery.RecoveryService: Handle the remaining events in queue, queue size=0
2016-01-08 09:37:15,412 INFO [RecoveryEventHandlingThread] recovery.RecoveryService: EventQueue take interrupted. Returning
2016-01-08 09:37:15,412 INFO [AMShutdownThread] recovery.RecoveryService: Closing Summary Stream
2016-01-08 09:37:15,423 INFO [AMShutdownThread] ats.ATSHistoryLoggingService: Stopping ATSService, eventQueueBacklog=0
2016-01-08 09:37:15,425 INFO [DelayedContainerManager] rm.YarnTaskSchedulerService: AllocatedContainerManager Thread interrupted
2016-01-08 09:37:15,431 INFO [AMShutdownThread] rm.YarnTaskSchedulerService: Unregistering application from RM, exitStatus=SUCCEEDED, exitMessage=Session stats:submittedDAGs=1, successfulDAGs=0, failedDAGs=1, killedDAGs=0
, trackingURL=http://hdp-master1.something.com:8080/#/main/views/TEZ/0.7.0.2.3.0.0-236/TEZ_CLUSTER_INSTANCE?viewPath=%2F%23%2Ftez-app%2Fapplication_1452091205505_0030
2016-01-08 09:37:15,443 INFO [AMShutdownThread] impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
2016-01-08 09:37:15,545 INFO [AMShutdownThread] rm.YarnTaskSchedulerService: Successfully unregistered application from RM
2016-01-08 09:37:15,548 INFO [AMRM Callback Handler Thread] impl.AMRMClientAsyncImpl: Interrupted while waiting for queue
java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:287)
2016-01-08 09:37:15,576 INFO [AMShutdownThread] mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0
2016-01-08 09:37:15,580 INFO [AMShutdownThread] ipc.Server: Stopping server on 55777
2016-01-08 09:37:15,585 INFO [IPC Server listener on 55777] ipc.Server: Stopping IPC Server listener on 55777
2016-01-08 09:37:15,585 INFO [AMShutdownThread] ipc.Server: Stopping server on 48707
2016-01-08 09:37:15,586 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder
2016-01-08 09:37:15,586 INFO [IPC Server listener on 48707] ipc.Server: Stopping IPC Server listener on 48707
2016-01-08 09:37:15,587 INFO [IPC Server Responder] ipc.Server: Stopping IPC Server Responder
2016-01-08 09:37:15,592 INFO [Thread-2] app.DAGAppMaster: DAGAppMasterShutdownHook invoked
2016-01-08 09:37:15,592 INFO [Thread-2] app.DAGAppMaster: The shutdown handler is still running, waiting for it to complete
2016-01-08 09:37:15,601 INFO [AMShutdownThread] app.DAGAppMaster: Completed deletion of tez scratch data dir, path=hdfs://hdp-master1.something.com:8020/tmp/hive/hive/_tez_session_dir/76022374-fe62-4835-a80d-6f20ee57c7ba/.tez/application_1452091205505_0030
2016-01-08 09:37:15,602 INFO [AMShutdownThread] app.DAGAppMaster: Exiting DAGAppMaster..GoodBye!
2016-01-08 09:37:15,602 INFO [Thread-2] app.DAGAppMaster: The shutdown handler has completed
End of LogType:syslog_dag_1452091205505_0030_1_post





> Hive select fails - java.lang.IndexOutOfBoundsException
> -------------------------------------------------------
>
>                 Key: HIVE-12810
>                 URL: https://issues.apache.org/jira/browse/HIVE-12810
>             Project: Hive
>          Issue Type: Bug
>          Components: Beeline, CLI
>    Affects Versions: 1.2.1
>         Environment: HDP 2.3.0
>            Reporter: Matjaz Skerjanec
>
> Hadoop HDP 2.3 (Hadoop 2.7.1.2.3.0.0-2557)
> Hive 1.2.1.2.3.0.0-2557
> We are loading ORC tables into Hive with Sqoop from a HANA DB.
> Everything works fine (count and select) with e.g. 16,000,000 entries in the table, but when we load 34,000,000 entries the select query no longer works and we get the following error (select count(*) works in both cases):
> {code}
> select count(*) from tablename;
> INFO  : Session is already open
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id application_1452091205505_0032)
> INFO  : Map 1: -/-      Reducer 2: 0/1
> INFO  : Map 1: 0/96     Reducer 2: 0/1
> .
> .
> .
> INFO  : Map 1: 96/96    Reducer 2: 0(+1)/1
> INFO  : Map 1: 96/96    Reducer 2: 1/1
> +-----------+--+
> |    _c0    |
> +-----------+--+
> | 34146816  |
> +-----------+--+
> 1 row selected (45.455 seconds)
> {code}
> {code}
> select originalxml from tablename where messageid = 'd0b3c872-435d-499b-a65c-619d9e732bbb';
> 0: jdbc:hive2://10.4.zz.xx:10000/default> select originalxml from tablename where messageid = 'd0b3c872-435d-499b-a65c-619d9e732bbb';
> INFO  : Session is already open
> INFO  : Tez session was closed. Reopening...
> INFO  : Session re-established.
> INFO  :
> INFO  : Status: Running (Executing on YARN cluster with App id application_1452091205505_0032)
> INFO  : Map 1: -/-
> ERROR : Status: Failed
> ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1452091205505_0032_1_00, diagnostics=[Vertex vertex_1452091205505_0032_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: tablename initializer failed, vertex=vertex_1452091205505_0032_1_00 [Map 1], java.lang.RuntimeException: serious problem
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
>         at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:306)
>         at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:408)
>         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:155)
>         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
>         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
>         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException: Index: 0
>         at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1016)
>         ... 15 more
> Caused by: java.lang.IndexOutOfBoundsException: Index: 0
>         at java.util.Collections$EmptyList.get(Collections.java:4454)
>         at org.apache.hadoop.hive.ql.io.orc.OrcProto$Type.getSubtypes(OrcProto.java:12240)
>         at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getColumnIndicesFromNames(ReaderImpl.java:649)
>         at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getRawDataSizeOfColumns(ReaderImpl.java:632)
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:927)
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:836)
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)
>         ... 4 more
> ]
> ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 
> killedVertices:0
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, 
> vertexName=Map 1, vertexId=vertex_1452091205505_0032_1_00, 
> diagnostics=[Vertex vertex_1452091205505_0032_1_00 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: tablename initializer failed, 
> vertex=vertex_1452091205505_0032_1_00 [Map 1], java.lang.RuntimeException: 
> serious problem
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
>         at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:306)
>         at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:408)
>         at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:155)
>         at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
>         at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
>         at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IndexOutOfBoundsException: Index: 0
>         at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1016)
>         ... 15 more
> Caused by: java.lang.IndexOutOfBoundsException: Index: 0
>         at java.util.Collections$EmptyList.get(Collections.java:4454)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcProto$Type.getSubtypes(OrcProto.java:12240)
>         at 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getColumnIndicesFromNames(ReaderImpl.java:649)
>         at 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getRawDataSizeOfColumns(ReaderImpl.java:632)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:927)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:836)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)
>         ... 4 more
> ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0 
> (state=08S01,code=2)
> 0: jdbc:hive2://10.4.zz.xx:10000/default>
> {code}
> If anyone can help with this issue, I would appreciate it.
> Thanks,
> maske
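
For anyone digging into the same trace: the innermost `Caused by` (`Collections$EmptyList.get` inside `OrcProto$Type.getSubtypes`) suggests that at least one file in the table directory has an ORC footer with an empty type list, e.g. a zero-length or truncated file left behind by the load job. A rough local sketch of that check is below; the directory and file names are placeholders, not paths from this report, and on HDFS the listing would come from `hdfs dfs -ls` instead of `find`:

```shell
# Sketch: flag files too small to hold a valid ORC footer.
# TABLE_DIR is a placeholder, not a path from the original report.
TABLE_DIR="${TABLE_DIR:-/tmp/orc_check_demo}"

# Demo setup so the script is self-contained: one empty file, one non-empty.
mkdir -p "$TABLE_DIR"
: > "$TABLE_DIR/part-m-00000"                  # zero-length file
printf 'ORC...payload...' > "$TABLE_DIR/part-m-00001"

# A file of only a few bytes cannot contain even a minimal ORC footer,
# so anything this small is a prime suspect for the empty type list above.
find "$TABLE_DIR" -maxdepth 1 -type f -size -16c -print
```

If a suspect file turns up, dumping its metadata with Hive's ORC file dump utility (`hive --orcfiledump <path>`) should show whether its schema is actually empty.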



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
