Which Spark / Hadoop release are you using?
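
Asking because the fix depends on the release. An HDFS delegation token has a
fixed maximum lifetime (7 days by default), which matches what you are seeing,
and newer Spark on YARN releases can be submitted with a principal and keytab
so the application keeps obtaining fresh tokens for long-running streaming
jobs. As a rough reference only, here is a minimal Java sketch of logging a
Hadoop client in from a keytab; the principal and keytab path below are
hypothetical placeholders:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical principal and keytab path; substitute your own.
        String principal = "svc_stream@EXAMPLE.COM";
        String keytab = "/etc/security/keytabs/svc_stream.keytab";

        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // A keytab login gives the client a renewable Kerberos identity, so it
        // is not tied to a delegation token with a fixed maximum lifetime.
        UserGroupInformation.loginUserFromKeytab(principal, keytab);
        System.out.println("Logged in as " + UserGroupInformation.getLoginUser());
    }
}

Whether that is the right lever for your job depends on the release, hence the
question.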

Thanks

On Wed, Jan 6, 2016 at 12:16 PM, Nikhil Gs <gsnikhil1432...@gmail.com>
wrote:

> Hello Team,
>
>
> Thank you for your time in advance.
>
>
> Below are the log lines from my Spark job, which consumes messages from a
> Kafka instance and loads them into HBase. I noticed the WARN lines below,
> and later they turned into errors. Exactly after 7 days the delegation
> token expires, and the job tries to renew it but cannot, even after
> retrying. Mine is a Kerberos cluster. Can you please look into it and
> tell me what the issue is?
>
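> For context, the stream is created with the receiver-based
> KafkaUtils.createStream API at SparkStreamingEngine.java:40 and each batch
> is then written to HBase. A simplified sketch of that shape (the topic,
> consumer group, and ZooKeeper quorum below are placeholders, and the HBase
> write is omitted):
>
> import java.util.HashMap;
> import java.util.Map;
>
> import org.apache.spark.SparkConf;
> import org.apache.spark.streaming.Durations;
> import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
> import org.apache.spark.streaming.api.java.JavaStreamingContext;
> import org.apache.spark.streaming.kafka.KafkaUtils;
>
> public class SparkStreamingEngine {
>     public static void main(String[] args) throws Exception {
>         // 10-second batches, matching the 10000 ms batch times in the logs.
>         SparkConf conf = new SparkConf().setAppName("KafkaToHBase");
>         JavaStreamingContext jssc =
>             new JavaStreamingContext(conf, Durations.seconds(10));
>
>         // Placeholder topic name and receiver thread count.
>         Map<String, Integer> topics = new HashMap<String, Integer>();
>         topics.put("my_topic", 1);
>
>         // Receiver-based stream; this is the createStream call the BlockRDD
>         // log lines refer to.
>         JavaPairReceiverInputDStream<String, String> messages =
>             KafkaUtils.createStream(jssc, "zk-host:2181", "my-group", topics);
>
>         // ... per-batch transform and HBase puts happen here ...
>
>         jssc.start();
>         jssc.awaitTermination();
>     }
> }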
>
> Your time and suggestions are very valuable.
>
>
>
> 15/12/29 11:33:50 INFO scheduler.JobScheduler: Finished job streaming job
> 1451410430000 ms.0 from job set of time 1451410430000 ms
>
> 15/12/29 11:33:50 INFO scheduler.JobScheduler: Starting job streaming job
> 1451410430000 ms.1 from job set of time 1451410430000 ms
>
> 15/12/29 11:33:50 INFO scheduler.JobScheduler: Finished job streaming job
> 1451410430000 ms.1 from job set of time 1451410430000 ms
>
> 15/12/29 11:33:50 INFO rdd.BlockRDD: Removing RDD 120956 from persistence
> list
>
> 15/12/29 11:33:50 INFO scheduler.JobScheduler: Total delay: 0.003 s for
> time 1451410430000 ms (execution: 0.000 s)
>
> 15/12/29 11:33:50 INFO storage.BlockManager: Removing RDD 120956
>
> 15/12/29 11:33:50 INFO kafka.KafkaInputDStream: Removing blocks of RDD
> BlockRDD[120956] at createStream at SparkStreamingEngine.java:40 of time
> 1451410430000 ms
>
> 15/12/29 11:33:50 INFO rdd.MapPartitionsRDD: Removing RDD 120957 from
> persistence list
>
> 15/12/29 11:33:50 INFO storage.BlockManager: Removing RDD 120957
>
> 15/12/29 11:33:50 INFO scheduler.ReceivedBlockTracker: Deleting batches
> ArrayBuffer(1451410410000 ms)
>
> 15/12/29 11:34:00 INFO scheduler.JobScheduler: Added jobs for time
> 1451410440000 ms
>
> 15/12/29 11:34:00 INFO scheduler.JobScheduler: Starting job streaming job
> 1451410440000 ms.0 from job set of time 1451410440000 ms
>
> 15/12/29 11:34:00 INFO scheduler.JobScheduler: Finished job streaming job
> 1451410440000 ms.0 from job set of time 1451410440000 ms
>
> 15/12/29 11:34:00 INFO scheduler.JobScheduler: Starting job streaming job
> 1451410440000 ms.1 from job set of time 1451410440000 ms
>
> 15/12/29 11:34:00 INFO scheduler.JobScheduler: Finished job streaming job
> 1451410440000 ms.1 from job set of time 1451410440000 ms
>
> 15/12/29 11:34:00 INFO rdd.BlockRDD: Removing RDD 120958 from persistence
> list
>
> 15/12/29 11:34:00 INFO scheduler.JobScheduler: Total delay: 0.003 s for
> time 1451410440000 ms (execution: 0.001 s)
>
> 15/12/29 11:34:00 INFO storage.BlockManager: Removing RDD 120958
>
> 15/12/29 11:34:00 INFO kafka.KafkaInputDStream: Removing blocks of RDD
> BlockRDD[120958] at createStream at SparkStreamingEngine.java:40 of time
> 1451410440000 ms
>
> 15/12/29 11:34:00 INFO rdd.MapPartitionsRDD: Removing RDD 120959 from
> persistence list
>
> 15/12/29 11:34:00 INFO storage.BlockManager: Removing RDD 120959
>
> 15/12/29 11:34:00 INFO scheduler.ReceivedBlockTracker: Deleting batches
> ArrayBuffer(1451410420000 ms)
>
> 15/12/29 11:34:10 INFO scheduler.JobScheduler: Added jobs for time
> 1451410450000 ms
>
> 15/12/29 11:34:10 INFO scheduler.JobScheduler: Starting job streaming job
> 1451410450000 ms.0 from job set of time 1451410450000 ms
>
> 15/12/29 11:34:10 INFO scheduler.JobScheduler: Finished job streaming job
> 1451410450000 ms.0 from job set of time 1451410450000 ms
>
> 15/12/29 11:34:10 INFO scheduler.JobScheduler: Starting job streaming job
> 1451410450000 ms.1 from job set of time 1451410450000 ms
>
> 15/12/29 11:34:10 INFO scheduler.JobScheduler: Finished job streaming job
> 1451410450000 ms.1 from job set of time 1451410450000 ms
>
> 15/12/29 11:34:10 INFO rdd.BlockRDD: Removing RDD 120960 from persistence
> list
>
> 15/12/29 11:34:10 INFO scheduler.JobScheduler: Total delay: 0.004 s for
> time 1451410450000 ms (execution: 0.001 s)
>
> 15/12/29 11:34:10 INFO storage.BlockManager: Removing RDD 120960
>
> 15/12/29 11:34:10 INFO kafka.KafkaInputDStream: Removing blocks of RDD
> BlockRDD[120960] at createStream at SparkStreamingEngine.java:40 of time
> 1451410450000 ms
>
> 15/12/29 11:34:10 INFO rdd.MapPartitionsRDD: Removing RDD 120961 from
> persistence list
>
> 15/12/29 11:34:10 INFO storage.BlockManager: Removing RDD 120961
>
> 15/12/29 11:34:10 INFO scheduler.ReceivedBlockTracker: Deleting batches
> ArrayBuffer(1451410430000 ms)
>
> 15/12/29 11:34:13 WARN security.UserGroupInformation:
> PriviledgedActionException as:sssssllllllll (auth:SIMPLE)
> cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
> 15/12/29 11:34:13 WARN ipc.Client: Exception encountered while connecting
> to the server :
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
> 15/12/29 11:34:13 WARN security.UserGroupInformation:
> PriviledgedActionException as:sssssllllllll (auth:SIMPLE)
> cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
> 15/12/29 11:34:13 WARN hdfs.LeaseRenewer: Failed to renew lease for
> [DFSClient_NONMAPREDUCE_1297494905_1] for 30 seconds.  Will retry shortly
> ...
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>
>         at com.sun.proxy.$Proxy14.renewLease(Unknown Source)
>
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571)
>
>         at sun.reflect.GeneratedMethodAccessor122.invoke(Unknown Source)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
>         at java.lang.reflect.Method.invoke(Method.java:606)
>
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>
>         at com.sun.proxy.$Proxy15.renewLease(Unknown Source)
>
>  at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:878)
>
>         at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
>
>         at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
>
>         at
> org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
>
>         at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
>
>         at java.lang.Thread.run(Thread.java:745)
>
> 15/12/29 11:34:14 WARN security.UserGroupInformation:
> PriviledgedActionException as:sssssllllllll (auth:SIMPLE)
> cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
> 15/12/29 11:34:14 WARN ipc.Client: Exception encountered while connecting
> to the server :
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
> 15/12/29 11:34:14 WARN security.UserGroupInformation:
> PriviledgedActionException as:sssssllllllll (auth:SIMPLE)
> cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
> 15/12/29 11:34:14 WARN hdfs.LeaseRenewer: Failed to renew lease for
> [DFSClient_NONMAPREDUCE_1297494905_1] for 31 seconds.  Will retry shortly
> ...
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>
>         at com.sun.proxy.$Proxy14.renewLease(Unknown Source)
>
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571)
>
>         at sun.reflect.GeneratedMethodAccessor122.invoke(Unknown Source)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
>         at java.lang.reflect.Method.invoke(Method.java:606)
>
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>
>         at com.sun.proxy.$Proxy15.renewLease(Unknown Source)
>
>         at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:878)
>
>         at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
>
>         at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
>
>         at
> org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
>
>         at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
>
>         at java.lang.Thread.run(Thread.java:745)
>
> 15/12/29 11:34:15 WARN security.UserGroupInformation:
> PriviledgedActionException as:sssssllllllll (auth:SIMPLE)
> cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
> 15/12/29 11:34:15 WARN ipc.Client: Exception encountered while connecting
> to the server :
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
> 15/12/29 11:34:15 WARN security.UserGroupInformation:
> PriviledgedActionException as:sssssllllllll (auth:SIMPLE)
> cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
> 15/12/29 11:34:15 WARN hdfs.LeaseRenewer: Failed to renew lease for
> [DFSClient_NONMAPREDUCE_1297494905_1] for 32 seconds.  Will retry shortly
> ...
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
> token (HDFS_DELEGATION_TOKEN token 3104414 for sssssllllllll) is expired
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>
>         at com.sun.proxy.$Proxy14.renewLease(Unknown Source)
>
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571)
>
>         at sun.reflect.GeneratedMethodAccessor122.invoke(Unknown Source)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
>         at java.lang.reflect.Method.invoke(Method.java:606)
>
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>
>         at com.sun.proxy.$Proxy15.renewLease(Unknown Source)
>
>         at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:878)
>
>         at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
>
>
>
> *ERROR LINES:*
>
>
>
> 16/01/05 13:14:10 INFO scheduler.ReceivedBlockTracker: Deleting batches
> ArrayBuffer(1452021230000 ms)
> 16/01/05 13:14:10 ERROR scheduler.LiveListenerBus: Listener
> EventLoggingListener threw an exception
> java.lang.reflect.InvocationTargetException
>       at sun.reflect.GeneratedMethodAccessor102.invoke(Unknown Source)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at 
> org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
>       at 
> org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
>       at scala.Option.foreach(Option.scala:236)
>       at 
> org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:144)
>       at 
> org.apache.spark.scheduler.EventLoggingListener.onUnpersistRDD(EventLoggingListener.scala:175)
>       at 
> org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:50)
>       at 
> org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
>       at 
> org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
>       at 
> org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:53)
>       at 
> org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:36)
>       at 
> org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:76)
>       at 
> org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply(AsynchronousListenerBus.scala:61)
>       at 
> org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply(AsynchronousListenerBus.scala:61)
>       at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
>       at 
> org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:60)
> Caused by: java.io.IOException: Lease timeout of 0 seconds expired.
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream.abort(DFSOutputStream.java:2192)
>       at 
> org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:935)
>       at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:889)
>       at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
>       at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
>       at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
>       at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
>       at java.lang.Thread.run(Thread.java:745)
>
>
>
> Please guide me.
>
>
> Regards,
> Nik.
>
