I am running a Spark Streaming application on a stand-alone cluster, connected
to RabbitMQ endpoint(s). The application runs for 20-30 minutes before failing
with the following error:

WARN 2015-04-01 21:00:53,944 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 22 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
WARN 2015-04-01 21:00:53,944 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 23 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
WARN 2015-04-01 21:00:53,951 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 20 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
WARN 2015-04-01 21:00:53,951 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 19 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
WARN 2015-04-01 21:00:53,952 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 18 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
WARN 2015-04-01 21:00:53,952 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 17 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
WARN 2015-04-01 21:00:53,952 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 16 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]}
WARN 2015-04-01 21:00:54,151 org.apache.spark.streaming.scheduler.ReceiverTracker.logWarning.71: Error reported by receiver for stream 0: Error in block pushing thread -
java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl.pushAndReportBlock(ReceiverSupervisorImpl.scala:166)
    at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl.pushArrayBuffer(ReceiverSupervisorImpl.scala:127)
    at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl$$anon$2.onPushBlock(ReceiverSupervisorImpl.scala:112)
    at org.apache.spark.streaming.receiver.BlockGenerator.pushBlock(BlockGenerator.scala:182)
    at org.apache.spark.streaming.receiver.BlockGenerator.org$apache$spark$streaming$receiver$BlockGenerator$$keepPushingBlocks(BlockGenerator.scala:155)
    at org.apache.spark.streaming.receiver.BlockGenerator$$anon$1.run(BlockGenerator.scala:87)


Has anyone run into this before? 
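For what it's worth, the 30 seconds in both messages matches Spark's default Akka ask timeout, so one workaround we are considering (untested, and the values below are guesses, not a verified fix) is raising that timeout and throttling the receiver in spark-defaults.conf:

```
# Raise the Akka ask timeout used by BlockManager/receiver RPCs (seconds; default 30)
spark.akka.askTimeout              120
# Cap the receiver ingest rate (records/sec) so block pushing can keep up; unlimited by default
spark.streaming.receiver.maxRate   10000
# Generate blocks less frequently (milliseconds; default 200)
spark.streaming.blockInterval      1000
```

That said, we would rather understand why block pushing stalls than just paper over it with a longer timeout.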



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-Error-in-block-pushing-thread-tp22356.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
