I am running a standalone Spark Streaming cluster connected to multiple
RabbitMQ endpoints. The application runs for 20-30 minutes before the
following warnings and errors appear:

WARN 2015-04-01 21:00:53,944 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 22 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]
WARN 2015-04-01 21:00:53,944 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 23 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]
WARN 2015-04-01 21:00:53,951 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 20 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]
WARN 2015-04-01 21:00:53,951 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 19 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]
WARN 2015-04-01 21:00:53,952 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 18 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]
WARN 2015-04-01 21:00:53,952 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 17 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]
WARN 2015-04-01 21:00:53,952 org.apache.spark.storage.BlockManagerMaster.logWarning.71: Failed to remove RDD 16 - Ask timed out on [Actor[akka.tcp://sparkExecutor@10.1.242.221:43018/user/BlockManagerActor1#-1913092216]] after [30000 ms]
WARN 2015-04-01 21:00:54,151 org.apache.spark.streaming.scheduler.ReceiverTracker.logWarning.71: Error reported by receiver for stream 0: Error in block pushing thread - java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl.pushAndReportBlock(ReceiverSupervisorImpl.scala:166)
    at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl.pushArrayBuffer(ReceiverSupervisorImpl.scala:127)
    at org.apache.spark.streaming.receiver.ReceiverSupervisorImpl$$anon$2.onPushBlock(ReceiverSupervisorImpl.scala:112)
    at org.apache.spark.streaming.receiver.BlockGenerator.pushBlock(BlockGenerator.scala:182)
    at org.apache.spark.streaming.receiver.BlockGenerator.org$apache$spark$streaming$receiver$BlockGenerator$$keepPushingBlocks(BlockGenerator.scala:155)
    at org.apache.spark.streaming.receiver.BlockGenerator$$anon$1.run(BlockGenerator.scala:87)

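For context, the [30000 ms] figure matches the default value of
spark.akka.askTimeout (30 seconds) in Spark 1.x. As a stopgap I may try
raising it at submit time. A sketch only, not a confirmed fix; the master
URL, class name, and jar name below are placeholders, and the config key
and its units (seconds) are taken from the Spark 1.x configuration docs:

    # Stopgap sketch: raise the Akka ask timeout at submit time.
    # spark.akka.askTimeout is in seconds in Spark 1.x (default 30).
    # Master URL, --class, and jar below are placeholders.
    spark-submit \
      --master spark://master:7077 \
      --conf spark.akka.askTimeout=120 \
      --class com.example.StreamingApp \
      streaming-app.jar

This only widens the window for the BlockManager ask to complete; it does
not address whatever is making the executor slow to respond in the first
place.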

Has anyone run into this before?

--
Bill Young
Threat Stack | Infrastructure Engineer
http://www.threatstack.com
