Hi, did you manage to get it working?
And do you know how this works on Spark 1.3?
The receivers are submitted as tasks. They are supposed to be assigned
to the executors in a round-robin manner by
TaskSchedulerImpl.resourceOffers(). However, sometimes not all the
executors are registered by the time the receivers are submitted. That's
why the receivers fill up only the executors that have already registered.
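A common workaround for that race is to block until the expected number of
executors has registered before starting the StreamingContext. Below is a
minimal sketch, assuming a 10-node cluster with one executor per node; the
application name and batch interval are made up for illustration. Note that
SparkContext.getExecutorMemoryStatus also returns an entry for the driver's
block manager, hence the "+ 1".

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object WaitForExecutors {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("receiver-spread"))

        // Assumed target: one executor per node on a 10-node cluster.
        val expectedExecutors = 10

        // getExecutorMemoryStatus includes the driver, so wait for
        // expectedExecutors + 1 entries in total.
        while (sc.getExecutorMemoryStatus.size < expectedExecutors + 1) {
          Thread.sleep(1000) // poll until every executor has registered
        }

        val ssc = new StreamingContext(sc, Seconds(10))
        // ... create the receiver streams here, then:
        ssc.start()
        ssc.awaitTermination()
      }
    }

Once all executors are up when the receivers are submitted, the round-robin
assignment in resourceOffers() has a chance to spread them across nodes.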
Dear All,
I'm running Spark Streaming (1.0.0) with YARN (2.2.0) on a 10-node cluster.
I set up 10 custom receivers to listen to 10 data streams. I want one
receiver per node in order to maximize the network bandwidth. However, if I
set "--executor-cores 4", the 10 receivers only run on 3 of the nodes.