Hi all,
We're running Spark 1.0 on CDH 5.1.2. We're using Spark in YARN-client
mode.
We're seeing that one of our nodes is not being assigned any tasks, and no
resources (RAM, CPU) are being used on this node. In the CM UI this worker
node is in good health and the Spark Worker process is running.
> This is very likely because the serialized map output locations buffer
> exceeds the Akka frame size. Please try setting "spark.akka.frameSize"
> (default 10 MB) to some higher number, like 64 or 128.
>
> In the newest version of Spark, this would throw a better error, for what
> it's worth.
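
The frameSize suggestion above can be passed at submit time. A minimal sketch, assuming a Spark 1.x spark-submit invocation in YARN-client mode as described in this thread; the class name and jar are hypothetical placeholders:

```shell
# Raise the Akka frame size used for map output location messages.
# Value is in MB; the default is 10 in Spark 1.x.
# "com.example.MyApp" and "app.jar" are placeholders for your own app.
spark-submit \
  --master yarn-client \
  --class com.example.MyApp \
  --conf spark.akka.frameSize=128 \
  app.jar
```

The same property can equally be set in conf/spark-defaults.conf or on a SparkConf before the SparkContext is created.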
>
>
>
> On Mon, May 19, 2014 at 8:39 PM, jonathan.keebler <[hidden email]> wrote:
Has anyone observed Spark worker threads stalling during a shuffle phase with
the following message (one per worker host) being echoed to the terminal on
the driver thread?
INFO spark.MapOutputTrackerActor: Asked to send map output locations for
shuffle 0 to [worker host]...
At this point Spark-