This seems like a bug, right? It's not the user's responsibility to manage
the workers.
On Wed, Aug 13, 2014 at 11:28 AM, S. Zhou wrote:
> Sometimes workers are dead but the Spark context does not know it and
> still sends them jobs.
>
>
> On Tuesday, August 12, 2014 7:14
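
The issue above is about the scheduler dispatching work to workers it thinks are alive. As a toy illustration (not Spark's actual internals; all names here such as `WorkerTracker` are made up for the sketch), a heartbeat-based liveness check looks like this:

```python
import time

# Seconds without a heartbeat before a worker is considered dead
# (illustrative value, not a Spark setting).
HEARTBEAT_TIMEOUT = 2.0

class WorkerTracker:
    """Toy tracker: only dispatch to workers with a recent heartbeat."""

    def __init__(self):
        self.last_heartbeat = {}  # worker id -> timestamp of last heartbeat

    def heartbeat(self, worker_id, now=None):
        # Record the time this worker last reported in.
        self.last_heartbeat[worker_id] = time.time() if now is None else now

    def live_workers(self, now=None):
        # A worker is live if it has reported within the timeout window.
        now = time.time() if now is None else now
        return [w for w, t in self.last_heartbeat.items()
                if now - t <= HEARTBEAT_TIMEOUT]

tracker = WorkerTracker()
tracker.heartbeat("worker-1", now=0.0)
tracker.heartbeat("worker-2", now=0.0)
tracker.heartbeat("worker-1", now=5.0)   # worker-2 has stopped reporting
print(tracker.live_workers(now=5.5))     # only worker-1 is still live
```

A scheduler consulting `live_workers()` before dispatching would avoid sending jobs to dead workers; the bug report suggests the master was not doing an equivalent check.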
RDDs don't *need* replication, but it doesn't hurt if the underlying
storage has replication.
On Mon, Aug 4, 2014 at 5:51 PM, Deep Pradhan
wrote:
> Hi,
> Spark can run on top of HDFS.
> While Spark talks about RDDs, which do not need replication because the
> partitions can be built with the h
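
The point being made, that RDD partitions can be rebuilt from lineage rather than from replicated copies, can be sketched with a toy class. This is not Spark's API; `ToyRDD` and its fields are invented for illustration:

```python
# Toy illustration of lineage-based recovery: the object records its durable
# source and its transformation, so a lost partition can be recomputed
# instead of being restored from a replica.
class ToyRDD:
    def __init__(self, source_partitions, fn=lambda x: x):
        self.source = source_partitions  # durable input (e.g. HDFS blocks)
        self.fn = fn                     # recorded transformation ("lineage")
        self.cache = {}                  # computed partitions held in memory

    def compute(self, i):
        # Apply the recorded transformation to the durable source partition.
        part = [self.fn(x) for x in self.source[i]]
        self.cache[i] = part
        return part

    def get(self, i):
        # If the in-memory partition was lost, rebuild it from lineage.
        return self.cache.get(i) or self.compute(i)

rdd = ToyRDD([[1, 2], [3, 4]], fn=lambda x: x * 10)
rdd.compute(0)
rdd.compute(1)
del rdd.cache[1]     # simulate losing a partition along with its worker
print(rdd.get(1))    # recomputed from the source: [30, 40]
```

Only the source data needs to be durable (HDFS replication provides that); the derived partitions are recoverable by replaying the transformation, which is why RDDs themselves don't need replication.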