To: Jahagirdar, Madhu
Cc: Akhil Das; user
Subject: Re: Dstream Transformations
From the Spark Streaming Programming Guide
(http://spark.apache.org/docs/latest/streaming-programming-guide.html#failure-of-a-worker-node):

"...output operations (like foreachRDD) have at-least once semantics, that is, the transformed data may get written to an external entity more than once in the event of a worker failure."
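To make the consequence of at-least-once output semantics concrete, here is a minimal standalone sketch (plain Python, no Spark involved; the helper `deliver_at_least_once` and both sinks are hypothetical names for illustration). It simulates a batch being written twice after a worker failure and shows why an idempotent, keyed sink absorbs the duplicate while an append-only sink double-counts:

```python
# Sketch: why at-least-once output semantics call for idempotent sinks.
# We simulate a batch being delivered twice (as after a worker failure)
# and compare an append-only sink with a keyed upsert sink.

def deliver_at_least_once(batches):
    """Yield each batch, redelivering the first one to mimic a retry."""
    for i, batch in enumerate(batches):
        yield batch
        if i == 0:          # pretend the worker died after writing batch 0
            yield batch     # the batch is recomputed and written again

batches = [[("a", 1), ("b", 2)], [("c", 3)]]

append_sink = []            # naive sink: duplicate records accumulate
upsert_sink = {}            # idempotent sink: keyed writes overwrite

for batch in deliver_at_least_once(batches):
    for key, value in batch:
        append_sink.append((key, value))
        upsert_sink[key] = value

print(len(append_sink))     # 5 records: ("a", 1) and ("b", 2) written twice
print(upsert_sink)          # {'a': 1, 'b': 2, 'c': 3}: duplicate absorbed
```

The same reasoning applies inside a real foreachRDD body: if the external write is an upsert keyed on a unique record id (or is otherwise idempotent), a replayed batch does no harm.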
> Regards,
> Madhu Jahagirdar
>
> --
> *From:* Akhil Das [ak...@sigmoidanalytics.com]
> *Sent:* Monday, October 06, 2014 1:20 PM
> *To:* Jahagirdar, Madhu
> *Cc:* user
> *Subject:* Re: Dstream Transformations
>
> AFAIK spark doesn't restart worker nodes itself. You can have multiple
> worker nodes and in that case if one worker node goes down, then spark will
> try to recompute those lost RDDs again with those workers who are alive.
From: Akhil Das [ak...@sigmoidanalytics.com]
Sent: Monday, October 06, 2014 1:20 PM
To: Jahagirdar, Madhu
Cc: user
Subject: Re: Dstream Transformations
AFAIK spark doesn't restart worker nodes itself. You can have multiple
worker nodes and in that case if one worker node goes down, then spark will
try to recompute those lost RDDs again with those workers who are alive.
Thanks
Best Regards
On Sun, Oct 5, 2014 at 5:19 AM, Jahagirdar, Madhu <
madhu