I stumbled upon zipWithUniqueId/zipWithIndex. Is this what you are looking
for?
https://spark.apache.org/docs/latest/api/java/org/apache/spark/api/java/JavaRDDLike.html#zipWithUniqueId()
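For example (a quick sketch in Scala; the data, app name, and master are made up):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc  = new SparkContext(new SparkConf().setAppName("zip-ids").setMaster("local[2]"))
    val rdd = sc.parallelize(Seq("a", "b", "c", "d"), 2)

    // zipWithUniqueId: ids are unique Longs but not necessarily consecutive,
    // and no Spark job is triggered to compute them
    rdd.zipWithUniqueId().collect().foreach(println)

    // zipWithIndex: consecutive 0-based indices, but it triggers a Spark job
    // first to compute the per-partition offsets
    rdd.zipWithIndex().collect().foreach(println)

    sc.stop()

Note that the ids are unique only within that one RDD; across the batches of a stream you would still need to add something batch-specific yourself.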
On 22 June 2015 at 06:16, Michal Čizmazia wrote:
If I am not mistaken, one way to see accumulators is that they are write-only
for the workers, and their value can only be read by the driver. Therefore
they cannot be used for ID generation as you wish.
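To illustrate the write-only view (a minimal sketch against the Spark 1.x accumulator API; sc and rdd as in my other mail):

    val acc = sc.accumulator(0L, "processed")

    rdd.foreach { _ =>
      acc += 1L  // tasks may only add; calling acc.value inside a task
                 // throws an UnsupportedOperationException
    }

    println(acc.value)  // the aggregated value is readable on the driver only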
On 22 June 2015 at 04:30, anshu shukla wrote:
But I just want to update the RDD by appending a unique message ID to each
element of the RDD, where the ID is automatically incremented (m++ ...) every
time a new element arrives in the RDD.
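In other words, for each incoming message I want numbering like this local sketch (just to show the m++ semantics; this is not a distributed solution):

    import java.util.concurrent.atomic.AtomicLong

    val counter = new AtomicLong(0L)
    def tag(msg: String): (Long, String) =
      (counter.incrementAndGet(), msg)  // every new message gets the next ID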
On Mon, Jun 22, 2015 at 7:05 AM, Michal Čizmazia wrote:
StreamingContext.sparkContext()
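That is, something along these lines (a sketch; the batch interval and names are arbitrary, and the accumulator call is the Spark 1.x API):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(new SparkConf().setAppName("app"), Seconds(1))
    val sc  = ssc.sparkContext               // reuse the streaming context's SparkContext
    val acc = sc.accumulator(0L, "counter")  // no second SparkContext needed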
On 21 June 2015 at 21:32, Will Briggs wrote:
It sounds like accumulators are not necessary in Spark Streaming - see this
post (
http://apache-spark-user-list.1001560.n3.nabble.com/Shared-variable-in-Spark-Streaming-td11762.html)
for more details.
On June 21, 2015, at 7:31 PM, anshu shukla wrote:
In Spark Streaming, since we already have a StreamingContext, which does not
allow us to create accumulators, we have to get a SparkContext to initialize
the accumulator value.
But having two Spark contexts will not solve the problem.
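Roughly the situation (a sketch with placeholder names):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("app")
    val ssc  = new StreamingContext(conf, Seconds(1))
    // val sc = new SparkContext(conf)  // a second context just for the accumulator;
    //                                  // this fails, since only one SparkContext may
    //                                  // be active per JVM by default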
Please help!
--
Thanks & Regards,
Anshu Shukla