…that are in myRDD, so the reduceByKey after the union keeps the overall
tuple count in myRDD fixed. Or, even with a fixed tuple count, will it keep
consuming more resources?

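A minimal, self-contained sketch of the point in question (illustrative
code, not from this thread; the object name, keys, loop bounds and counts
are made up): even when the filter plus reduceByKey keeps the tuple count
fixed, every batch adds a union + reduceByKey layer to myRDD's lineage, and
each cached generation stays in memory unless it is explicitly unpersisted.

import org.apache.spark.{SparkConf, SparkContext}

object LineageGrowth {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("lineage-growth"))

    // Two keys that stand in for whatever myfilter would keep.
    val keep = Set("a", "b")
    var myRDD = sc.parallelize(Seq("a" -> 0L, "b" -> 0L))

    for (batch <- 1 to 5) {
      val rdd = sc.parallelize(Seq("a" -> 1L, "b" -> 1L, "c" -> 1L))
      myRDD = myRDD.union(rdd.filter(t => keep(t._1))).reduceByKey(_ + _)
      myRDD.cache() // each generation stays cached unless unpersisted

      // The tuple count stays at 2, but the lineage (and the number of
      // stages per action) keeps growing batch after batch.
      val depth = myRDD.toDebugString.split("\n").length
      println(s"batch $batch: count=${myRDD.count()}, lineage lines=$depth")
    }
    sc.stop()
  }
}
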
On 9 July 2015 at …, … wrote:

> …batch. Obviously this is not going to last very long. You fundamentally
> cannot keep processing an ever-increasing amount of data with finite
> resources, can you?
>
> On Thu, Jul 9, 2015 at 3:17 AM, … wrote:
>
>> That's from the Streaming tab of the Spark 1.4 WebUI.
>>
>> On 9 July 2015 at 15:35, Michel Hubert wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>>
>>>>>
;>>>
>>>>> Hi,
>>>>>
>>>>>
>>>>>
>>>>> I was just wondering how you generated to second image with the charts.
>>>>>
>>>>> What product?
>>>>>
>>>>>
&
the Streaming tab for Spark 1.4 WebUI.
>>>
>>> On 9 July 2015 at 15:35, Michel Hubert wrote:
>>>
>>>> Hi,
>>>>
>>>>
>>>>
>>>> I was just wondering how you generated to second image with the charts.
>>>>
ow you generated to second image with the charts.
>>>
>>> What product?
>>>
>>>
>>>
>>> *From:* Anand Nalya [mailto:anand.na...@gmail.com]
>>> *Sent:* donderdag 9 juli 2015 11:48
>>> *To:* spark users
>>> *Subject:* Breaking
>>>
>>> Hi,
>>>
>>> I have an application in which an RDD is being updated with tuples
>>> coming from RDDs in a DStream with the following pattern:
>>>
>>> dstream.foreachRDD(rdd => {
>>>   myRDD = myRDD.union(rdd.filter(myfilter)).reduceByKey(_ + _)
>>> })
>>>
>>> I'm using cache() and checkpointing to cache results. Over time, the
>>> lineage of myRDD keeps growing …

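A self-contained sketch of one way to do what the subject line asks, i.e.
break the lineage: checkpoint and re-materialize the accumulator RDD every
few batches, so later batches build on a short chain. This is illustrative
code, not from the thread; the queue-backed stream, the toy filter, and the
interval of 10 batches are all made-up stand-ins.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Seconds, StreamingContext}
import scala.collection.mutable

object BreakLineage {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("break-lineage"))
    val ssc = new StreamingContext(sc, Seconds(1))
    ssc.checkpoint("/tmp/spark-checkpoints") // illustrative path; this also
                                             // sets the RDD checkpoint dir

    // A toy queue-backed stream standing in for the real DStream.
    val queue    = mutable.Queue(sc.parallelize(Seq("a" -> 1L, "b" -> 1L)))
    val dstream  = ssc.queueStream(queue)
    val myfilter = (t: (String, Long)) => t._1 != "c" // toy stand-in filter

    var myRDD: RDD[(String, Long)] = sc.emptyRDD[(String, Long)]
    var batches = 0L

    dstream.foreachRDD { rdd =>
      val previous = myRDD
      myRDD = previous.union(rdd.filter(myfilter)).reduceByKey(_ + _).cache()
      batches += 1
      if (batches % 10 == 0) { // every N batches, truncate the lineage:
        myRDD.checkpoint()     // mark for checkpointing, then force an
        myRDD.count()          // action so the checkpoint materializes
      }
      previous.unpersist(false) // drop the previous cached generation
    }

    ssc.start()
    ssc.awaitTerminationOrTimeout(15000)
    ssc.stop()
  }
}

Depending on the use case, DStream.updateStateByKey, which manages its own
state and checkpointing, may avoid this hand-rolled pattern entirely.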