@Tathagata Das so basically you are saying it is supported out of the box,
but we should expect a significant performance hit - is that right?
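For anyone else reading this thread, here is a minimal PySpark sketch of the setup being discussed: a windowed stream explicitly persisted at MEMORY_AND_DISK so evicted window blocks spill to disk rather than being dropped. This is an illustrative sketch, not a tested deployment; the socket source is a stand-in for Kafka, and the host, port, and durations are placeholders.

```python
# Sketch only: windowed DStream pinned to MEMORY_AND_DISK so that blocks
# evicted under memory pressure spill to disk instead of being lost.
from pyspark import SparkContext, StorageLevel
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="LargeWindowSketch")
ssc = StreamingContext(sc, batchDuration=10)  # 10-second batches

# Placeholder source; in our case this would be a Kafka input stream.
lines = ssc.socketTextStream("localhost", 9999)

# 10-minute window sliding every 10 seconds -- may exceed executor memory.
windowed = lines.window(windowDuration=600, slideDuration=10)
windowed.persist(StorageLevel.MEMORY_AND_DISK)
windowed.count().pprint()

ssc.start()
ssc.awaitTermination()
```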



On Tue, Feb 24, 2015 at 5:37 AM, Tathagata Das <[email protected]> wrote:

> The default persistence level is MEMORY_AND_DISK, so under memory pressure
> the LRU policy will evict blocks to disk rather than drop them, and the
> streaming app will not fail. However, since data will constantly be read in
> and out of disk as windows are processed, the performance won't be great.
> So it is best to have sufficient memory to keep all the window data in
> memory.
>
> TD
>
> On Mon, Feb 23, 2015 at 8:26 AM, Shao, Saisai <[email protected]>
> wrote:
>
>> I don't think the current Spark Streaming supports window operations that
>> go beyond its available memory. Internally, Spark Streaming keeps all the
>> data belonging to the effective window in memory; if memory is not enough,
>> the BlockManager will discard blocks according to its LRU policy, so
>> something unexpected may occur.
>>
>> Thanks
>> Jerry
>>
>> -----Original Message-----
>> From: avilevi3 [mailto:[email protected]]
>> Sent: Monday, February 23, 2015 12:57 AM
>> To: [email protected]
>> Subject: spark streaming window operations on a large window size
>>
>> Hi guys,
>>
>> Does Spark Streaming support window operations on a sliding window whose
>> data is larger than the available memory?
>> Currently we are using Kafka as input, but we could change that if needed.
>>
>> thanks
>> Avi
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/spark-streaming-window-operations-on-a-large-window-size-tp21764.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: [email protected] For additional
>> commands, e-mail: [email protected]
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: [email protected]
>> For additional commands, e-mail: [email protected]
>>
>>
>
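The eviction behavior being debated above can be illustrated with a small model. The sketch below is a toy stand-in, not Spark's actual BlockManager: with MEMORY_AND_DISK semantics, the least-recently-used block is spilled to a slower "disk" store when memory fills, so reads still succeed but take the slow path TD describes.

```python
from collections import OrderedDict

class BlockManagerModel:
    """Toy model (not Spark's real BlockManager) of MEMORY_AND_DISK caching:
    when memory is full, the least-recently-used block is evicted to a
    slower 'disk' store instead of being dropped outright."""

    def __init__(self, memory_slots):
        self.memory_slots = memory_slots
        self.memory = OrderedDict()  # block_id -> data, in LRU order
        self.disk = {}

    def put(self, block_id, data):
        self.memory[block_id] = data
        self.memory.move_to_end(block_id)
        while len(self.memory) > self.memory_slots:
            # Evict the least-recently-used block, spilling it to disk.
            evicted_id, evicted = self.memory.popitem(last=False)
            self.disk[evicted_id] = evicted

    def get(self, block_id):
        if block_id in self.memory:
            self.memory.move_to_end(block_id)  # refresh LRU position
            return self.memory[block_id]
        # Slow path: read back from disk (the performance hit TD mentions).
        return self.disk.get(block_id)

bm = BlockManagerModel(memory_slots=2)
bm.put("b1", "batch-1")
bm.put("b2", "batch-2")
bm.put("b3", "batch-3")        # memory full: b1 is spilled to disk
print("b1" in bm.disk)          # → True: still available, just slower
print(bm.get("b1"))             # → batch-1
```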
