Hi Michal,

If you use https://github.com/dibbhatt/kafka-spark-consumer , it comes
with its own built-in back-pressure mechanism. It is disabled by default,
so you need to enable it to use this feature with this consumer. It
controls the rate at runtime based on the Scheduling Delay.
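A rough sketch of what enabling it might look like. The property keys and the
ReceiverLauncher call below are assumptions based on the project's README pattern,
so please check the repository for the exact names before using them:

    import java.util.Properties
    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import consumer.kafka.ReceiverLauncher  // assumed import path, verify against the project

    // Property keys below are assumptions; consult the kafka-spark-consumer README.
    val props = new Properties()
    props.put("zookeeper.hosts", "zk-host")
    props.put("zookeeper.port", "2181")
    props.put("kafka.topic", "my-topic")
    props.put("kafka.consumer.id", "my-consumer")
    props.put("consumer.backpressure.enabled", "true")  // assumed key that switches on the built-in back-pressure

    val conf = new SparkConf().setAppName("KafkaSparkConsumerExample")  // placeholder app name
    val ssc = new StreamingContext(conf, Seconds(5))

    // Launch the receiver-based consumer; the receiver count (1) is illustrative.
    val stream = ReceiverLauncher.launch(ssc, props, 1, StorageLevel.MEMORY_AND_DISK_SER)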

Regards,
Dibyendu

On Wed, Sep 16, 2015 at 12:32 PM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:

> I had a workaround for exactly the same scenario:
> http://apache-spark-developers-list.1001551.n3.nabble.com/SparkStreaming-Workaround-for-BlockNotFound-Exceptions-td12096.html
>
> Apart from that, if you are using this consumer
> https://github.com/dibbhatt/kafka-spark-consumer it also has built-in
> rate limiting. Spark 1.5.0 also ships rate limiting/back-pressure
> (I haven't tested it in production though).
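For reference, the Spark 1.5.0 back-pressure mentioned above is turned on through
configuration; a minimal sketch (the application name, rate value, and batch interval
are placeholders):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf()
      .setAppName("BackpressureSketch")                      // placeholder app name
      .set("spark.streaming.backpressure.enabled", "true")   // Spark 1.5+: adapt ingestion rate to scheduling delay
      .set("spark.streaming.receiver.maxRate", "10000")      // optional static cap, records/sec per receiver

    val ssc = new StreamingContext(conf, Seconds(2))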
>
>
>
> Thanks
> Best Regards
>
> On Tue, Sep 15, 2015 at 11:56 PM, Michal Čizmazia <mici...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I have a Reliable Custom Receiver storing messages into Spark. Is there a
>> way to prevent my receiver from storing more messages into Spark when
>> the Scheduling Delay reaches a certain threshold?
>>
>> Possible approaches:
>> #1 Does Spark block on the Receiver.store(messages) call to prevent
>> storing more messages and overflowing the system?
>> #2 How can I obtain the Scheduling Delay in the Custom Receiver, so that I
>> can implement this behaviour myself?
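On approach #2, one way to observe the scheduling delay is a StreamingListener.
A minimal sketch follows; note the listener runs on the driver, so feeding the
value back into a receiver running on an executor would still need some
out-of-band channel (an external store, for instance):

    import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

    // Driver-side listener that records the scheduling delay of each completed batch.
    class SchedulingDelayListener extends StreamingListener {
      @volatile var lastSchedulingDelayMs: Long = 0L

      override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted): Unit = {
        batchCompleted.batchInfo.schedulingDelay.foreach { delay =>
          lastSchedulingDelayMs = delay
          // React here: log the value, or publish it somewhere the receiver can poll.
        }
      }
    }

    // Wiring, assuming an existing StreamingContext named ssc:
    // val listener = new SchedulingDelayListener
    // ssc.addStreamingListener(listener)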
>>
>> Thanks,
>>
>> Mike
>>
>>
>
