Hi Hanan,

In many cases this kind of behavior depends on how the source is implemented.

Since it's not a built-in connector, it would be better to share your
custom source with the community, so that people can help you figure out
where the problem is.
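
For reference, here is a very rough sketch of a pull-based ZMQ source that
cooperates with Flink's backpressure (the class name, the endpoint, and the
jeromq calls are my own assumptions, not your code). The key point is that
records are emitted synchronously via ctx.collect() inside run(); collect()
blocks when the downstream buffers are full, so backpressure reaches the
receive loop instead of records piling up on the heap.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext;
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Illustrative only: a PULL socket drained directly in the source thread.
public class ZmqPullSource extends RichSourceFunction<byte[]> {

    private final String endpoint;             // e.g. "tcp://localhost:5555" (placeholder)
    private volatile boolean running = true;

    private transient ZContext context;
    private transient ZMQ.Socket socket;

    public ZmqPullSource(String endpoint) {
        this.endpoint = endpoint;
    }

    @Override
    public void open(Configuration parameters) {
        context = new ZContext();
        socket = context.createSocket(SocketType.PULL);
        socket.connect(endpoint);
        // A modest receive timeout lets the loop check the running flag
        // periodically instead of blocking forever in recv().
        socket.setReceiveTimeOut(1000);
    }

    @Override
    public void run(SourceContext<byte[]> ctx) {
        while (running) {
            byte[] msg = socket.recv();        // returns null on timeout
            if (msg != null) {
                // collect() blocks when Flink's output buffers are full,
                // which is how backpressure reaches this receive loop.
                ctx.collect(msg);
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    @Override
    public void close() {
        if (context != null) {
            context.close();
        }
    }
}

If, on the other hand, your source receives from ZMQ on a separate thread and
buffers records in an unbounded in-memory queue before emitting them,
backpressure never reaches the producer, and the heap would grow exactly as
you describe.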

WDYT?

Best,
Vino

Hanan Yehudai <hanan.yehu...@radcom.com> wrote on Tue, Nov 26, 2019 at 12:27 PM:

> Hi, I am trying to do some performance tests on my Flink deployment.
>
> I am implementing an extremely simplistic use case
>
> I built a ZMQ Source
>
>
>
> The topology is ZMQ Source -> Mapper -> DiscardingSink (a sink that does
> nothing).
>
>
>
> Data is pushed via ZMQ at a very high rate.
>
> When the incoming rate from ZMQ is higher than the rate Flink can keep up
> with, I can see that the JVM heap is filling up (using Flink's metrics).
> When the heap is fully consumed, the TM chokes, a heartbeat is missed, and
> the job fails.
>
>
>
> I was expecting Flink to handle this type of backpressure gracefully and
> not fail the job.
>
>
>
> Note: the mapper has no state to persist.
>
> See the Grafana charts below: on the left is the TM heap used.
>
> On the right: the ZMQ out rate (yellow) vs. the Flink consuming rate
> reported by the ZMQSource.
>
> 1 GB is the configured heap size.