Hi,
Yes, we are aware of this issue and would like to have it soon, but at the
moment it does not look like clean shutdown will be ready for Flink 1.5.
Another solution is a Kafka exactly-once producer implemented on top of the
GenericWriteAheadSink. It could avoid this issue (at a cost of sign…
I am re-upping this thread now that FlinkKafkaProducer011 is out. The new
producer, when used with exactly-once semantics, has the rather
troublesome behavior that it will fall back to at-most-once, rather than
at-least-once, if the job is down for longer than the Kafka broker's
transaction.max…
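For context on the fall-back: Kafka brokers abort transactions that stay open longer than the broker-side transaction.max.timeout.ms, so any data sitting in an unfinished transaction is lost if the job is down longer than that. A hedged sketch of the producer-side knob involved, using plain kafka-clients property names (the broker address and transactional id below are placeholders, not values from this thread):

```java
import java.util.Properties;

// Sketch of one mitigation: raise the producer-side transaction timeout so
// in-flight transactions can survive longer job downtime. The broker-side
// limit transaction.max.timeout.ms must be >= this value, or the producer
// fails at initialization.
class TxnTimeoutSketch {
    static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");     // placeholder
        props.setProperty("transactional.id", "my-flink-kafka-sink"); // hypothetical id
        // 15 minutes, matching the broker default for transaction.max.timeout.ms
        props.setProperty("transaction.timeout.ms", String.valueOf(15 * 60 * 1000));
        return props;
    }
}
```

The producer timeout only helps up to the broker limit; downtime beyond that still risks the at-most-once fall-back described above.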
I would propose that implementations of NewSource be non-blocking/asynchronous.
For example, something like

public abstract Future<T> getCurrent();

which would allow us to perform certain actions while there is no data
available to process (for example, flushing output buffers). Something like
this…
@Eron Yes, that would be the difference in characterisation. I think
technically all sources could be transformed that way: push incoming data
into a (blocking) queue and have the "getElement()" method pull from it.
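A minimal sketch of that transformation, under illustrative names (PushToPullAdapter, onRecord, getElement are not real Flink APIs). A bounded queue also doubles as a simple buffer pool and back-pressure mechanism:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Adapter turning a push-oriented source (callback-driven) into a
// pull-oriented one: the connector's callback enqueues records, and the
// runtime dequeues them. put() blocks when the bounded buffer is full,
// which propagates back-pressure to the external system.
class PushToPullAdapter<T> {
    private final BlockingQueue<T> queue;

    PushToPullAdapter(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Push side: invoked by the connector's callback for each record.
    void onRecord(T record) throws InterruptedException {
        queue.put(record);
    }

    // Pull side: the runtime calls this to fetch the next element,
    // blocking while the buffer is empty.
    T getElement() throws InterruptedException {
        return queue.take();
    }
}
```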
> On 15. Sep 2017, at 20:17, Elias Levy wrote:
On Fri, Sep 15, 2017 at 10:02 AM, Eron Wright wrote:
Aljoscha, would it be correct to characterize your idea as a 'pull' source
rather than the current 'push'? It would be interesting to look at the
existing connectors to see how hard it would be to reverse their
orientation. e.g. the source might require a buffer pool.
Also relevant for this discussion: several people (including me) have by now
floated the idea of reworking the source interface to take away the
responsibility for stopping/canceling/continuing from specific source
implementations and instead give that power to the system. Currently each
so…
I too am curious about stop vs cancel. I'm trying to understand the
motivations a bit more.
The current behavior of stop is basically that the sources become bounded,
leading to the job winding down.
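To make the stop-means-bounded behavior concrete, here is a toy model, deliberately not the real SourceFunction/StoppableFunction interfaces: stop() makes the source finish cleanly so the job can wind down, while cancel() aborts with no end-of-input signal. The "FINAL-WATERMARK" marker is a hypothetical stand-in for that signal:

```java
import java.util.ArrayList;
import java.util.List;

// Toy contrast between stop and cancel for a source. stop() means: emit no
// further records but still signal a clean end of input, so downstream
// operators drain and the job winds down. cancel() means: abandon work
// immediately, with no end-of-input signal.
class GracefulSource {
    private volatile boolean running = true;  // cleared by cancel()
    private volatile boolean stopped = false; // set by stop()

    List<String> run(List<String> input) {
        List<String> out = new ArrayList<>();
        for (String rec : input) {
            if (!running) return out;  // cancel: abort, no end-of-input signal
            if (stopped) break;        // stop: no new records, fall through to drain
            out.add(rec);
        }
        out.add("FINAL-WATERMARK");    // hypothetical end-of-input marker
        return out;
    }

    void cancel() { running = false; }
    void stop()   { stopped = true; }
}
```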
The interesting question is how best to support 'planned' maintenance
procedures such as app upg…
Hey Elias,
Sorry for the delay here. No, stop is not deprecated, but it is not fully
implemented yet. One missing part is the migration of the existing source
functions, as you say.
Let me pull in Till for more details on this. @Till: Is there more
missing than migrating the sources?
Here is the PR and disc…
Anyone?
On Mon, Sep 11, 2017 at 6:17 PM, Elias Levy wrote:
I was wondering about the status of the flink stop command. At first blush
it would seem to be the preferable way to shut down a Flink job, but it
depends on StoppableFunction being implemented by sources, and I notice that
the Kafka source does not seem to implement it. In addition, the command does…