Hi Dong,

I have a couple of questions.

Could you explain why those properties

    @Nullable private Boolean isOutputOnEOF = null;
    @Nullable private Boolean isOutputOnCheckpoint = null;
    @Nullable private Boolean isInternalSorterSupported = null;

must be `@Nullable`, instead of having the default value set to `false`?
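
Just to make sure we are talking about the same alternative, here is a minimal sketch of what I would naively expect (the class name and accessors below are purely illustrative, not taken from the FLIP):

    // Illustrative-only alternative: plain primitives defaulting to false,
    // so there is no third "unset" (null) state for callers to handle.
    public class OperatorAttributesExample {

        private boolean isOutputOnEOF = false;
        private boolean isOutputOnCheckpoint = false;
        private boolean isInternalSorterSupported = false;

        public boolean isOutputOnEOF() {
            return isOutputOnEOF;
        }

        public boolean isOutputOnCheckpoint() {
            return isOutputOnCheckpoint;
        }

        public boolean isInternalSorterSupported() {
            return isInternalSorterSupported;
        }
    }

I'm guessing `null` is meant to distinguish "not set" from "explicitly set to false", but it would be good to have that rationale spelled out in the FLIP.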

Second question: have you thought about cases where someone is either
bootstrapping from a streaming source like Kafka, or simply trying to
catch up after a long period of downtime in a purely streaming job?
Generally speaking, cases where the user doesn't care about latency
during the catch-up phase, regardless of whether the source is bounded
or unbounded, but wants to process the data as fast as possible and then
switch dynamically to real-time processing?
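
To illustrate that second scenario, a rough sketch of the kind of job I have in mind (the connector setup, broker address, and topic name below are made up for illustration):

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CatchUpThenRealTimeJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Unbounded Kafka source reading from the earliest offsets; e.g.
            // after a long downtime there can be days of backlog to work
            // through before the job is caught up with real time.
            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("broker:9092")   // made-up address
                    .setTopics("events")                  // made-up topic
                    .setStartingOffsets(OffsetsInitializer.earliest())
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            // The source stays unbounded the whole time, so there is no
            // EOF/bounded-ness signal here; the question is how such a job
            // could run throughput-optimized while catching up and then
            // switch dynamically to latency-optimized processing.
            env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
                    .print();

            env.execute("catch-up-then-real-time");
        }
    }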

Best,
Piotrek

On Sun, Jul 2, 2023 at 16:15 Dong Lin <lindon...@gmail.com> wrote:

> Hi all,
>
> I am opening this thread to discuss FLIP-327: Support stream-batch unified
> operator to improve job throughput when processing backlog data. The design
> doc can be found at
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-327%3A+Support+stream-batch+unified+operator+to+improve+job+throughput+when+processing+backlog+data
> .
>
> This FLIP enables a Flink job to initially operate in batch mode, achieving
> high throughput while processing records that do not require low processing
> latency. Subsequently, the job can seamlessly transition to stream mode for
> processing real-time records with low latency. Importantly, the same state
> can be utilized before and after this mode switch, making it particularly
> valuable when users wish to bootstrap the job's state using historical
> data.
>
> We would greatly appreciate any comments or feedback you may have on this
> proposal.
>
> Cheers,
> Dong
>
