Hi,
I have a question regarding Flink.
I'm running Flink in standalone mode on one host (JobManager and TaskManager
on the same host). At first, I was able to submit and cancel jobs normally;
the jobs showed up in the web UI and ran.
However, after ~1 month, when I canceled the old job and submitted
Hello,
I have a topic in Kafka that Flink reads from. I parse the messages in this
topic and write them to BigQuery using streaming inserts, in batches of 500
messages collected with a CountWindow in Flink.
*Problem*: I want to commit the Kafka offsets manually, and only after a
batch has been written successfully to BigQuery.
*Reason:*
I saw t
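For concreteness, here is a minimal sketch of the pipeline I described above.
It is only how I understand it, not a tested job: the topic, dataset, table,
and the single "payload" field are placeholders, and it assumes the Flink
Kafka connector and the Google Cloud BigQuery Java client.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.api.functions.windowing.AllWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;

public class KafkaToBigQuery {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "bq-writer");               // placeholder

        env.addSource(new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props))
           // group the stream into batches of 500 records
           .countWindowAll(500)
           .apply(new AllWindowFunction<String, List<String>, GlobalWindow>() {
               @Override
               public void apply(GlobalWindow window, Iterable<String> values,
                                 Collector<List<String>> out) {
                   List<String> batch = new ArrayList<>();
                   for (String v : values) {
                       batch.add(v);
                   }
                   out.collect(batch);
               }
           })
           .addSink(new RichSinkFunction<List<String>>() {
               private transient BigQuery bigQuery;
               private transient TableId tableId;

               @Override
               public void open(Configuration parameters) {
                   bigQuery = BigQueryOptions.getDefaultInstance().getService();
                   tableId = TableId.of("my_dataset", "my_table"); // placeholders
               }

               @Override
               public void invoke(List<String> batch, Context context) {
                   InsertAllRequest.Builder request = InsertAllRequest.newBuilder(tableId);
                   for (String row : batch) {
                       request.addRow(InsertAllRequest.RowToInsert.of(
                               Collections.singletonMap("payload", row)));
                   }
                   InsertAllResponse response = bigQuery.insertAll(request.build());
                   if (response.hasErrors()) {
                       // failing the sink forces a restart from the last checkpoint,
                       // so the batch is retried instead of silently dropped
                       throw new RuntimeException(
                               "BigQuery insert failed: " + response.getInsertErrors());
                   }
               }
           });

        env.execute("kafka-to-bigquery");
    }
}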
> In both cases, you might write the same record to BigQuery twice.
>
> If in doubt if your sink fulfills the criteria above, feel free to share
> it.
>
> Cheers,
>
> Konstantin
>
> On Mon, Mar 25, 2019 at 7:50 AM Son Mai wrote:
>
>> Hello,
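For context on the duplicates Konstantin mentions: with checkpointing
enabled, Flink's Kafka consumer commits offsets back to Kafka only when a
checkpoint completes, not per record, which is the closest built-in
equivalent to the manual commit asked about above. A minimal sketch
(Flink 1.8-era API; topic and property values are placeholders):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class CheckpointedOffsets {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // offsets are committed back to Kafka only when a checkpoint completes
        env.enableCheckpointing(60_000L);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "bq-writer");               // placeholder

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);
        // this is the default once checkpointing is enabled; shown for clarity
        consumer.setCommitOffsetsOnCheckpoints(true);

        env.addSource(consumer).print();
        env.execute("checkpointed-offsets");
    }
}

Even with this, a non-transactional sink such as BigQuery streaming inserts
only gives at-least-once: after a restore, records since the last checkpoint
are replayed and written again. BigQuery's best-effort de-duplication via an
insertId (the two-argument RowToInsert.of) is the usual mitigation.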
Hi all,
when I create new classes extending ProcessFunction or implementing
WindowFunction, there is a *Collector out* parameter for output.
How is this output processed in the next stage, for example a Sink or
another WindowAssigner? Is it processed immediately by the next operator via
a push mechanism, or is it pulled by the next operator?
> As the Collector's documentation says, it collects a record and forwards it.
> The collector is the "push" counterpart of the Iterator which "pulls" data
> in.
>
> Best,
> tison.
>
>
> Son Mai wrote on Thu, Apr 4, 2019 at 10:15 AM:
>
>> Hi all,
>>
>> when I create new classes extending ProcessFunction or implementing
>> WindowFunction, there is a *Collector out* parameter for output.
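To make the push behavior concrete, a minimal sketch (the class name and
transformation are made up): each call to out.collect(...) hands the record
downstream right away. If the next operator is chained into the same task,
its processElement(...) runs synchronously inside collect(); otherwise the
record is serialized into a network buffer for the downstream task. Nothing
waits for the next operator to "pull".

import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

public class UpperCaseFunction extends ProcessFunction<String, String> {

    @Override
    public void processElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        // emitting pushes the record to the next operator immediately
        out.collect(value.toUpperCase());
    }
}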