Hi
I have a question: why doesn't the upsert-kafka sink send all of the
changelog data, including UPDATE_BEFORE and DELETE records? If it sent all
of the data, the source would not need to keep state to recover the
changelog, and that state can become very large.
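For concreteness, here is a minimal sketch of the kind of table I mean
(topic, schema, and names are only placeholders); as I understand it, the
sink keeps only the latest value per key and writes a DELETE as a
null-value record, so the source side needs a stateful normalization step
to rebuild UPDATE_BEFORE rows:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertKafkaSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Illustrative upsert-kafka table: only the latest value per key is written,
        // and a DELETE becomes a Kafka record with a null value (tombstone).
        tEnv.executeSql(
            "CREATE TABLE user_counts (" +
            "  user_id STRING," +
            "  cnt BIGINT," +
            "  PRIMARY KEY (user_id) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'upsert-kafka'," +
            "  'topic' = 'user_counts'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'key.format' = 'csv'," +
            "  'value.format' = 'csv'" +
            ")");

        // When this table is read back, the source only sees the latest value per key,
        // so the planner adds a stateful step (ChangelogNormalize) to regenerate
        // UPDATE_BEFORE rows; this is the state I am asking about.
    }
}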
Best
ocean.shi
> Best,
> Congxian
>
>
> Jocean shi wrote on Tue, Dec 24, 2019 at 3:05 PM:
>
> > Hi Jark,
> >
> > I see your point. We discussed this question at Flink Forward 2019.
> > I know that I can write a custom operator to resolve this problem,
> > but it also has some other problems:
> ... interface.
> The TwoInputStreamOperator interface provides "processWatermark1" and
> "processWatermark2", which handle the watermarks for the left stream and
> the right stream. You can then ignore the watermarks from the right stream
> and forward the watermark from the left stream.
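For reference, here is a minimal sketch of such a custom operator, assuming
Flink's TwoInputStreamOperator API as described above (the class name is
made up and the element handlers are left as stubs):

import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.TwoInputStreamOperator;
import org.apache.flink.streaming.api.watermark.Watermark;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

// Only tracks the left stream's watermark; right-stream watermarks are dropped.
public class LeftWatermarkOnlyOperator<IN1, IN2, OUT>
        extends AbstractStreamOperator<OUT>
        implements TwoInputStreamOperator<IN1, IN2, OUT> {

    @Override
    public void processElement1(StreamRecord<IN1> element) throws Exception {
        // handle left-stream elements here
    }

    @Override
    public void processElement2(StreamRecord<IN2> element) throws Exception {
        // handle right-stream elements here
    }

    @Override
    public void processWatermark1(Watermark mark) throws Exception {
        // forward the left stream's watermark directly
        processWatermark(mark);
    }

    @Override
    public void processWatermark2(Watermark mark) throws Exception {
        // ignore watermarks from the right stream
    }
}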
Hi all:
Currently, The "TwoInputStreamOperator" such as
"CoBroadcastWithKeyedOperator" "KeyedCoProcessOperator" and the
(Co)stream such as "ConnectedStreams" "BroadcastConnectedStream" only
support compute watermark by two stream.
but we just need one stream to compute watermark in some case.
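As a usage sketch (stream contents, types, and names are illustrative), a
custom operator like the LeftWatermarkOnlyOperator sketched above can be
plugged in through ConnectedStreams#transform instead of the built-in
co-operators:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LeftWatermarkOnlyExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Illustrative inputs; a real job would use timestamped sources.
        DataStream<String> left = env.fromElements("a", "b");
        DataStream<String> right = env.fromElements("x", "y");

        // transform() accepts any TwoInputStreamOperator; the element handlers in the
        // sketch above are stubs, so this only demonstrates the wiring.
        left.connect(right)
            .transform("left-watermark-only", Types.STRING,
                new LeftWatermarkOnlyOperator<String, String, String>())
            .print();

        env.execute("left-watermark-only-example");
    }
}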
Thanks
Thomas Weise wrote on Sat, Mar 23, 2019 at 12:33 PM:
> Done!
>
>
> On Fri, Mar 22, 2019 at 9:11 PM Jocean shi wrote:
>
> > Hi,
> >
> > I want to contribute to Apache Flink.
> > Would you please give me the contributor permission?
> > My JIRA ID is Jocean.
> >
> > Best
> > Jocean.shi
> >
>
Hi,
I want to contribute to Apache Flink.
Would you please give me the contributor permission?
My JIRA ID is Jocean.
Best
Jocean.shi
Hi all,
I have encountered an error when trying to register a table from Kafka
using the CSV format.
The error is: "Could not find a suitable table factory for
'org.apache.flink.table.factories.DeserializationSchemaFactory'".
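Here is roughly what I am trying, as a minimal sketch assuming the Flink
1.9 descriptor API (topic, brokers, fields, and table name are
placeholders); as I understand it, the CSV factory that implements
DeserializationSchemaFactory is provided by the flink-csv module, which has
to be on the job classpath:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.descriptors.Csv;
import org.apache.flink.table.descriptors.Kafka;
import org.apache.flink.table.descriptors.Schema;

public class KafkaCsvRegistration {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        tableEnv.connect(
                new Kafka()
                    .version("universal")
                    .topic("input-topic")
                    .property("bootstrap.servers", "localhost:9092"))
            // the Csv descriptor is resolved through a DeserializationSchemaFactory
            .withFormat(new Csv())
            .withSchema(
                new Schema()
                    .field("user", Types.STRING)
                    .field("cnt", Types.LONG))
            .inAppendMode()
            .registerTableSource("kafka_csv_table");

        // Queries against "kafka_csv_table" would follow here.
    }
}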
Jocean