--
Sender: Yun Gao
Date: 2021/01/14 15:26:54
Recipient: Ardhani Narasimha; sagar
Cc: Flink User Mail List
Subject: Re: Re: Using Kafka as bounded source with DataStream API in batch mode (Flink 1.12)
Hi Ardhani,
Whenever I want to run this Flink job, I will first call a Java API to put the data into the four different Kafka topics; what data goes into Kafka is coded into those APIs. Once that is complete, I want to run the Flink job on the data available in Kafka and perform bus
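For a workflow like this (produce a finite data set to Kafka, then process it), the job has to run in the DataStream API's batch execution mode, which in turn requires every source to be bounded. A minimal sketch of the relevant setting, assuming Flink 1.12 (the class name and pipeline are placeholders):

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BatchModeSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Select batch execution for this DataStream job. In BATCH mode the
        // job can only run if all of its sources are bounded; an unbounded
        // source (such as a legacy SourceFunction-based Kafka source) will
        // cause the job to fail at submission time.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        // ... define sources and transformations here, then:
        // env.execute("my-batch-job");
    }
}
```

The mode can also be set from the command line with `-Dexecution.runtime-mode=BATCH`, which avoids hard-coding it in the program.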
Hi Sagar,
I think the problem is that legacy sources implemented by extending SourceFunction are all defined as CONTINUOUS_UNBOUNDED when added via env.addSource(). Although there is a hacky way to add a legacy source as a BOUNDED source [1], I think you may first try the new version of the Kafka source (KafkaSource), which supports bounded reads.
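The suggestion above can be sketched as follows — an untested sketch assuming the new KafkaSource connector shipped with Flink 1.12; the broker address, topic names, and group id are placeholders:

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BoundedKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")      // placeholder
                .setTopics("topic-a", "topic-b")            // placeholder topics
                .setGroupId("my-batch-job")                 // placeholder
                .setStartingOffsets(OffsetsInitializer.earliest())
                // setBounded() is what makes this source BOUNDED: it reads up
                // to the offsets current at job start and then finishes,
                // instead of polling forever like a legacy SourceFunction.
                .setBounded(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-bounded")
           .print();

        env.execute("Bounded Kafka batch job");
    }
}
```

Because the source declares itself bounded, the BATCH runtime mode accepts it and the job terminates once the captured offsets are consumed.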
Interesting use case.
Can you please elaborate more on this?
On what criteria do you want to batch? Time? Count? Or size?
On Thu, 14 Jan 2021 at 12:15 PM, sagar wrote:
> Hi Team,
>
> I am getting the following error while running a DataStream API job in
> batch mode with a Kafka source.
> I am usi