Re: Flink Batch Processing

2020-09-29 Thread Timo Walther
Hi Sunitha, currently, not every connector can be mixed with every API. I agree that it is confusing from time to time. The HBase connector is an InputFormat. DataSet, DataStream, and Table API can all work with InputFormats. The current HBase input format might work best with the Table API. If you li
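A minimal sketch of reading HBase through the Table API, assuming Flink 1.11 with the Blink planner and the flink-connector-hbase dependency on the classpath; the table name, column family, and ZooKeeper quorum below are placeholders, not details from the original thread:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class HBaseTableRead {
    public static void main(String[] args) {
        // Batch-mode Table environment on the Blink planner.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // Register an HBase-backed table; 'hbase-1.4' is the connector
        // identifier documented for Flink 1.11. Names are illustrative only.
        tEnv.executeSql(
            "CREATE TABLE hTable (" +
            "  rowkey STRING," +
            "  cf1 ROW<col1 STRING, col2 BIGINT>," +
            "  PRIMARY KEY (rowkey) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'hbase-1.4'," +
            "  'table-name' = 'my_hbase_table'," +
            "  'zookeeper.quorum' = 'localhost:2181'" +
            ")");

        // Read from it like any other table.
        Table result = tEnv.sqlQuery("SELECT rowkey, cf1.col1, cf1.col2 FROM hTable");
        result.execute().print();
    }
}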

Re: Flink Batch Processing

2020-09-29 Thread Till Rohrmann
Hi Sunitha, here is some documentation about how to use the HBase sink with Flink [1, 2]. [1] https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/connectors/hbase.html [2] https://docs.cloudera.com/csa/1.2.0/datastream-connectors/topics/csa-hbase-connector.html Cheers, Till On Tue,
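Following the connector documentation in [1], a short continuation of the sketch above for using HBase as a sink; the sink table, columns, and query are hypothetical:

// Declare a second HBase-backed table and write into it. Placeholders only.
tEnv.executeSql(
    "CREATE TABLE hbase_sink (" +
    "  rowkey STRING," +
    "  cf ROW<col2 BIGINT>," +
    "  PRIMARY KEY (rowkey) NOT ENFORCED" +
    ") WITH (" +
    "  'connector' = 'hbase-1.4'," +
    "  'table-name' = 'my_output_table'," +
    "  'zookeeper.quorum' = 'localhost:2181'" +
    ")");

// INSERT INTO submits a job that writes the query result to HBase.
tEnv.executeSql("INSERT INTO hbase_sink SELECT rowkey, ROW(cf1.col2) FROM hTable");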

Re: Flink Batch Processing

2020-09-29 Thread s_penakalap...@yahoo.com
Hi Piotrek, Thank you for the reply. The Flink changes are good; however, Flink is changing so much that we are unable to find good implementation examples, either in the Flink documentation or on any other website. Using HBaseInputFormat I was able to read the data as a DataSet<>, now I see that DataSet wou
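For reference, a rough sketch of the HBaseInputFormat/DataSet pattern described here, assuming the legacy flink-hbase addon (org.apache.flink.addons.hbase.TableInputFormat, whose package and method names vary slightly by version) and an hbase-site.xml on the classpath; the table, family, and qualifier names are made up:

import org.apache.flink.addons.hbase.TableInputFormat;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseDataSetRead {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // createInput wires the HBase InputFormat into the (soon to be
        // deprecated) DataSet API.
        DataSet<Tuple2<String, String>> rows = env.createInput(
            new TableInputFormat<Tuple2<String, String>>() {
                @Override
                protected Scan getScanner() {
                    return new Scan().addFamily(Bytes.toBytes("cf1"));
                }

                @Override
                protected String getTableName() {
                    return "my_hbase_table"; // placeholder
                }

                @Override
                protected Tuple2<String, String> mapResultToTuple(Result r) {
                    String key = Bytes.toString(r.getRow());
                    String val = Bytes.toString(
                        r.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("col1")));
                    return Tuple2.of(key, val);
                }
            });

        rows.print();
    }
}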

Re: Flink Batch Processing

2020-09-28 Thread Piotr Nowojski
Hi Sunitha, First and foremost, the DataSet API will be deprecated soon [1], so I would suggest trying to migrate to the DataStream API. Using the DataStream API doesn't mean that you cannot work with bounded inputs - you can. Flink SQL (Blink planner) is in fact using the DataStream API to ex
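To illustrate the point about bounded inputs, a small sketch of a DataStream job over a finite source; the file path and the transformation are placeholders:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BoundedDataStreamJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A bounded source: the job finishes once the file is fully read.
        DataStream<String> lines = env.readTextFile("hdfs:///path/to/input");

        lines.map((MapFunction<String, Integer>) String::length)
             .print();

        env.execute("bounded datastream example");
    }
}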

Re: Flink batch processing fault tolerance

2017-02-17 Thread Aljoscha Krettek
ready focusing on realizing the ideas mentioned in FLIP-1, > wish to contribute to flink in months. > > Best, > > Zhijiang > > -- > From: Si-li Liu > Sent: Friday, February 17, 2017 11:22 > To: user > Subject: Re: Flink ba

Re: Flink batch processing fault tolerance

2017-02-16 Thread Si-li Liu
Aljoscha Krettek [mailto:aljos...@apache.org] >> *Sent:* Thursday, February 16, 2017 2:48 PM >> *To:* user@flink.apache.org >> *Subject:* Re: Flink batch processing fault tolerance >> >> Hi, >> >> yes, this is indeed true. We had some plans for

Re: Flink batch processing fault tolerance

2017-02-16 Thread Renjie Liu
> *From:* Aljoscha Krettek [mailto:aljos...@apache.org] > *Sent:* Thursday, February 16, 2017 2:48 PM > *To:* user@flink.apache.org > *Subject:* Re: Flink batch processing fault tolerance > > Hi, > > yes, this is indeed true. We had some plans fo

RE: Flink batch processing fault tolerance

2017-02-16 Thread Anton Solovev
Hi Aljoscha, Could you share your plans for resolving it? Best, Anton From: Aljoscha Krettek [mailto:aljos...@apache.org] Sent: Thursday, February 16, 2017 2:48 PM To: user@flink.apache.org Subject: Re: Flink batch processing fault tolerance Hi, yes, this is indeed true. We had some plans for

Re: Flink batch processing fault tolerance

2017-02-16 Thread Aljoscha Krettek
Hi, yes, this is indeed true. We had some plans for how to resolve this but they never materialised because of the focus on Stream Processing. We might unite the two in the future and then you will get fault-tolerant batch/stream processing in the same API. Best, Aljoscha On Wed, 15 Feb 2017 at 0

Re: Flink Batch Processing with Kafka

2016-08-03 Thread Prabhu V
If your environment is not kerberized (or if you can afford to restart the job every 7 days), a checkpoint-enabled Flink job with windowing and a count trigger would be ideal for your requirement. Check the APIs for Flink windows. I had something like this that worked stream.keyBy(0).countW
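A hedged reconstruction of the pipeline described above (count windows on a keyed stream with checkpointing enabled); the in-memory source, checkpoint interval, and window size are assumptions standing in for the real Kafka source and trigger settings:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CountWindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // checkpoint every minute (example value)

        // Stand-in for a Kafka source producing (key, value) pairs.
        DataStream<Tuple2<String, Long>> source = env.fromElements(
                Tuple2.of("a", 1L), Tuple2.of("a", 2L), Tuple2.of("b", 3L));

        source.keyBy(0)          // key by the first tuple field, as in the snippet above
              .countWindow(2)    // fire after N elements per key (small N for this toy example)
              .sum(1)            // aggregate the second field within each window
              .print();

        env.execute("count window sketch");
    }
}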