Re: [SURVEY] Is the default restart delay of 0s causing problems?

2019-09-03 Thread Till Rohrmann
Thanks everyone for the input again. I'll then conclude this survey thread and start a discuss thread about setting the default restart delay to 1s. @Arvid, I agree that better documentation on how to tune Flink with sane settings for certain scenarios would be super helpful. However, as you've said, it is somew
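
[Editor's note: in Flink the delay under discussion belongs to the fixed-delay restart strategy. A minimal flink-conf.yaml sketch of the relevant keys, using the 1s value proposed above; the attempts value is illustrative:]

    restart-strategy: fixed-delay
    # illustrative retry budget before the job is declared failed
    restart-strategy.fixed-delay.attempts: 3
    # the default under discussion: the survey proposes 1 s instead of 0 s
    restart-strategy.fixed-delay.delay: 1 s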

Re: [SURVEY] Is the default restart delay of 0s causing problems?

2019-09-03 Thread Till Rohrmann
The FLIP-62 discuss thread can be found here [1]. [1] https://lists.apache.org/thread.html/9602b342602a0181fcb618581f3b12e692ed2fad98c59fd6c1caeabd@%3Cdev.flink.apache.org%3E Cheers, Till On Tue, Sep 3, 2019 at 11:13 AM Till Rohrmann wrote: > Thanks everyone for the input again. I'll then conc

error in my job

2019-09-03 Thread yuvraj singh
Hi all, I am facing a problem in my Flink job; I am getting the following exception: 2019-09-03 12:02:04,278 ERROR org.apache.flink.runtime.io.network.netty.PartitionRequestQueue - Encountered error while consuming partitions java.io.IOException: Connection reset by peer at sun.nio.ch.FileDi
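
[Editor's note: no reply appears in this digest. As general context, this error in PartitionRequestQueue typically means the remote TaskManager producing the requested partition died or stalled. A hedged flink-conf.yaml sketch of settings often examined in such cases; the values are illustrative, not a fix:]

    # tolerate longer GC pauses before a TaskManager is declared dead (ms)
    heartbeat.timeout: 50000
    # allow netty clients more time to (re)connect to remote partitions (s)
    taskmanager.network.netty.client.connectTimeoutSec: 120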

Re:question

2019-09-03 Thread JingsongLee
It should be schema.field("msg", Types.ROW(...)). And you should select msg.f1 from the table. Best, Jingsong Lee (sent from Alibaba Mail for iPhone) -- Original Mail -- From: 圣眼之翼 <2463...@qq.com> Date: 2019-09-03 09:22:41 Recipient: user Subject: question How do you do: My problem is flink t
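
[Editor's note: a minimal sketch of the suggested declaration using the 1.9-era Table API descriptors, assuming the id/serial/msg.f1 JSON layout shown later in this thread; field types are illustrative:]

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.table.api.Types;
    import org.apache.flink.table.descriptors.Schema;

    Schema schema = new Schema()
        .field("id", Types.STRING())
        .field("serial", Types.STRING())
        // the nested JSON object maps to a ROW with a named field f1
        .field("msg", Types.ROW(
            new String[]{"f1"},
            new TypeInformation<?>[]{Types.STRING()}));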

Re: question

2019-09-03 Thread 圣眼之翼
Thank you! Let me try. -- Original Mail -- From: "JingsongLee"; Date: 2019-09-03 (Tue) 7:53 PM; To: "圣眼之翼"<2463...@qq.com>; "user"; Subject: Re: question. It should be schema.field("msg", Types.ROW(...)). And you should select msg.f1 from table. Best

Re: [ANNOUNCE] Kinesis connector becomes part of Flink releases

2019-09-03 Thread vino yang
Good news! Thanks for your efforts, Bowen! Best, Vino. Yu Li wrote on Mon, Sep 2, 2019 at 6:04 AM: > Great to know, thanks for the efforts Bowen! > > And I believe it is worth a release note in the original JIRA, wdyt? Thanks. > > Best Regards, > Yu > > > On Sat, 31 Aug 2019 at 11:01, Bowen Li wrote: > >> Hi all

Re: [SURVEY] How do you use high-availability services in Flink?

2019-09-03 Thread Aleksandar Mastilovic
Hi Zili, sorry for the late reply; we had a holiday here in the US. We are using high-availability.storageDir, but only for the blob store; job graphs, checkpoints, and checkpoint IDs are stored in MapDB. > On Aug 28, 2019, at 7:48 PM, Zili Chen wrote: > > Thanks for your email Alek
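
[Editor's note: for contrast, a minimal sketch of the stock ZooKeeper-based HA configuration, under which job graphs and completed-checkpoint metadata would live beneath the storage dir rather than in MapDB; quorum and path are illustrative:]

    high-availability: zookeeper
    high-availability.zookeeper.quorum: zk1:2181,zk2:2181
    # with stock HA, job graphs and checkpoint metadata are stored here as well
    high-availability.storageDir: hdfs:///flink/ha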

Re: question

2019-09-03 Thread 圣眼之翼
I want to output the query results to Kafka; the JSON format is as follows: { "id": "123", "serial": "6b0c2d26", "msg": { "f1": "5677" } } How do I define the format and schema of the Kafka sink? Thanks! -- Original Mail -- From: ""<2463...@qq.com>;
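
[Editor's note: no complete answer appears in this digest. A hedged sketch of one way to declare such a sink with the 1.9-era connect/descriptor API; topic, servers, and update mode are illustrative assumptions:]

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.table.api.Types;
    import org.apache.flink.table.descriptors.Json;
    import org.apache.flink.table.descriptors.Kafka;
    import org.apache.flink.table.descriptors.Schema;

    tableEnv.connect(new Kafka()
            .version("universal")
            .topic("output-topic")                          // illustrative topic
            .property("bootstrap.servers", "broker:9092"))  // illustrative servers
        .withFormat(new Json().deriveSchema())
        .withSchema(new Schema()
            .field("id", Types.STRING())
            .field("serial", Types.STRING())
            // nested JSON object declared as a ROW, matching the msg.f1 layout above
            .field("msg", Types.ROW(
                new String[]{"f1"},
                new TypeInformation<?>[]{Types.STRING()})))
        .inAppendMode()
        .registerTableSink("kafka_sink");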

Re: Kafka consumer behavior with same group Id, 2 instances of apps in separate cluster?

2019-09-03 Thread Ashish Pokharel
Thanks Becket, sorry for the delayed response. That's what I thought as well. I built a hacky custom source today directly using the Kafka client, which was able to join a consumer group etc. It works as I expected, but I am not sure about production readiness for something like that :) The need arises beca
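
[Editor's note: a minimal sketch of the kind of "hacky custom source" described above: a SourceFunction that joins a Kafka consumer group via subscribe(), so two clusters sharing the group id split partitions between them. Topic, group id, and servers are illustrative; checkpointing and offset management are omitted, which is exactly the production-readiness gap mentioned:]

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.flink.streaming.api.functions.source.SourceFunction;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class GroupJoiningSource implements SourceFunction<String> {
        private volatile boolean running = true;

        @Override
        public void run(SourceContext<String> ctx) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker:9092");    // illustrative
            props.put("group.id", "shared-across-clusters");  // same id in both clusters
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe() (not assign()) joins the consumer group, so the broker
                // balances partitions across both clusters' source instances
                consumer.subscribe(Collections.singletonList("events"));
                while (running) {
                    ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> r : records) {
                        ctx.collect(r.value());
                    }
                }
            }
        }

        @Override
        public void cancel() { running = false; }
    }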

Re: Kafka consumer behavior with same group Id, 2 instances of apps in separate cluster?

2019-09-03 Thread Becket Qin
Thanks for the explanation, Ashish. Glad you made it work with a custom source. I guess your application is probably stateless. If so, another option might be a geo-distributed Flink deployment; that means there would be TMs in different datacenters forming a single Flink cluster. This will also

Re: question

2019-09-03 Thread 圣眼之翼
I have found a way: select row(msg.f1) from table. -- Original Mail -- From: ""<2463...@qq.com>; Date: 2019-09-04 (Wed) 10:57 AM; To: ""<2463...@qq.com>; "JingsongLee"; "user"; Subject: Re: question. I want to output the quer

How to calculate one day's uv every minute by SQL

2019-09-03 Thread 刘建刚
We want to calculate one day's UV (unique visitors) and show the result every minute. We have implemented this in Java code: dataStream.keyBy(dimension) .incrementWindow(Time.days(1), Time.minutes(1)) .uv(userId) The input data is big, so we use Va
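
[Editor's note: no answer is quoted in this digest. One hedged way to express the requirement in Flink SQL is a HOP window with one-day size sliding every minute, plus a distinct count; table and column names are illustrative, and a day-sized hop with distinct aggregation can be expensive on large input:]

    SELECT
      dimension,
      HOP_END(rowtime, INTERVAL '1' MINUTE, INTERVAL '1' DAY) AS window_end,
      COUNT(DISTINCT userId) AS uv
    FROM events
    GROUP BY
      dimension,
      HOP(rowtime, INTERVAL '1' MINUTE, INTERVAL '1' DAY);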