Thanks everyone for the input again. I'll then conclude this survey thread
and start a discuss thread to set the default restart delay to 1s.
@Arvid, I agree that better documentation on how to tune Flink with sane
settings for certain scenarios would be super helpful. However, as you've said,
it is somew
The FLIP-62 discuss thread can be found here [1].
[1]
https://lists.apache.org/thread.html/9602b342602a0181fcb618581f3b12e692ed2fad98c59fd6c1caeabd@%3Cdev.flink.apache.org%3E
Cheers,
Till
On Tue, Sep 3, 2019 at 11:13 AM Till Rohrmann wrote:
> Thanks everyone for the input again. I'll then conclude this survey thread
> and start a discuss thread to set the default restart delay to 1s.
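For context, a minimal sketch of what the proposed default corresponds to when set explicitly per job with the DataStream API; the attempt count of 3 is just a placeholder, not part of the proposal.

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RestartDelaySketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Fixed-delay restart strategy with a 1 s delay between restart attempts,
        // i.e. the value FLIP-62 proposes as the new default delay.
        env.setRestartStrategy(
                RestartStrategies.fixedDelayRestart(3, Time.seconds(1)));
        // ... build the job and call env.execute(...)
    }
}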
Hi all,
I am facing a problem in my Flink job; I am getting the exception below:
2019-09-03 12:02:04,278 ERROR
org.apache.flink.runtime.io.network.netty.PartitionRequestQueue -
Encountered error while consuming partitions
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDi
It should be schema.field("msg", Types.ROW(...)), and you should select msg.f1
from the table.
Best
Jingsong Lee
Sent from Alibaba Mail for iPhone
-- Original Mail --
From: 圣眼之翼 <2463...@qq.com>
Date: 2019-09-03 09:22:41
Recipient: user
Subject: question
Hello:
My problem is Flink t
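A sketch of what Jingsong's suggestion could look like with the Flink 1.9-era descriptor API, assuming a nested JSON object "msg" with a string field "f1" as in the question; the "id" field and class name are placeholders. The nested field is then read as msg.f1 in SQL.

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.table.api.Types;
import org.apache.flink.table.descriptors.Schema;

public class NestedSchemaSketch {
    // Builds a table schema where the nested JSON object "msg" is declared as a ROW
    // with a single string field "f1"; the "id" field is a placeholder.
    public static Schema buildSchema() {
        return new Schema()
                .field("id", Types.STRING())
                .field("msg", Types.ROW(
                        new String[] {"f1"},
                        new TypeInformation<?>[] {Types.STRING()}));
    }
}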
Thank you!
Let me try!
-- Original Mail --
From: "JingsongLee";
Date: Sep 3, 2019 (Tue) 7:53
To: <2463...@qq.com>; "user";
Subject: Re: question
It should be schema.field("msg", Types.ROW(...)), and you should select msg.f1
from the table.
Best
Good news! Thanks for your efforts, Bowen!
Best,
Vino
Yu Li wrote on Mon, Sep 2, 2019 at 6:04 AM:
> Great to know, thanks for the efforts Bowen!
>
> And I believe it's worth a release note in the original JIRA, wdyt? Thanks.
>
> Best Regards,
> Yu
>
>
> On Sat, 31 Aug 2019 at 11:01, Bowen Li wrote:
>
>> Hi all
Hi Zili,
Sorry for replying late; we had a holiday here in the US.
We are using high-availability.storageDir, but only for the blob store;
job graphs, checkpoints, and checkpoint IDs are stored in MapDB.
> On Aug 28, 2019, at 7:48 PM, Zili Chen wrote:
>
> Thanks for your email Alek
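For comparison, a sketch of the stock ZooKeeper HA settings, where high-availability.storageDir holds submitted job graphs and completed-checkpoint metadata (with only handles kept in ZooKeeper) rather than just blobs as in the setup described above; the quorum address and storage path are placeholders.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.HighAvailabilityOptions;

public class HaConfigSketch {
    // Programmatic equivalent of the usual flink-conf.yaml HA entries; all values are placeholders.
    public static Configuration build() {
        Configuration conf = new Configuration();
        conf.setString(HighAvailabilityOptions.HA_MODE, "zookeeper");
        conf.setString(HighAvailabilityOptions.HA_ZOOKEEPER_QUORUM, "zk-1:2181,zk-2:2181");
        conf.setString(HighAvailabilityOptions.HA_STORAGE_PATH, "hdfs:///flink/ha");
        return conf;
    }
}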
I want to output the query results to Kafka; the JSON format is as follows:
{
"id": "123",
"serial": "6b0c2d26",
"msg": {
"f1": "5677"
}
}
How do I define the format and schema of the Kafka sink?
thanks!
-- Original Mail --
From: <2463...@qq.com>;
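For reference, a minimal sketch of how such a sink could be declared, assuming the present-day Flink SQL Kafka connector and JSON format (the 1.9-era descriptor properties differ); the table name, topic, and broker address are placeholders. The nested "msg" object maps to a ROW<f1 STRING> column.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KafkaJsonSinkSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // The nested JSON object "msg" is modeled as ROW<f1 STRING>.
        tEnv.executeSql(
                "CREATE TABLE kafka_sink (" +
                "  id STRING," +
                "  `serial` STRING," +
                "  msg ROW<f1 STRING>" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'output-topic'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'format' = 'json'" +
                ")");
    }
}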
Thanks Becket,
Sorry for the delayed response. That's what I thought as well. I built a hacky
custom source today directly using the Kafka client, which was able to join a
consumer group etc. It works as I expected, but I'm not sure about production
readiness for something like that :)
The need arises beca
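A minimal sketch, not Ashish's actual code, of the kind of hand-rolled source described above: a SourceFunction wrapping a plain KafkaConsumer so that subscribe() lets it join a consumer group. Topic, group id, and broker address are placeholders, and there is no integration with Flink checkpointing or offset management, which is exactly where the production-readiness concern comes from.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupJoiningKafkaSource implements SourceFunction<String> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-shared-group");         // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() (rather than assign()) lets the broker balance partitions
            // across all members of the consumer group.
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (running) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    synchronized (ctx.getCheckpointLock()) {
                        ctx.collect(record.value());
                    }
                }
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}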
Thanks for the explanation, Ashish. Glad you made it work with a custom source.
I guess your application is probably stateless. If so, another option might
be a geo-distributed Flink deployment. That means there will be TMs
in different datacenters forming a single Flink cluster. This will also
I have found a way:
select row(msg.f1) from table.
-- Original Mail --
From: <2463...@qq.com>;
Date: Sep 4, 2019 (Wed) 10:57
To: <2463...@qq.com>; "JingsongLee"; "user";
Subject: Re: question
I want to output the query results to Kafka
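A sketch of the full statement the fix above could slot into, assuming the kafka_sink table from the earlier sketch and an already-registered source_table with matching columns; all names are placeholders and this simply mirrors the row(msg.f1) projection the poster found.

import org.apache.flink.table.api.TableEnvironment;

public class RowProjectionSketch {
    // Assumes kafka_sink (id, serial, msg ROW<f1 STRING>) and source_table already
    // exist in tEnv's catalog; table and column names are placeholders.
    public static void writeToSink(TableEnvironment tEnv) {
        tEnv.executeSql(
                "INSERT INTO kafka_sink " +
                "SELECT id, `serial`, ROW(msg.f1) AS msg " +
                "FROM source_table");
    }
}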
We want to calculate one day's UV (unique visitors) and show the result every minute.
We have implemented this with the following Java code:
dataStream.keyBy(dimension)
.incrementWindow(Time.days(1), Time.minutes(1))
.uv(userId)
The input data is big. So we use Va
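incrementWindow and uv above look like in-house helpers rather than stock API; assuming that, here is a sketch of one common way to express the same pattern with the built-in operators: a one-day window whose partial result is re-fired every minute, plus a naive distinct-count aggregate. Event, its accessors, and the HashSet accumulator are placeholders; holding every user id in memory is the part that becomes problematic for large input.

import java.util.HashSet;
import java.util.Set;
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.triggers.ContinuousProcessingTimeTrigger;

public class UvSketch {

    public static SingleOutputStreamOperator<Long> uvPerDimension(DataStream<Event> events) {
        return events
                .keyBy(Event::getDimension)
                .window(TumblingProcessingTimeWindows.of(Time.days(1)))
                // re-fire the (still open) daily window every minute
                .trigger(ContinuousProcessingTimeTrigger.of(Time.minutes(1)))
                .aggregate(new DistinctCount());
    }

    // Naive distinct count: keeps every user id seen so far in the accumulator.
    public static class DistinctCount implements AggregateFunction<Event, Set<String>, Long> {
        @Override
        public Set<String> createAccumulator() {
            return new HashSet<>();
        }

        @Override
        public Set<String> add(Event value, Set<String> acc) {
            acc.add(value.getUserId());
            return acc;
        }

        @Override
        public Long getResult(Set<String> acc) {
            return (long) acc.size();
        }

        @Override
        public Set<String> merge(Set<String> a, Set<String> b) {
            a.addAll(b);
            return a;
        }
    }

    // Placeholder event type standing in for whatever the real records look like.
    public static class Event {
        private String dimension;
        private String userId;

        public String getDimension() { return dimension; }
        public String getUserId() { return userId; }
    }
}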