Hi Leonard,
Wow, that's great! It works like a charm.
I've never considered this approach at all.
Thanks a lot.
Best,
Dongwon
On Mon, Jul 6, 2020 at 11:26 AM Leonard Xu wrote:
> Hi, Kim
>
> The reason your attempts (2) and (3) failed is that the JSON format does
> not support converting a BIGINT to TIMESTAMP
Hi, everyone!
When I use Flink 1.10 to define a table, I want to declare a JSON array field
as the STRING type, but the query result is null when I execute the program.
The detailed code is as follows:
package com.flink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import
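For context, a rough sketch of the kind of table definition this question seems to describe; the class name, field names, and connector options are assumptions for illustration, and the snippet only reproduces the reported setup (a JSON array column declared as STRING), not a fix:

package com.flink;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class JsonArrayAsString {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        EnvironmentSettings settings =
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

        // "tags" arrives as a JSON array in the source records but is declared as STRING,
        // which is the setup the question reports returning null.
        tEnv.sqlUpdate(
                "CREATE TABLE source_table (" +
                "  id BIGINT," +
                "  tags STRING" +
                ") WITH (" +
                "  'connector.type' = 'kafka'," +
                "  'connector.version' = 'universal'," +
                "  'connector.topic' = 'input-topic'," +
                "  'connector.properties.bootstrap.servers' = 'localhost:9092'," +
                "  'format.type' = 'json'" +
                ")");

        tEnv.toAppendStream(tEnv.sqlQuery("SELECT id, tags FROM source_table"), Row.class).print();
        env.execute("json-array-as-string");
    }
}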
Hi Kostas,
I'm confused about the implementation of the temporary workaround. Would it
be possible to get a little more detail?
> you could simply not stop the job after the whole input is processed
How does one determine when the job has processed the whole input?
> then wait until the outp
In a slightly different variation of sequence (checkpoint x, savepoint y,
redeploy/restart job from savepoint y, checkpoint x+1), checkpoint x+1
builds the incremental diff on savepoint y, right?
On Sun, Jul 5, 2020 at 8:08 PM Steven Wu wrote:
>
> In this sequence of (checkpoint x, savepoint y, checkpoint x+1), does
> checkpoint x+1 build the incremental diff based on checkpoint x or
> savepoint y?
In this sequence of (checkpoint x, savepoint y, checkpoint x+1), does
checkpoint x+1 build the incremental diff based on checkpoint x or
savepoint y?
Thanks,
Steven
Hi, Kim
The reason your attempts (2) and (3) failed is that the JSON format does not
support converting a BIGINT to TIMESTAMP. You can first define the BIGINT field
and then use a computed column to extract a TIMESTAMP field; you can also define
the time attribute on the TIMESTAMP field for time-based operations.
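A minimal sketch of that pattern in Flink SQL DDL, issued from Java; the table name, field names, watermark interval, and connector options are assumptions for illustration rather than anything from the original thread:

// Assuming a StreamTableEnvironment tEnv created as in the earlier snippet, and that
// the incoming JSON carries the event time as epoch milliseconds in a BIGINT field.
tEnv.sqlUpdate(
        "CREATE TABLE events (" +
        "  user_id STRING," +
        "  event_ts BIGINT," +                                            // raw epoch millis, kept as BIGINT
        "  row_time AS TO_TIMESTAMP(FROM_UNIXTIME(event_ts / 1000))," +   // computed column: BIGINT -> TIMESTAMP(3)
        "  WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND" +    // declares row_time as the event-time attribute
        ") WITH (" +
        "  'connector.type' = 'kafka'," +
        "  'connector.version' = 'universal'," +
        "  'connector.topic' = 'events'," +
        "  'connector.properties.bootstrap.servers' = 'localhost:9092'," +
        "  'format.type' = 'json'" +
        ")");

With that in place, row_time can be used in windows and other time-based operations, while event_ts stays queryable as a plain BIGINT.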
As I already mentioned,
> I would suggest looking into the jobmanager logs and gc logs, and see
> whether there is any problem that prevents the process from handling the
> rpc messages in a timely manner.
>
The Akka ask timeout does not seem to be the root problem to me.
Thank you~
Xintong Song
On Sat, Jul 4, 2020
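If it helps with the log check suggested above, GC logs for the JobManager can be enabled through the JVM options in flink-conf.yaml; the flags below are a sketch assuming a JDK 8 standalone deployment, and the log path is made up:

# flink-conf.yaml (sketch): pass GC-logging flags to the JobManager JVM
env.java.opts.jobmanager: "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/jobmanager-gc.log"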
Hi Benchao
The capacity is 100
Parallelism is 8
The RPC request time is 20 ms
Thanks
On Sun, 5 Jul 2020, 6:16 Benchao Li wrote:
> Hi Mark,
>
> Could you give more details about your Flink job?
> - the capacity of AsyncDataStream
> - the parallelism of the AsyncDataStream operator
> - the time per blocked request
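For reference, a rough sketch of where those three numbers typically show up in code; the async function body, the timeout value, and all names below are assumptions for illustration, not the actual job from this thread:

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncEnrichment {

    // Hypothetical async function standing in for a ~20 ms RPC call per record.
    public static class RpcLookup extends RichAsyncFunction<String, String> {
        @Override
        public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
            CompletableFuture
                    .supplyAsync(() -> key + "-enriched")
                    .thenAccept(value -> resultFuture.complete(Collections.singleton(value)));
        }
    }

    public static DataStream<String> enrich(DataStream<String> input) {
        // capacity = 100 in-flight requests and parallelism = 8, the values given above;
        // the 1-second timeout is an assumption.
        return AsyncDataStream
                .unorderedWait(input, new RpcLookup(), 1, TimeUnit.SECONDS, 100)
                .setParallelism(8);
    }
}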
Hi SmileSmile,
As for the OOM problem, maybe you can try to get a memory dump before the OOM
occurs; once you have the dump, you can see which part consumes more memory
than expected.
Best,
Congxian
On Fri, Jul 3, 2020 at 3:04 PM Yun Tang wrote:
> Hi
>
> If you do not enable checkpointing, and have you ever restored a checkpoint
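To make sure a dump is also there if the OOM does happen, the standard JVM flag can be set through flink-conf.yaml; the snippet below is a sketch with an assumed dump path, targeting the TaskManager:

# flink-conf.yaml (sketch): write a heap dump when the TaskManager JVM hits an OutOfMemoryError
env.java.opts.taskmanager: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/taskmanager-heap.hprof"

For a dump taken before the OOM, as suggested above, running jmap -dump:format=b,file=/tmp/heap.hprof <pid> against the running TaskManager process also works; either dump can then be inspected in a heap analyzer such as Eclipse MAT.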
Hi
First, could you please check whether this problem is still there if you use
Flink 1.10 or 1.11?
It seems strange. From the error message, there is an error when trying to
convert a non-Window state (VoidNamespace) to a Window state (the serializer is
the serializer of the Window state, but the state is a non-Window state