https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/time_attributes.html#event-time
On Sun, Feb 21, 2021 at 8:43 AM Aeden Jameson wrote:
> In my job graph viewed through the Flink UI I see a task named,
>
> rowtime field: (#11: event_time TIME ATTRIBUTE(ROWTIME))
>
> that
Hi,
Is there some way I can configure an operator based on the key in a
stream?
E.g.: if the key is 'abcd', create a window of size X counts; if the key
is 'bfgh', create a window of size Y counts.
Is this scenario possible in Flink?
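One way to sketch this (not the only approach — a custom window `Trigger` would also work) is a `KeyedProcessFunction` that buffers elements in state and fires when a key-specific count is reached. The class name, the keys, and the thresholds X/Y below are placeholders taken from the question:

```java
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

import java.util.ArrayList;
import java.util.List;

// Emits a batch of events per key once a key-specific count is reached.
public class PerKeyCountWindow extends KeyedProcessFunction<String, String, List<String>> {

    private transient ListState<String> buffer;

    @Override
    public void open(Configuration parameters) {
        buffer = getRuntimeContext().getListState(
            new ListStateDescriptor<>("buffer", String.class));
    }

    // Pick the count-window size based on the key; the numbers are made up.
    private int windowSizeFor(String key) {
        switch (key) {
            case "abcd": return 10;  // X
            case "bfgh": return 20;  // Y
            default:     return 15;
        }
    }

    @Override
    public void processElement(String value, Context ctx, Collector<List<String>> out)
            throws Exception {
        buffer.add(value);
        List<String> elements = new ArrayList<>();
        buffer.get().forEach(elements::add);
        // Fire the "window" once this key's threshold is reached, then reset.
        if (elements.size() >= windowSizeFor(ctx.getCurrentKey())) {
            out.collect(elements);
            buffer.clear();
        }
    }
}
```

You would apply it after a `keyBy(...)` on the same key the thresholds are defined for.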
In my job graph viewed through the Flink UI I see a task named,
rowtime field: (#11: event_time TIME ATTRIBUTE(ROWTIME))
that has an upstream Kafka source task. What exactly does the rowtime task do?
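As far as I understand, that operator materializes the declared event-time attribute: it copies the `event_time` column into each record's internal timestamp so downstream event-time operations can use it. A minimal sketch of the kind of DDL that produces it (table name, fields other than `event_time`, and the watermark interval are assumptions; connector options are omitted):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class RowtimeExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());

        // Declaring event_time as a rowtime attribute; the planner adds a
        // "rowtime field" operator that writes this column into the record's
        // internal timestamp for downstream windows and joins.
        tEnv.executeSql(
            "CREATE TABLE events (" +
            "  id BIGINT," +
            "  event_time TIMESTAMP(3)," +
            "  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND" +
            ") WITH (" +
            "  'connector' = 'kafka'" +
            // remaining connector options omitted
            ")");
    }
}
```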
--
Thank you,
Aeden
I have a table with two BIGINT fields for the start and end of an event as UNIX
time in milliseconds. I want to have a resulting column with the delta in
milliseconds and group by that difference. I also want to be able to use
aggregations with window functions based on the `end` field.
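One possible sketch: derive an event-time column from the BIGINT `end` field via a computed column, declare a watermark on it, then group by the delta inside a tumbling window. The table name, connector, and intervals are assumptions; note that `FROM_UNIXTIME` takes seconds, so the derived timestamp loses millisecond precision in this sketch (the delta itself keeps full precision):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DeltaGrouping {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());

        // `start`/`end` are the two BIGINT epoch-millis columns from the question.
        tEnv.executeSql(
            "CREATE TABLE events (" +
            "  `start` BIGINT," +
            "  `end` BIGINT," +
            "  end_ts AS TO_TIMESTAMP(FROM_UNIXTIME(`end` / 1000))," +
            "  WATERMARK FOR end_ts AS end_ts - INTERVAL '5' SECOND" +
            ") WITH ('connector' = 'datagen')");

        // Group by the millisecond delta within one-minute tumbling windows.
        // explain() only validates and plans the query, it does not execute it.
        System.out.println(tEnv.sqlQuery(
            "SELECT (`end` - `start`) AS delta_ms," +
            "       TUMBLE_START(end_ts, INTERVAL '1' MINUTE) AS w_start," +
            "       COUNT(*) AS cnt" +
            "  FROM events" +
            "  GROUP BY (`end` - `start`), TUMBLE(end_ts, INTERVAL '1' MINUTE)")
            .explain());
    }
}
```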
Adding "list" to verbs helps; do I need to add anything else?
From: Alexey Trenikhun
Sent: Saturday, February 20, 2021 2:10 PM
To: Flink User Mail List
Subject: stop job with Savepoint
Hello,
I'm running a per-job Flink cluster; the JM is deployed as a Kubernetes Job with
restartPolicy: Never, and high availability is KubernetesHaServicesFactory. The
job runs fine for some time, configmaps are created, etc. Now, in order to
upgrade the Flink job, I'm trying to stop the job with a savepoint (flink stop $J
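For reference, the stop-with-savepoint invocation in Flink's CLI (around 1.12) looks roughly like this; the job ID and savepoint directory below are placeholders:

```shell
# Stop the job and take a savepoint in one step.
./bin/flink stop --savepointPath /tmp/flink-savepoints <job-id>

# After upgrading, resume the new job from the reported savepoint path.
./bin/flink run --fromSavepoint /tmp/flink-savepoints/savepoint-xxxx <job-jar>
```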
Hello,
I launched a job with a larger load on a Hadoop YARN cluster.
The job finished after running 5 hours; I didn't find any error in the
JobManager log besides this connect exception.
*2021-02-20 13:20:14,110 WARN akka.remote.transport.netty.NettyTransport
- Remote connection
I'm using the latest Flink 1.12, and the timestamp precision is coming from
Debezium, which I think emits a standard ISO-8601 timestamp.
On Thu, 18 Feb 2021 at 16:19, Timo Walther wrote:
> Hi Sebastián,
>
> which Flink version are you using? And which precision do the timestamps
> have?
>
> This lo
I mean the SQL queries being validated when I do `mvn compile`, or any
target that runs that, so that basic syntax checking is performed without
having to submit the job to the cluster.
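This isn't true compile-time checking, but one sketch of the idea is to trigger Flink's SQL parsing and validation from a plain test that runs during the build, without submitting anything to a cluster. The table name, schema, and query below are assumptions:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SqlValidationCheck {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());

        // Register the schema the query needs; datagen avoids external systems.
        tEnv.executeSql(
            "CREATE TABLE input (id BIGINT, name STRING) " +
            "WITH ('connector' = 'datagen')");

        // Parsing and validation happen here: a syntax error or an unknown
        // column throws, failing the build before any job is submitted.
        tEnv.sqlQuery("SELECT id, name FROM input").explain();
    }
}
```

Run as a unit test, this fails `mvn test` on invalid SQL, which is close to the compile-time feedback loop described above.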
On Thu, 18 Feb 2021 at 16:17, Timo Walther wrote:
> Hi Sebastián,
>
> what do you consider as compile time? If y
Hello, *Jörn Franke*.
Thank you for your reply.
If I understand your answer correctly, Flink's watermark mechanism should
help me sort events in the order I need, so I should dig deeper into that issue.
For example, I read three topics, perform joins, and afterwards get two events
for the same user in this order:
Hi,
What is the way to run the code (from Eclipse or IntelliJ IDEA) so that it
shows up in the Apache Flink UI?
Thank you!
You are working in a distributed system, so event ordering by time may not be
sufficient (or most likely is not). Due to network delays, devices being offline,
etc., it can happen that an event arrives much later although it happened before.
Check watermarks in Flink and read on at least once, mostly once an
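A minimal sketch of the watermark mechanism mentioned above: bounded-out-of-orderness watermarks let event-time windows wait for late arrivals. The event type, its timestamp field, and the 10-second bound are all assumptions for illustration:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

import java.time.Duration;

public class WatermarkSketch {
    // Minimal event type for illustration.
    static class MyEvent {
        long timestampMillis;
    }

    public static void main(String[] args) {
        // Accept events up to 10 seconds out of order: the watermark trails
        // the maximum timestamp seen so far by that amount, so an event that
        // arrives late (but within the bound) is still assigned to the right
        // event-time window.
        WatermarkStrategy<MyEvent> strategy = WatermarkStrategy
            .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(10))
            .withTimestampAssigner((event, recordTs) -> event.timestampMillis);
    }
}
```

The strategy would be attached to the source via `assignTimestampsAndWatermarks(strategy)`.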