Hi Devin,
I gave a talk called "Storing State Forever: Why It Can Be Good For Your
Analytics", which may be very relevant:
- https://www.youtube.com/watch?v=tiGxEGPyqCg
- https://www.slideshare.net/sap1ens/storing-state-forever-why-it-can-be-good-for-your-analytics
On Wed, May 25, 2022 at 8:04 PM
Hi,
I'm wondering if anyone could share whether they've tried using infinite joins
to join large amounts of state in real time. If so, how did it go? What was
the scale? Were there any "gotchas" or things that needed to be tuned?
We're considering trying this at scale, and I'd love to hear some words o
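For reference, here is a minimal sketch (placeholder table names, written
against the Table API) of the kind of regular, unbounded join I have in mind;
by default table.exec.state.ttl is 0, which means both join sides are kept in
state forever:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UnboundedJoinSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());

        // With the default table.exec.state.ttl of 0, this regular join keeps
        // every row of both inputs in state indefinitely; a non-zero TTL bounds
        // the state at the cost of possibly missing late matches.
        tEnv.getConfig().getConfiguration()
            .setString("table.exec.state.ttl", "0 ms"); // 0 = keep forever

        // 'orders' and 'payments' are assumed to be registered tables.
        tEnv.executeSql(
            "SELECT o.order_id, p.amount "
                + "FROM orders o JOIN payments p ON o.order_id = p.order_id");
    }
}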
Hi.
Could you tell us the version of Flink you are using? What's the version of
commons-collections:commons-collections:jar used when you compile the SQL, and
what's the version in the cluster? It's possible you compiled the SQL and
submitted it with a different version.
I am not sure how you submit your Flink
Hi.
It will also influence how Flink serializes/deserializes the RowData. For
example, Flink will build the TimestampDataSerializer with the precision
specified in the type. You can see that it only extracts the expected part to
serialize [1]. But for the char/varchar type, the serializer will not truncate
the str
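A minimal sketch (my own, not from the thread) of how the declared precision
shapes what TimestampDataSerializer writes; the values and buffer size are
arbitrary:

import org.apache.flink.core.memory.DataOutputSerializer;
import org.apache.flink.table.data.TimestampData;
import org.apache.flink.table.runtime.typeutils.TimestampDataSerializer;

public class TimestampPrecisionSketch {
    public static void main(String[] args) throws Exception {
        // 1 second past the epoch, plus 123,456 nanoseconds within that millisecond.
        TimestampData ts = TimestampData.fromEpochMillis(1000L, 123456);

        DataOutputSerializer out = new DataOutputSerializer(32);

        // TIMESTAMP(3) is "compact": only the millisecond part is written,
        // so the nano-of-millisecond component is dropped on the wire.
        new TimestampDataSerializer(3).serialize(ts, out);

        // TIMESTAMP(9) writes both the millisecond and nano-of-millisecond parts.
        new TimestampDataSerializer(9).serialize(ts, out);
    }
}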
Hi Flink Community
We are using Flink version 1.13.5 for our application, and every time the job
restarts, the Flink job metrics flatten after the restart.
For example, we are using lastCheckpointDuration; on 05/05 our job restarted,
and at the same time the checkpoint duration metric flattened
If the performance of a stateful function (FaaS) is very slow, how does this
impact performance on the Flink StateFun cluster?
I am trying to figure out what is too slow for a FaaS. I expect the Flink
StateFun cluster to receive about 2,000 events per minute, but some, not all,
FaaS might take
I have a Flink job that has been running with Flink 1.14.4 perfectly for a
few months.
I tried upgrading to Flink 1.15.0. There are no error messages or exceptions,
and it runs perfectly fine at first, but after a few hours the Flink app
starts to lag in processing an input Kafka topic. I can
Hi Himanshu,
The short answer is that you should configure Stateful Functions in your job.
Here is an example:
https://github.com/f1xmAn/era-locator/blob/34dc4f77539195876124fe604cf64c61ced4e5da/src/main/java/com/github/f1xman/era/StreamingJob.java#L68
Check out this article on Flink DataStream and
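For context, here is a minimal sketch of that DataStream-to-StateFun wiring;
the function type, egress id, and endpoint URI below are placeholders of mine,
not taken from the linked repository:

import java.net.URI;
import org.apache.flink.statefun.flink.datastream.RequestReplyFunctionBuilder;
import org.apache.flink.statefun.flink.datastream.RoutableMessage;
import org.apache.flink.statefun.flink.datastream.RoutableMessageBuilder;
import org.apache.flink.statefun.flink.datastream.StatefulFunctionDataStreamBuilder;
import org.apache.flink.statefun.flink.datastream.StatefulFunctionEgressStreams;
import org.apache.flink.statefun.sdk.FunctionType;
import org.apache.flink.statefun.sdk.io.EgressIdentifier;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateFunWiringSketch {
    static final FunctionType GREETER = new FunctionType("example", "greeter");
    static final EgressIdentifier<String> GREETINGS =
        new EgressIdentifier<>("example", "out", String.class);

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // Turn an ordinary stream into StateFun-routable messages.
        DataStream<RoutableMessage> names = env.fromElements("alice", "bob")
            .map(name -> RoutableMessageBuilder.builder()
                .withTargetAddress(GREETER, name)
                .withMessageBody(name)
                .build());

        // Bind the remote function endpoint and the egress inside the job.
        StatefulFunctionEgressStreams out =
            StatefulFunctionDataStreamBuilder.builder("example")
                .withDataStreamAsIngress(names)
                .withRequestReplyRemoteFunction(
                    RequestReplyFunctionBuilder.requestReplyFunctionBuilder(
                        GREETER, URI.create("http://localhost:8000/statefun")))
                .withEgressId(GREETINGS)
                .build(env);

        out.getDataStreamForEgressId(GREETINGS).print();
        env.execute("statefun-datastream-sketch");
    }
}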
Hi,
Thanks a lot for your clarifications. It makes perfect sense.
Mads
From: Gyula Fóra
Sent: 23 May 2022 15:39
To: Mads Ellersgaard Kalør
Cc: user@flink.apache.org
Subject: Re: Flink Kubernetes Operator: Specifying env variables from ConfigMaps
Hi Mads!
I th
IMO, the behavior depends on how you convert your string data from the
external system to Flink's internal data, or conversely.
I think it's more like a hint telling how to convert the string data to and
from external systems, including sources and sinks.
Best regards,
Yuxia
From: "Krzysztof Chmielewski"
I have figured this out.
It was because I put a flink-connector-tidb-cdc.jar in my Flink's lib folder
earlier, and it is shipped with Scala 2.11, while my Flink is shipped with
Scala 2.12.
Somehow when I submit a job with a GroupAggregate operator, it needs to load
keyed RocksDB states, and he
Glad to see you have resolved the issue!
If you want to learn more about the Source API, the Flink documentation [1]
has a detailed description of it. The original proposal, FLIP-27 [2], is also
a good reference.
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/dev/datastream/
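As a quick illustration (my own sketch, not taken from the docs above), a
FLIP-27 source such as KafkaSource is plugged in via env.fromSource; the
broker address and topic are placeholders:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class Flip27SourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // Build a FLIP-27 source; host and topic are placeholders.
        KafkaSource<String> source = KafkaSource.<String>builder()
            .setBootstrapServers("localhost:9092")
            .setTopics("input-topic")
            .setGroupId("demo")
            .setStartingOffsets(OffsetsInitializer.earliest())
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();

        // fromSource is the entry point for the new Source API.
        DataStream<String> stream =
            env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");

        stream.print();
        env.execute("flip-27-source-sketch");
    }
}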
Thank you Qingsheng, this context helps a lot!
And once again thank you all for being such a helpful community!
P.S. I actually struggled for a bit trying to understand why my refactored
solution, which accepts DataStream<>, wouldn't work ("no operators defined in
the streaming topology"). Turns ou
Hi,
some classes extending LogicalType, such as VarCharType, BinaryType,
CharType, and a few others, have an optional argument "length". If not
specified, the length is set to a default value, which is 1.
I would like to ask: what are the implications of that? What can happen if
I use the default length
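For reference, a minimal sketch (my own illustration) of the defaults in
question:

import org.apache.flink.table.types.logical.CharType;
import org.apache.flink.table.types.logical.VarCharType;

public class DefaultLengthSketch {
    public static void main(String[] args) {
        // The no-argument constructors fall back to a default length of 1.
        System.out.println(new VarCharType().asSummaryString()); // VARCHAR(1)
        System.out.println(new CharType().asSummaryString());    // CHAR(1)

        // An explicit length avoids the surprise.
        System.out.println(new VarCharType(255).asSummaryString()); // VARCHAR(255)
    }
}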
Hi dear engineers,
Recently I encountered another issue: after I submitted a Flink SQL job, it
threw an exception:
Caused by: java.lang.ClassCastException: cannot assign instance of
org.apache.commons.collections.map.LinkedMap to field
org.apache.flink.streaming.connectors.kafka.FlinkKafka