Thanks FG for your recommendations. Let me try that. Thanks for your time.
On Thursday, February 17, 2022, 04:40:07 AM EST, Francesco Guardiani wrote:
Hi,
The SQL syntax is not supported, as the SQL standard itself does not allow it.
It sounds strange that it fails at the validation phase …
Hi Folks:
I am using the 'kafka' connector and joining with data from a JDBC source (using
the 'jdbc' connector). I am using Flink v1.14.3. If I do a left outer join between
the Kafka source and the JDBC source, and try to save the result to another Kafka
sink using the connectors API, I get the following exception:
Exception i…
Hi Ananth,
The code freeze for 1.15 has already passed; you can refer to [1] for more details.
Regards,
Dian
[1] https://www.mail-archive.com/dev@flink.apache.org/msg54262.html
On Sun, Feb 20, 2022 at 1:51 AM Ananth Gundabattula <
agundabatt...@darwinium.com> wrote:
> Thanks a lot Wong.
>
>
>
> I was
Hello,
And if you want to go for a Deployment, consider using a StatefulSet
instead. That way you are certain that, on upgrade, the old version
exits before the new version starts.
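A minimal sketch of that suggestion, assuming a single JobManager pod; all names here (flink-jobmanager, the image tag) are purely illustrative. With a StatefulSet's default RollingUpdate strategy, the old pod is terminated before its replacement is created, so two versions never run at once:

```yaml
# Hypothetical example: single-replica StatefulSet for a Flink JobManager.
# Names and image tag are placeholders; adapt to your deployment.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: flink-jobmanager
spec:
  serviceName: flink-jobmanager
  replicas: 1
  selector:
    matchLabels:
      app: flink-jobmanager
  template:
    metadata:
      labels:
        app: flink-jobmanager
    spec:
      containers:
        - name: jobmanager
          image: flink:1.14.3   # match your Flink version
          args: ["jobmanager"]
```

A plain Deployment with the default RollingUpdate strategy starts the new pod before the old one is gone, which is exactly the overlap the advice above warns about.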
Greetings,
Frank
On 22.02.22 00:23, Austin Cawley-Edwards wrote:
Hey Marco,
There’s unfortunately …
Thank you, Yang. That was it! Specifying "--fromSavepoint" and
"--allowNonRestoredState" for "run-application" together did the trick.
I was a bit confused, because when you run "flink run-application --help",
it only tells you about the "--executor" and "--target" options. So I
assumed I should p…
We are developing a new feature for our Flink application that relies upon
joining multiple Kafka Streams and uses Flink State to handle joining
information asynchronously. Recently as the volume of data has been
growing, we've been noticing a couple of exceptions while trying to enable the
feature.
Hi Fabian,
Thanks for the response! I'll take a look at CsvReaderFormat.
Our team is interested in contributing to Parquet. However, our capacity
for the current sprint is fully committed to other workstreams. I'll put
this issue onto the backlog and see how it stacks up against our internal
priorities …
Hello Flink Community,
I’m looking for an interface to convert Avro records to Flink row.
To be more specific, I’m aware there is this interface that converts
serialized Avro records to Flink rows (
https://github.com/apache/flink/blob/master/flink-formats/flink-avro/src/main/java/org/apache/flin
I keep receiving this exception during the execution of a simple job that
receives time-series data via Kafka, transforms it into Avro format, and then
sends it into a Kafka topic consumed by Druid.
Any advice on how to resolve this type of error would be appreciated.
I'm using Apache Kafka …
Hi Yun,
The joined data is the versioned table in this case. I managed to get as
far as fixing all of the static errors, but the temporal join just doesn't
produce a result... no idea what's going on.
In reality I don't think we even want a temporal join; we just want to add
a few extra columns to …
Hi all,
I recently put up a question about a deduplication query related to a join
and realised that I was probably asking the wrong question. I'm using Flink
1.15-SNAPSHOT (97ddc39945cda9bf1f52ab159852fdb606201cf2) as we're using the
RabbitMQ connector with pyflink. We won't go to prod until 1.15
Config options set via the -D parameter should take effect. That is also
the recommended way, rather than the CLI options (e.g. --fromSavepoint).
This affects not only the K8s application mode; it also does not work for
the YARN application and YARN per-job modes.
I believe it is indeed a bug in the current implementation and h…
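The -D form recommended above can be sketched as follows; the savepoint path, cluster id, and jar path are placeholders. The keys `execution.savepoint.path` and `execution.savepoint.ignore-unrestored-state` are the configuration equivalents of `--fromSavepoint` and `--allowNonRestoredState`:

```shell
# Hypothetical invocation; all paths and the cluster id are placeholders.
flink run-application \
  -t kubernetes-application \
  -Dkubernetes.cluster-id=my-flink-cluster \
  -Dexecution.savepoint.path=s3://bucket/savepoints/savepoint-abc123 \
  -Dexecution.savepoint.ignore-unrestored-state=true \
  local:///opt/flink/usrlib/my-job.jar
```

Because these are ordinary configuration options, the same -D pairs should work unchanged for YARN application mode as well.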
Hello. I am looking for a way to expose Flink metrics via OpenTelemetry to
the GCP Cloud Monitoring dashboard.
Does anyone have experience with that?
If it is not directly possible, we thought about using Prometheus as a
middleware. If you have experience with that, I would appreciate any
guidance.
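For the Prometheus-as-middleware route, Flink ships a PrometheusReporter that can be enabled in flink-conf.yaml. A minimal sketch (the port range is arbitrary, and the flink-metrics-prometheus jar must be available to the cluster, e.g. under plugins/):

```yaml
# Hypothetical flink-conf.yaml fragment enabling the Prometheus reporter.
# Each TaskManager/JobManager picks a free port from the given range.
metrics.reporters: prom
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.port: 9250-9260
```

A Prometheus server scraping those ports could then be bridged to Cloud Monitoring; the bridge itself is outside Flink and is not sketched here.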