Hello everyone,
Right now I'm using Flink to sync from MySQL to Elasticsearch, and so far so good. If we insert, update, or delete, it syncs from MySQL to Elasticsearch without any problem.
The problem I have right now is that the application is not actually doing a hard delete on the records in MySQL, but
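For context, a setup like the one described above is usually wired together with Flink SQL along these lines. This is only a rough sketch; the table names, columns, and connection options are assumptions, not taken from the original message:
```
-- Hypothetical MySQL CDC source; schema and connection details are placeholders.
CREATE TABLE orders_mysql (
    id BIGINT,
    status STRING,
    updated_at TIMESTAMP(3),
    PRIMARY KEY (id) NOT ENFORCED
) WITH (
    'connector' = 'mysql-cdc',
    'hostname' = 'mysql-host',
    'port' = '3306',
    'username' = 'flink',
    'password' = '***',
    'database-name' = 'mydb',
    'table-name' = 'orders'
);

-- Hypothetical Elasticsearch sink keyed on the same primary key.
CREATE TABLE orders_es (
    id BIGINT,
    status STRING,
    updated_at TIMESTAMP(3),
    PRIMARY KEY (id) NOT ENFORCED
) WITH (
    'connector' = 'elasticsearch-7',
    'hosts' = 'http://es-host:9200',
    'index' = 'orders'
);

-- Inserts, updates, and deletes from the MySQL binlog are propagated to Elasticsearch.
INSERT INTO orders_es SELECT id, status, updated_at FROM orders_mysql;
```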
Hi everyone,
I noticed that a simple INNER JOIN in Flink SQL behaves non-deterministically. I'd like to understand whether this is expected and whether an issue has been created to address it.
In my example, I have the following query:
SELECT a.funder, a.amounts_added, r.amounts_removed FROM table_a AS a JOIN
Hi Sai,
If you use JdbcSink directly, you may not be able to catch this exception.
However, you can create your own SinkFunction that invokes the `invoke` method of the JdbcSink, catches the exception, and then invokes the DLQ sink.
As shown below,
```
public class SinkWrapper {
    private JdbcSink jdbcSink;
```
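For what it's worth, a minimal sketch of such a wrapper could look like the code below. This is purely illustrative: the generic record type, the `dlqSink` field, and the way the two delegate sinks are constructed are assumptions, not part of the original reply.
```
// Minimal sketch: wraps the SinkFunction returned by JdbcSink.sink(...) and
// routes records that fail in invoke() to a hypothetical dead-letter sink.
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class SinkWrapper<T> extends RichSinkFunction<T> {

    private final SinkFunction<T> jdbcSink; // e.g. the result of JdbcSink.sink(...)
    private final SinkFunction<T> dlqSink;  // hypothetical DLQ sink

    public SinkWrapper(SinkFunction<T> jdbcSink, SinkFunction<T> dlqSink) {
        this.jdbcSink = jdbcSink;
        this.dlqSink = dlqSink;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // Forward the runtime context and lifecycle calls to the wrapped sinks
        // if they are rich functions (the JDBC sink function is).
        if (jdbcSink instanceof RichSinkFunction) {
            ((RichSinkFunction<T>) jdbcSink).setRuntimeContext(getRuntimeContext());
            ((RichSinkFunction<T>) jdbcSink).open(parameters);
        }
        if (dlqSink instanceof RichSinkFunction) {
            ((RichSinkFunction<T>) dlqSink).setRuntimeContext(getRuntimeContext());
            ((RichSinkFunction<T>) dlqSink).open(parameters);
        }
    }

    @Override
    public void invoke(T value, Context context) throws Exception {
        try {
            jdbcSink.invoke(value, context);
        } catch (Exception e) {
            // Route the failing record to the dead-letter sink instead of failing the job.
            dlqSink.invoke(value, context);
        }
    }

    @Override
    public void close() throws Exception {
        if (jdbcSink instanceof RichSinkFunction) {
            ((RichSinkFunction<T>) jdbcSink).close();
        }
        if (dlqSink instanceof RichSinkFunction) {
            ((RichSinkFunction<T>) dlqSink).close();
        }
    }
}
```
One caveat: the JDBC sink batches writes internally, so an error that only surfaces when a batch is flushed (for example around a checkpoint) may not be raised inside invoke() and would then bypass this try/catch.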
CTEs are supported; you can see an example in the docs [1] [2]. The latter doc also says
> CTEs are supported in Views, CTAS and INSERT statement
So I'm just guessing here, but your SQL doesn't look right.
The CTE needs to return a column called `pod`, and the `FROM` clause for
the `SELECT
Hello,
I have been able to use queries with CTEs in this syntax:
INSERT INTO t1
WITH cte1 AS (SELECT
),
cte2 AS (SELECT
)
(SELECT
*
FROM
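Filled in, that skeleton would look roughly like the statement below; the table and column names are made up purely for illustration:
```
INSERT INTO t1
WITH cte1 AS (
    SELECT pod, amount_added FROM source_a
),
cte2 AS (
    SELECT pod, amount_removed FROM source_b
)
(SELECT
    cte1.pod,
    cte1.amount_added,
    cte2.amount_removed
FROM
    cte1
    JOIN cte2 ON cte1.pod = cte2.pod);
```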
That solved it!! Thank you so much!
On Thu, Oct 19, 2023 at 4:32 PM Mate Czagany wrote:
> Hello,
>
> Please look into using 'kubernetes.decorator.hadoop-conf-mount.enabled'
> [1] that was added for use cases where the user wishes to skip adding these
> Hadoop mount decorators. It's true by defau
Hello,
Please look into using 'kubernetes.decorator.hadoop-conf-mount.enabled' [1]
that was added for use cases where the user wishes to skip adding these
Hadoop mount decorators. It's true by default, but if you set it to false,
Flink won't add this mount.
[1]
https://nightlies.apache.org/flink/f
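For reference, turning the decorator off is just a matter of setting that key to false in the Flink configuration, for example in a flink-conf.yaml fragment like this (hypothetical placement; use wherever you keep your Flink configuration):
```
# Skip the Hadoop configuration mount decorators (the default is true).
kubernetes.decorator.hadoop-conf-mount.enabled: false
```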
Hi,
I've been using HDFS with Flink for checkpoint and savepoint storage, which works perfectly fine. Now I have another use case where I want to read from and write to HDFS from the application code as well. For this, I'm using the "pyarrow" library, which is already installed with PyFlink as a dependen
Thanks for the feedback.
We discussed with some devs and we are going to release 1.6.1 based on these patches in the next week or so.
Gyula
On Thu, Oct 19, 2023 at 9:44 AM Evgeniy Lyutikov wrote:
> Hi.
> I patched my copy of the 1.6.0 operator with edits from
> https://github.com/apache/fli
Hi.
I patched my copy of the 1.6.0 operator with edits from
https://github.com/apache/flink-kubernetes-operator/pull/673
This solved the problem
From: Tony Chen
Sent: October 19, 2023, 04:18:36
To: Evgeniy Lyutikov
Cc: user@flink.apache.org; Gyula Fór
Hi Team,
I have a Flink job which uses the upsert-kafka connector to consume events from two different Kafka topics (Confluent Avro serialized) and write them to two different tables (in Flink's memory, using Flink's SQL DDL statements).
I want to correlate them using the SQL join statemen
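A rough sketch of that kind of setup is shown below; the table names, schemas, and connector options are assumptions rather than details from the original job:
```
-- Hypothetical upsert-kafka source tables over Confluent-Avro-serialized topics.
CREATE TABLE orders (
    order_id STRING,
    amount DOUBLE,
    PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
    'connector' = 'upsert-kafka',
    'topic' = 'orders',
    'properties.bootstrap.servers' = 'kafka:9092',
    'key.format' = 'avro-confluent',
    'key.avro-confluent.url' = 'http://schema-registry:8081',
    'value.format' = 'avro-confluent',
    'value.avro-confluent.url' = 'http://schema-registry:8081'
);

CREATE TABLE shipments (
    order_id STRING,
    shipped_at TIMESTAMP(3),
    PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
    'connector' = 'upsert-kafka',
    'topic' = 'shipments',
    'properties.bootstrap.servers' = 'kafka:9092',
    'key.format' = 'avro-confluent',
    'key.avro-confluent.url' = 'http://schema-registry:8081',
    'value.format' = 'avro-confluent',
    'value.avro-confluent.url' = 'http://schema-registry:8081'
);

-- Correlate the two changelog streams with a regular join on the shared key.
SELECT o.order_id, o.amount, s.shipped_at
FROM orders AS o
JOIN shipments AS s ON o.order_id = s.order_id;
```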