Hi, Вова.
Junrui is right. As far as I know, every time a SQL query is re-executed, Flink
will regenerate the plan, generate the JobGraph,
and execute the job again. There is no cache to speed up this process. The state
backend is used when your job is stopped
and you want to continue running from the state it had before.
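For example, in the SQL Client you can point the next run of the same statement
at a savepoint taken when the job was stopped (just a sketch; the table names and
the savepoint path below are made up, not from this thread):
```
-- Restore operator state from an earlier stop-with-savepoint (path is hypothetical)
SET 'execution.savepoint.path' = 'hdfs:///flink/savepoints/savepoint-ab12cd';

-- Re-submitting the statement now resumes from that state instead of starting fresh
INSERT INTO sink_table
SELECT user_id, COUNT(*) FROM source_table GROUP BY user_id;
```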
Hi Вова
In Flink, there is no built-in mechanism for caching SQL query results;
every query execution is independent, and results are not stored for future
queries. The StateBackend's role is to maintain operator state within
jobs, such as aggregations or windowing, which is critical for ensuring
fault tolerance.
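As a rough illustration of that role (a sketch only; the RocksDB backend choice
and the checkpoint directory are assumptions, not taken from your setup):
```
-- Where operator state (aggregations, windows, joins) is kept while the job runs
SET 'state.backend.type' = 'rocksdb';
-- Periodic checkpoints make that state recoverable after a failure
SET 'execution.checkpointing.interval' = '1min';
SET 'state.checkpoints.dir' = 'hdfs:///flink/checkpoints';
```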
Hi, Tamir.
This is expected behavior. flink-connector-base is already included
in flink-dist, so we do not package it in the externalized connectors.
You can see this issue[1] for more details.
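If your own code compiles against classes from flink-connector-base (as with the
custom ElementConverter mentioned in this thread), one option is to declare the
dependency yourself; a sketch assuming a Maven build, with provided scope since
flink-dist already ships those classes at runtime (the version shown is just the
1.18 release as an example):
```
<!-- provided: the cluster's flink-dist already contains these classes -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-base</artifactId>
  <version>1.18.0</version>
  <scope>provided</scope>
</dependency>
```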
Best,
Hang
[1] https://issues.apache.org/jira/browse/FLINK-30400?filter=-1
Tamir Sagi wrote:
Hi community,
I'm working with a Flink cluster in YARN application mode, which is
authenticated by Kerberos.
It works well with the flink run and flink list commands, as follows:
./bin/flink run-application -t yarn-application
> ./examples/streaming/TopSpeedWindowing.jar
> ./bin/flink list -t yarn-application
This worked perfectly Xuyang, nice :)
Thanks!
On Thu, Jan 11, 2024 at 12:52 PM Xuyang wrote:
> Hi, Gyula.
> If you want Flink to fill the unspecified columns with NULL, you can try
> SQL like the following:
> ```
> INSERT INTO Sink(a) SELECT a from Source
> ```
>
>
> --
> Best!
> Xuyang
Hi
I updated the dynamodb connector to 4.2.0-1.18, but it does not provide the
flink-connector-base dependency, whereas 4.1.0-1.17 does.[1]
It appears in its pom only as a test-jar in scope test.
I'm working with a custom
org.apache.flink.connector.base.sink.writer.ElementConverter which ...
Hi, Gyula.
If you want Flink to fill the unspecified columns with NULL, you can try SQL
like the following:
```
INSERT INTO Sink(a) SELECT a from Source
```
--
Best!
Xuyang
On 2024-01-11 16:10:47, "Giannis Polyzos" wrote:
Hi Gyula,
to the best of my knowledge, this is not feasible and you will have to do
something like *CAST(NULL AS STRING)* to insert null values manually.
Hi Everyone,
I'm currently looking to understand the caching mechanism in Apache Flink
in general. As part of this exploration, I have a few questions related to
how Flink handles data caching, both in the context of SQL queries and more
broadly.
When I send a SQL query, for example to PostgreSQL, ...
Hi Gyula,
to the best of my knowledge, this is not feasible and you will have to do
something like *CAST(NULL AS STRING)* to insert null values manually.
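Spelled out as a complete statement (the extra column b and its STRING type are
invented here purely for illustration):
```
-- b does not exist in Source, so a NULL of the matching type is written explicitly
INSERT INTO Sink SELECT a, CAST(NULL AS STRING) AS b FROM Source;
```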
Best,
Giannis
On Thu, Jan 11, 2024 at 9:58 AM Gyula Fóra wrote:
> Hi All!
>
> Is it possible to insert into a table without specifying all columns?