Hi,
I'm playing with a Flink 1.18 demo with the auto-scaler and the adaptive
scheduler.
The operator collects data correctly and orders the job to scale up, but it
takes the job several attempts to reach the required parallelism.
E.g., the original parallelism for each vertex is something like b
Hi Chen,
You should tell Flink which table to insert into with "INSERT INTO XXX SELECT XXX".
For a single non-insert query, Flink collects the output to the console
automatically, so you don't need to add an INSERT (though adding one also
works). But you must name the target table explicitly when you need to write
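For instance, a minimal sketch with the Table API (the tables and connectors
here are made-up placeholders):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class InsertIntoDemo {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Hypothetical tables, kept small with the datagen/print connectors.
        tEnv.executeSql(
            "CREATE TABLE orders (id INT, amount DOUBLE)"
                + " WITH ('connector' = 'datagen', 'number-of-rows' = '3')");
        tEnv.executeSql(
            "CREATE TABLE order_sink (id INT, amount DOUBLE)"
                + " WITH ('connector' = 'print')");

        // A plain SELECT needs no INSERT: the result is collected to the console.
        tEnv.executeSql("SELECT id, amount FROM orders").print();

        // Writing out requires naming the target table explicitly.
        tEnv.executeSql("INSERT INTO order_sink SELECT id, amount FROM orders").await();
    }
}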
Hi,
Using Flink 1.18 and Java 17, I am trying to read a text file from S3 using
env.readTextFile("s3://mybucket/folder1/file.txt"). When I run the app in
the IDE, I get the following error:
Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS
Credentials provided by DynamicTemp
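A minimal sketch of one way to supply credentials explicitly when running from
the IDE (the key values and bucket are placeholders, and it assumes the
flink-s3-fs-hadoop dependency is on the classpath rather than installed as a
plugin):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3ReadDemo {
    public static void main(String[] args) throws Exception {
        // Hand static credentials to the filesystem layer before any s3:// access.
        // Placeholder values; real setups usually keep these in flink-conf.yaml.
        Configuration fsConf = new Configuration();
        fsConf.setString("s3.access-key", "<ACCESS_KEY>");
        fsConf.setString("s3.secret-key", "<SECRET_KEY>");
        FileSystem.initialize(fsConf, null);

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.readTextFile("s3://mybucket/folder1/file.txt").print();
        env.execute("s3-read-demo");
    }
}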
Hi Elakiya,
You should use DML in the statement set instead of DQL.
Here is a simple example:
executeSql("CREATE TABLE source_table1 ..");
executeSql("CREATE TABLE source_table2 ..");
executeSql("CREATE TABLE sink_table1 ..");
executeSql("CREATE TABLE sink_table1 ..");
s
Hi Xuyang, Zhangao,
Thanks for your response, I have attached sample job files that I tried
with the StatementSet and with two queries. Please let me know if you are
able to point out where I am possibly going wrong.
Thanks,
Elakiya
On Wed, Dec 6, 2023 at 4:51 PM Xuyang wrote:
> Hi, Elakiya.
>
Hi, Elakiya.
Are you following the example here[1]? Could you attach a minimal, reproducible
SQL?
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/insert/
--
Best!
Xuyang
At 2023-12-06 17:49:17, "elakiya udhayanan" wrote:
Hi Team,
I would like t
Hi Elakiya,
You can try executing TableEnvironmentImpl#executeInternal for non-insert
statements, then using StatementSet.addInsertSql to add multiple insert
statements, and finally calling StatementSet#execute.
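A minimal sketch of that flow (the table names and DDL are placeholders):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;

public class TwoSinkDemo {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Placeholder DDL; a real job would use the Kafka connector options here.
        tEnv.executeSql(
            "CREATE TABLE source_table1 (id INT, v STRING)"
                + " WITH ('connector' = 'datagen', 'number-of-rows' = '3')");
        tEnv.executeSql("CREATE TABLE sink_table1 (id INT, v STRING) WITH ('connector' = 'print')");
        tEnv.executeSql("CREATE TABLE sink_table2 (id INT, v STRING) WITH ('connector' = 'print')");

        // Both INSERTs go into one StatementSet, so they run as a single job.
        StatementSet set = tEnv.createStatementSet();
        set.addInsertSql("INSERT INTO sink_table1 SELECT id, v FROM source_table1");
        set.addInsertSql("INSERT INTO sink_table2 SELECT id, v FROM source_table1");
        set.execute().await();
    }
}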
Best,
Zhanghao Chen
From: elakiya udhayanan
S
Hi Team,
I would like to know the possibility of having two sinks in a single Flink
job. In my case I am using the Flink SQL based job where I try to consume
from two different Kafka topics using the create table (as below) DDL and
then use a join condition to correlate them and at present write i
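A hedged sketch of the shape such a job can take (the schemas, topic names,
and connector options below are invented placeholders, not the actual DDL from
this thread):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KafkaJoinSketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        tEnv.executeSql(
            "CREATE TABLE topic_a (id INT, name STRING) WITH ("
                + " 'connector' = 'kafka', 'topic' = 'topic-a',"
                + " 'properties.bootstrap.servers' = 'localhost:9092',"
                + " 'properties.group.id' = 'demo',"
                + " 'scan.startup.mode' = 'earliest-offset', 'format' = 'json')");
        tEnv.executeSql(
            "CREATE TABLE topic_b (id INT, status STRING) WITH ("
                + " 'connector' = 'kafka', 'topic' = 'topic-b',"
                + " 'properties.bootstrap.servers' = 'localhost:9092',"
                + " 'properties.group.id' = 'demo',"
                + " 'scan.startup.mode' = 'earliest-offset', 'format' = 'json')");
        tEnv.executeSql(
            "CREATE TABLE joined_sink (id INT, name STRING, status STRING)"
                + " WITH ('connector' = 'print')");

        // The correlated result goes to one sink here; a second sink would be
        // attached via a StatementSet as in the replies above. await() blocks
        // while the unbounded streaming job runs.
        tEnv.executeSql(
            "INSERT INTO joined_sink"
                + " SELECT a.id, a.name, b.status"
                + " FROM topic_a a JOIN topic_b b ON a.id = b.id").await();
    }
}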
Hi Ethan,
Pekko is basically a fork of Akka before its license change, so the usage is
almost the same. From the exception posted, it looks like you are trying to
connect to a terminated dispatcher, which usually indicates some exceptions
on the JobManager side. You can try checking the JM lo
Never mind. The issue was fixed: the service account permissions were missing
the "patch" verb, which led to the RPC service not starting.
> On Dec 5, 2023, at 1:40 PM, Ethan T Yang wrote:
>
> Hi Flink users,
> After upgrading Flink (from 1.13.1 -> 1.18.0), I noticed an issue when
> HA is enabled.(