Zhanghao Chen
From: Feng Jin
Sent: Friday, March 8, 2024 9:46
To: Xuyang
Cc: Robin Moffatt ; user@flink.apache.org
Subject: Re: Re: Running Flink SQL in production
Hi,
If you need to use Flink SQL in a production environment, I think it would
be better to use the Table API [1], package the job into a jar, and then
submit that jar to the cluster.
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/common/#sql
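For illustration, a minimal sketch of that approach, with the SQL embedded in a
Table API program so it can be built into a jar (the class name and table names
here are made up, and the source/sink use Flink's built-in datagen and blackhole
connectors so the sketch stays self-contained):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Hypothetical job class: the SQL statements live in a regular main(),
// so the whole job can be packaged as a jar and submitted with `flink run`.
public class SqlJob {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // DDL for a self-contained source and sink (built-in connectors).
        tableEnv.executeSql(
                "CREATE TABLE src (id BIGINT) WITH ("
                + "'connector' = 'datagen', 'number-of-rows' = '10')");
        tableEnv.executeSql(
                "CREATE TABLE snk (id BIGINT) WITH ('connector' = 'blackhole')");

        // executeSql on an INSERT is what actually submits the job.
        tableEnv.executeSql("INSERT INTO snk SELECT id FROM src");
    }
}
```

Running this requires the Flink Table dependencies on the classpath; on the
cluster you would submit the packaged jar rather than run main() locally.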
Best,
Feng
Hi.
Hmm, correct me if I'm mistaken, but using the SQL Client might not be very
convenient for anyone who needs to verify the result of a submission, such as
checking for exceptions related to submission failures, and so on.
--
Best!
Xuyang
At 2024-03-07 17:32:07, "Robin Moffatt" wrote:
I'm reading the deployment guide[1] and wanted to check my understanding.
For deploying a SQL job into production, would the pattern be to write the
SQL in a file that's under source control, and pass that file to the
SQL Client with the -f argument (as in this docs example[2])?
Or script a
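As a sketch of that pattern (the file name is hypothetical, and the final
submission step assumes a standard Flink distribution on the deploy host):

```shell
# The job's SQL lives in a file under source control (hypothetical name):
cat > my_job.sql <<'EOF'
INSERT INTO snk SELECT id FROM src;
EOF

# A CI/deploy step then submits it non-interactively with the -f flag
# (commented out here because it needs a running Flink cluster):
# ./bin/sql-client.sh -f my_job.sql
```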