Hi!
Executing a set of statements with the SQL client has been supported since Flink 1.13
[1]. Please consider upgrading your Flink version.
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sqlclient/#execute-a-set-of-sql-statements
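Since 1.13 the SQL client accepts a statement set block; a minimal sketch, where `sink_a`, `sink_b`, and `source_table` are hypothetical names:

```sql
BEGIN STATEMENT SET;
INSERT INTO sink_a SELECT id, name FROM source_table;
INSERT INTO sink_b SELECT id, COUNT(*) FROM source_table GROUP BY id;
END;
```

Everything between BEGIN and END is planned together and submitted as a single job.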
方汉云 wrote on Mon, 1 Nov 2021 at 20:31:
> Hi,
Hi,
I used the official flink-1.12.5 package, configured sql-client-defaults.yaml, and ran:
bin/sql-client.sh embedded
cat conf/sql-client-defaults.yaml
catalogs:
  # A typical catalog definition looks like:
  - name: myhive
    type: hive
    hive-conf-dir: /apps/conf/hive
    default-database: de
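With a catalog entry like the one above, you can check from the SQL client that it was picked up; a short sketch (the catalog name comes from the config, the table listing depends on your Hive metastore):

```sql
SHOW CATALOGS;
USE CATALOG myhive;
SHOW TABLES;
```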
Hi,
If you are using the SQL client, you can try:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sqlclient/#execute-a-set-of-sql-statements
If you are using TableEnvironment, you can try a statement set too:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/common/
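With TableEnvironment, a statement set groups multiple INSERTs into one job. A minimal sketch in Scala, assuming hypothetical tables `src`, `sink1`, and `sink2` have already been registered (the settings builder shown here is the 1.13+ form):

```scala
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

val settings = EnvironmentSettings.newInstance().inStreamingMode().build()
val tEnv = TableEnvironment.create(settings)

// Collect several INSERT statements and submit them together as one job.
val stmtSet = tEnv.createStatementSet()
stmtSet.addInsertSql("INSERT INTO sink1 SELECT id FROM src")
stmtSet.addInsertSql("INSERT INTO sink2 SELECT id, name FROM src")
stmtSet.execute()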
Hi,
You can use it like this:
```scala
// Parse each statement and dispatch on the kind of SqlNode returned.
val calciteParser = new CalciteParser(SqlUtil.getSqlParserConfig(tableEnv.getConfig))
sqlArr.foreach(item => {
  println(item)
  val itemNode = calciteParser.parse(item)
  itemNode match {
    case sqlSet: SqlSet =>
      // handle SET statements here
    case _ =>
      // handle other statement kinds here
  }
})
```
Hi:
I use Flink SQL and run a script that has one source and two sinks. I can
see 2 jobs running in the web UI; is that normal?
Is there some way to ensure it runs as only one job with one source and two sinks? Thank
you
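Each INSERT INTO submitted on its own is planned as a separate job, which is why two jobs appear. Wrapping both INSERTs in a statement set (SQL client, 1.13+) is one way to get a single job; table names below are placeholders:

```sql
BEGIN STATEMENT SET;
INSERT INTO sink_one SELECT * FROM my_source;
INSERT INTO sink_two SELECT * FROM my_source;
END;
```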
…mini-batch of records.
>>
>> Best,
>> Jark
>>
>> [1]:
>> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tuning/streaming_aggregation_optimization.html#minibatch-aggregation
>>
>>
>> On Wed, 4 Nov 2020 at 23:01, Henry Dai wrote:
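The mini-batch aggregation referenced in [1] is enabled through table configuration options; a sketch of the relevant SET commands in the SQL client (the values are illustrative, not recommendations):

```sql
SET 'table.exec.mini-batch.enabled' = 'true';
SET 'table.exec.mini-batch.allow-latency' = '5 s';
SET 'table.exec.mini-batch.size' = '5000';
```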
Dear Flink developers & users,
I have a question about Flink SQL; it gives me a lot of trouble. Thank
you very much for any help.
Let's assume we have two data streams, `order` and `order_detail`, which
come from a MySQL binlog.
Table `order` schema:
id int
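The full schemas were truncated in this excerpt. As an illustration only, a binlog-backed table like `order` could be declared with a CDC connector; the connector name, any fields beyond `id`, and all connection options below are assumptions:

```sql
CREATE TABLE `order` (
  id INT,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'localhost',
  'port' = '3306',
  'username' = 'flink',
  'password' = 'secret',
  'database-name' = 'shop',
  'table-name' = 'order'
);
-- `order_detail` would be declared the same way against its own table.
```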
Hi, Roc
> Does Flink SQL support fetching MySQL meta information automatically in the
> latest version? If not, could you add this feature?
You can obtain the latest meta information (table schema) by using the Flink
JdbcCatalog [1]; currently only PostgresCatalog is implemented, but users can implement
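As an illustration of the catalog route, a PostgresCatalog can be registered from SQL DDL so that table schemas are read from the database instead of being declared by hand; the connection details below are placeholders:

```sql
CREATE CATALOG my_pg WITH (
  'type' = 'jdbc',
  'default-database' = 'mydb',
  'username' = 'postgres',
  'password' = 'secret',
  'base-url' = 'jdbc:postgresql://localhost:5432'
);
USE CATALOG my_pg;
-- No CREATE TABLE needed; schemas come from the database.
SHOW TABLES;
```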
Hello,
Does Flink SQL support fetching MySQL meta information automatically in the
latest version? If not, could you add this feature?
Thank you.
Best, Roc.