Hi Lan,

You can simply set the configuration 'pipeline.name' = '{job_name}'. You can do 
that either via the -D parameter when submitting the job with the Flink CLI, or 
directly in code 
(https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/).
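As a rough sketch (assuming PyFlink, since you mention execute_query; the table names and job name below are made up for illustration), setting it in code before submitting the statement could look like:

```python
# Hypothetical sketch: naming a PyFlink Table API job via 'pipeline.name'.
# Assumes PyFlink is installed; 'my_sink'/'my_source' are placeholder tables.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Set the job name on the table config before submitting any statement.
t_env.get_config().get_configuration().set_string(
    "pipeline.name", "my_custom_job_name"
)

# Statements submitted afterwards should carry this name, e.g.:
# t_env.execute_sql("INSERT INTO my_sink SELECT * FROM my_source")
```

Equivalently, from the CLI: flink run -Dpipeline.name=my_custom_job_name ...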


Best,
Zhanghao Chen
________________________________
From: lan tran <indigoblue7...@gmail.com>
Sent: Wednesday, March 30, 2022 18:52
To: user@flink.apache.org <user@flink.apache.org>
Subject: Naming sql_statement job


Hi team,

When I was using the Table API to submit a SQL job with execute_query(), the 
job name was generated by Flink. I wonder whether there is a way to configure 
that name.

I see that the SQL Client supports this statement:
SET 'pipeline.name' = '{job_name}'. I wonder whether it can be executed using 
execute_query() (throws an exception), sql_query() (throws an exception), or 
something else?



Thanks, team.
Best,
Quynh



