Hi Kaibo,

> Validate SQL syntax not need to depend on connector jar
At present, SQL functions also strongly depend on jar support, but the
overall approach is still under discussion and there is no clear plan yet.
You are right, though: this is really important for platform users.
Another option is to start a separate process for each SQL statement, with
the user's jars on its classpath.

> what is the suggested way to validate a FLINK SQL?
- If you use "StreamTableEnvironment.create", every "sqlUpdate" call
generates an execution plan, so it should include validation.
- If you use "TableEnvironment.create(EnvironmentSettings)", each
"sqlUpdate" call only buffers modifyOperations in the table environment. In
that case you can use "TableEnvironment.explain(boolean)" to validate the
SQL: it generates the execution plan and validates the sink as well.
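To make the second approach concrete, here is a minimal sketch of a
validation helper (assuming the Flink 1.9/1.10-era API where "sqlUpdate"
and "explain(boolean)" exist, and assuming the connector jars for the
tables referenced in the statement are already on the classpath; the class
and method names here are hypothetical, not part of Flink):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SqlValidator {

    /**
     * Returns null if the INSERT statement validates, otherwise the
     * validation error message.
     */
    public static String validate(String insertSql) {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);
        try {
            // Buffers the statement as a ModifyOperation; the query side
            // is parsed and validated here.
            tEnv.sqlUpdate(insertSql);
            // Generating the plan also validates the sink table
            // (name, schema, connector properties).
            tEnv.explain(false);
            return null;
        } catch (Exception e) {
            return e.getMessage();
        }
    }
}
```

A platform could call this after each edit in the web SQL editor and show
the returned message to the user; note that catalog tables and UDFs must be
registered in this TableEnvironment first, or validation will fail on
resolution.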

Best,
Jingsong Lee

On Fri, Dec 27, 2019 at 5:44 PM Kaibo Zhou <zkb...@gmail.com> wrote:

> Hi,
>
> As a platform user, I want to integrate Flink SQL with the platform. The
> usage scenario is: users register tables/UDFs to the catalog service, and
> then write SQL scripts like "insert into xxx select ... from xxx" through a
> web SQL editor, and the platform needs to validate the SQL script each time
> the user changes it.
>
> One problem I encountered is that SQL validation depends on the connector
> jar, which leads to many problems. More details are in the issue[1] I just
> submitted.
>
> Another problem I found is that when I use `tEnv.sqlUpdate("INSERT INTO
> sinkTable SELECT f1,f2 FROM sourceTable");` to do SQL validation, it does
> NOT validate the sinkTable, including its schema and table name.
>
> I am confused about what the suggested way to validate a Flink SQL
> statement is. Maybe Flink could provide a recommended way to let SQL be
> easily integrated by external platforms.
>
> [1]: https://issues.apache.org/jira/browse/FLINK-15419
>
> Best,
> Kaibo
>


-- 
Best, Jingsong Lee
