Github user lincoln-lil commented on the issue: https://github.com/apache/flink/pull/3829

@fhueske I agree with you that we should maintain the consistency of the API. For a TableSink, registering its schema before using it sounds reasonable. My concern is that this is a breaking change to the API, and the new behavior will affect users' existing code.

On the other hand, in an RDBMS we can use the SQL 'CREATE TABLE AS' statement to create a table from an existing table by copying the existing table's columns. When a table is created this way, the new table is populated with the records from the existing table (based on the SELECT statement). This is a common operation, so I think we should support it.

Based on these considerations, I propose to retain the current 'writeToSink' method and keep its current configure behavior to support a derived sink schema, and to add a new 'insertInto' method that supports a pre-defined schema and performs type validation within it (for now only supporting DML INSERT in SQL; later we can add support for the 'CREATE TABLE AS' statement, so that the Table API and SQL semantics are exactly the same).

I agree with distinguishing the insert and select query methods, but I'm concerned about the method name itself: 'sql' covers all types of queries, not just SELECT. Standard SQL DML includes SELECT / INSERT / UPDATE / DELETE, so if we need to distinguish the sub-type of query, I suggest naming the methods select / insert (or, to avoid clashing with the 'select' method in 'table.scala', dmlSelect / dmlInsert), and then doing the corresponding check in each method. The insert method's return type can be declared as Unit rather than null.

What do you think?

Thanks, Lincoln
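For illustration, a rough Scala sketch of how the two paths discussed above might look from a user's perspective. Note that 'registerTableSink' and 'insertInto' are the proposed additions, not existing API at the time of this discussion; the 'clicks' table, the sink name 'csvSink', and the file paths are placeholders, and the final signatures may differ.

```scala
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.{Table, TableEnvironment, Types}
import org.apache.flink.table.sinks.CsvTableSink

object SinkSchemaSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    // Assumes a table "clicks" with columns (user: String, cnt: Long) was registered elsewhere.
    val result: Table = tEnv.sql("SELECT user, cnt FROM clicks")

    // Current path: writeToSink() derives the sink schema from the query via configure().
    result.writeToSink(new CsvTableSink("/tmp/derived.csv", ","))

    // Proposed path: register the sink with a pre-defined schema first,
    // then insertInto() validates the query schema against it and returns Unit.
    tEnv.registerTableSink(
      "csvSink",
      Array("user", "cnt"),
      Array[TypeInformation[_]](Types.STRING, Types.LONG),
      new CsvTableSink("/tmp/predefined.csv", ","))
    result.insertInto("csvSink")

    // The job only runs once "clicks" actually exists.
    env.execute()
  }
}
```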