twalthr commented on a change in pull request #12577:
URL: https://github.com/apache/flink/pull/12577#discussion_r440696496
##########
File path: docs/dev/table/catalogs.md
##########
@@ -104,29 +119,67 @@
 Flink SQL> CREATE TABLE mytable (name STRING, age INT) WITH (...);
 Flink SQL> SHOW TABLES;
 mytable
 {% endhighlight %}
+</div>
+</div>
+
 For detailed information, please check out [Flink SQL CREATE DDL]({{ site.baseurl }}/dev/table/sql/create.html).

-### Using Java/Scala/Python API
+### Using Java/Scala

-Users can use Java, Scala, or Python API to create catalog tables programmatically.
+Users can use Java or Scala to create catalog tables programmatically.

 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 {% highlight java %}
 TableEnvironment tableEnv = ...

Review comment:
       We should start adding imports for the Flink-related classes that are important for the example.

##########
File path: docs/dev/table/catalogs.md
##########
@@ -104,29 +119,67 @@
 // Create a HiveCatalog
-Catalog catalog = new HiveCatalog("myhive", null, "<path_of_hive_conf>", "<hive_version>");
+Catalog catalog = new HiveCatalog("myhive", null, "<path_of_hive_conf>", "<hive_version>")

Review comment:
       This is Java; why are we removing semicolons?
##########
File path: docs/try-flink/table_api.zh.md
##########
@@ -451,6 +457,7 @@
 import org.apache.flink.streaming.api.TimeCharacteristic;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.table.api.Tumble;
 import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.TableResult;

Review comment:
       Use `org.apache.flink.table.api.*` in all table examples to simplify the imports.

##########
File path: docs/try-flink/table_api.md
##########
@@ -462,14 +469,17 @@ public class SpendReport {
         tEnv.registerTableSource("transactions", new UnboundedTransactionTableSource());
         tEnv.registerTableSink("spend_report", new SpendReportTableSink());

-        tEnv
+        Table table = tEnv
             .scan("transactions")
             .window(Tumble.over("1.hour").on("timestamp").as("w"))
             .groupBy("accountId, w")
-            .select("accountId, w.start as timestamp, amount.sum")
-            .insertInto("spend_report");
+            .select("accountId, w.start as timestamp, amount.sum");

-        env.execute("Spend Report");
+        // trigger execution
+        TableResult tableResult = table.executeInsert("spend_report");
+        // wait for the job to finish
+        tableResult.getJobClient().get()

Review comment:
       Side comment: this API looks very ugly, and we should not document it like this. I also saw it in our tests and didn't like it. How about we introduce a `TableResult.awaitCompletion(timeout)`? For DCL and DDL, it would return immediately.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
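The contrast the reviewer draws, between the chained `getJobClient().get()...get()` pattern and a proposed `awaitCompletion(timeout)` convenience that returns immediately for statements with no backing job, can be sketched with plain `java.util.concurrent` types. All class and method names below are hypothetical stand-ins for illustration, not Flink's actual API:

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AwaitSketch {

    // Stand-in for a job handle: exposes the job's completion future.
    interface JobClient {
        CompletableFuture<Void> getJobExecutionResult();
    }

    // Stand-in for a statement result. DDL/DCL statements run no job,
    // so getJobClient() is empty and awaiting them is a no-op.
    static class TableResult {
        private final Optional<JobClient> jobClient;

        TableResult(Optional<JobClient> jobClient) {
            this.jobClient = jobClient;
        }

        Optional<JobClient> getJobClient() {
            return jobClient;
        }

        // The convenience proposed in the review: block until the job
        // finishes, or return immediately when there is no job.
        void awaitCompletion(long timeout, TimeUnit unit) throws Exception {
            if (jobClient.isPresent()) {
                jobClient.get().getJobExecutionResult().get(timeout, unit);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // An INSERT-like result whose "job" has already finished.
        JobClient finishedJob = () -> CompletableFuture.completedFuture(null);
        TableResult insertResult = new TableResult(Optional.of(finishedJob));

        // The verbose pattern criticized in the review comment:
        insertResult.getJobClient().get().getJobExecutionResult().get();

        // The proposed one-liner; for a DDL-like result it is a no-op.
        TableResult ddlResult = new TableResult(Optional.empty());
        ddlResult.awaitCompletion(1, TimeUnit.SECONDS);
        insertResult.awaitCompletion(1, TimeUnit.SECONDS);
        System.out.println("completed");
    }
}
```

The design point is that callers never have to unwrap an `Optional` or chain futures themselves; the result object knows whether there is a job to wait for.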