Mrzyxing commented on code in PR #20510:
URL: https://github.com/apache/flink/pull/20510#discussion_r948566977
##########
docs/content.zh/docs/try-flink/table_api.md:
##########

@@ -163,44 +161,41 @@
 tEnv.executeSql("CREATE TABLE spend_report (\n" +
 ")");
 ```

-The second table, `spend_report`, stores the final results of the aggregation.
-Its underlying storage is a table in a MySql database.
+第二张 `spend_report` 表存储聚合后的最终结果,底层存储是 MySQL 数据库中的一张表。

-#### The Query
+#### 查询数据

-With the environment configured and tables registered, you are ready to build your first application.
-From the `TableEnvironment` you can read `from` an input table to read its rows and then write those results into an output table using `executeInsert`.
-The `report` function is where you will implement your business logic.
-It is currently unimplemented.
+配置好环境并注册好表后,你就可以开始开发你的第一个应用了。
+通过 `TableEnvironment`,你可以 `from` 输入表读取数据,然后调用 `executeInsert` 将结果写入到输出表。
+函数 `report` 用于实现具体的业务逻辑,这里暂时未实现。

 ```java
 Table transactions = tEnv.from("transactions");
 report(transactions).executeInsert("spend_report");
 ```

-## Testing
+## 测试

-The project contains a secondary testing class `SpendReportTest` that validates the logic of the report.
-It creates a table environment in batch mode.
+项目还包含一个测试类 `SpendReportTest`,辅助验证报表逻辑。
+该测试类的表环境使用的是批处理模式。

 ```java
 EnvironmentSettings settings = EnvironmentSettings.inBatchMode();
 TableEnvironment tEnv = TableEnvironment.create(settings);
 ```

-One of Flink's unique properties is that it provides consistent semantics across batch and streaming.
-This means you can develop and test applications in batch mode on static datasets, and deploy to production as streaming applications.
+提供批流统一的语义是 Flink 的特性,这意味着应用的开发和测试可以在批模式下使用静态数据集完成,而实际部署到生产时再切换为流式。

Review Comment:
   Preferred '重要特性' so far.
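For readers following the tutorial under review: the doc leaves `report` unimplemented at this point, and `SpendReportTest` later validates its aggregation logic in batch mode. As a rough illustration only, the kind of per-account, per-hour spend aggregation that test exercises can be sketched in plain Java without Flink. The `Txn` record, field names, and hourly bucketing below are assumptions for illustration, not the tutorial's actual `report` implementation:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SpendReportSketch {
    // Hypothetical stand-in for a row of the tutorial's `transactions` table.
    record Txn(long accountId, long epochMillis, long amount) {}

    // Group transactions by (account, hour bucket of the timestamp) and sum
    // the amounts -- conceptually what a windowed groupBy/sum report does.
    static Map<List<Long>, Long> report(List<Txn> txns) {
        final long HOUR = 60 * 60 * 1000L;
        return txns.stream().collect(Collectors.groupingBy(
                t -> List.of(t.accountId(), t.epochMillis() / HOUR * HOUR),
                Collectors.summingLong(Txn::amount)));
    }

    public static void main(String[] args) {
        List<Txn> txns = List.of(
                new Txn(1, 0L, 100),          // account 1, hour bucket 0
                new Txn(1, 1_000L, 50),       // same account, same hour
                new Txn(2, 3_600_000L, 25));  // account 2, next hour bucket
        Map<List<Long>, Long> r = report(txns);
        System.out.println(r.get(List.of(1L, 0L)));          // 150
        System.out.println(r.get(List.of(2L, 3_600_000L)));  // 25
    }
}
```

Because the input is a static in-memory list, the same logic can be checked deterministically, which mirrors the point the doc makes about developing and testing in batch mode before deploying as a streaming job.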