RocMarshal commented on a change in pull request #12798:
URL: https://github.com/apache/flink/pull/12798#discussion_r453698513



##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -180,99 +158,68 @@ FROM Ticker
     ) MR;
 {% endhighlight %}
 
-The query partitions the `Ticker` table by the `symbol` column and orders it by the `rowtime`
-time attribute.
+此查询将 `Ticker` 表按照 `symbol` 列进行分区并按照 `rowtime` 属性进行排序。
 
-The `PATTERN` clause specifies that we are interested in a pattern with a starting event `START_ROW`
-that is followed by one or more `PRICE_DOWN` events and concluded with a `PRICE_UP` event. If such
-a pattern can be found, the next pattern match will be seeked at the last `PRICE_UP` event as
-indicated by the `AFTER MATCH SKIP TO LAST` clause.
+`PATTERN` 子句指定我们对以下模式感兴趣:该模式具有开始事件 `START_ROW`,然后是一个或多个 `PRICE_DOWN` 事件,并以 `PRICE_UP` 事件结束。如果可以找到这样的模式,如 `AFTER MATCH SKIP TO LAST` 子句所示,则从最后一个 `PRICE_UP` 事件开始寻找下一个模式匹配。
 
-The `DEFINE` clause specifies the conditions that need to be met for a `PRICE_DOWN` and `PRICE_UP`
-event. Although the `START_ROW` pattern variable is not present it has an implicit condition that
-is evaluated always as `TRUE`.
+`DEFINE` 子句指定 `PRICE_DOWN` 和 `PRICE_UP` 事件需要满足的条件。尽管不存在 `START_ROW` 模式变量,但它具有一个始终被评估为 `TRUE` 隐式条件。
 
-A pattern variable `PRICE_DOWN` is defined as a row with a price that is smaller than the price of
-the last row that met the `PRICE_DOWN` condition. For the initial case or when there is no last row
-that met the `PRICE_DOWN` condition, the price of the row should be smaller than the price of the
-preceding row in the pattern (referenced by `START_ROW`).
+模式变量 `PRICE_DOWN` 定义为价格小于满足 `PRICE_DOWN` 条件的最后一行。对于初始情况或没有满足 `PRICE_DOWN` 条件的最后一行时,该行的价格应小于该模式中前一行(由 `START_ROW` 引用)的价格。
 
-A pattern variable `PRICE_UP` is defined as a row with a price that is larger than the price of the
-last row that met the `PRICE_DOWN` condition.
+模式变量 `PRICE_UP` 定义为价格大于满足 `PRICE_DOWN` 条件的最后一行。
 
-This query produces a summary row for each period in which the price of a stock was continuously
-decreasing.
+此查询为股票价格持续下跌的每个期间生成摘要行。
 
-The exact representation of the output rows is defined in the `MEASURES` part of the query. The
-number of output rows is defined by the `ONE ROW PER MATCH` output mode.
+在查询的 `MEASURES` 子句部分定义确切的输出行信息。输出行数由 `ONE ROW PER MATCH` 输出方式定义。
 
 {% highlight text %}
  symbol       start_tstamp       bottom_tstamp         end_tstamp
 =========  ==================  ==================  ==================
 ACME       01-APR-11 10:00:04  01-APR-11 10:00:07  01-APR-11 10:00:08
 {% endhighlight %}
 
-The resulting row describes a period of falling prices that started at `01-APR-11 10:00:04` and
-achieved the lowest price at `01-APR-11 10:00:07` that increased again at `01-APR-11 10:00:08`.
+该行结果描述了从 `01-APR-11 10:00:04` 开始的价格下跌期,在 `01-APR-11 10:00:07` 达到最低价格,到 `01-APR-11 10:00:08` 再次上涨。
 
-Partitioning
+<a name="partitioning"></a>
+
+分区
 ------------
 
-It is possible to look for patterns in partitioned data, e.g., trends for a single ticker or a
-particular user. This can be expressed using the `PARTITION BY` clause. The clause is similar to
-using `GROUP BY` for aggregations.
+可以在分区数据中寻找模式,例如单个股票行情或特定用户的趋势。这可以用 `PARTITION BY` 子句来表示。该子句类似于对聚合使用 `GROUP BY`。

Review comment:
       ```suggestion
  可以在分区数据中寻找模式,例如单个股票行情或特定用户的趋势。这可以用 `PARTITION BY` 子句来表示。该子句类似于对 aggregation 使用 `GROUP BY`。
   ```
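
For readers following this thread without the file open: the paragraphs quoted above describe the `Ticker` example query from the MATCH_RECOGNIZE docs. A reference sketch of that query, reproduced here from the English docs for context only (not part of the suggested change):

```sql
SELECT *
FROM Ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol                        -- logical partitioning, similar to GROUP BY
        ORDER BY rowtime                           -- ascending time attribute comes first
        MEASURES
            START_ROW.rowtime AS start_tstamp,
            LAST(PRICE_DOWN.rowtime) AS bottom_tstamp,
            LAST(PRICE_UP.rowtime) AS end_tstamp
        ONE ROW PER MATCH                          -- one summary row per detected period
        AFTER MATCH SKIP TO LAST PRICE_UP          -- next match is sought at the last PRICE_UP event
        PATTERN (START_ROW PRICE_DOWN+ PRICE_UP)
        DEFINE
            PRICE_DOWN AS
                (LAST(PRICE_DOWN.price, 1) IS NULL AND PRICE_DOWN.price < START_ROW.price) OR
                    PRICE_DOWN.price < LAST(PRICE_DOWN.price, 1),
            PRICE_UP AS
                PRICE_UP.price > LAST(PRICE_DOWN.price, 1)
    ) MR;
```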

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -92,52 +80,43 @@ project.
 </dependency>
 {% endhighlight %}
 
-Alternatively, you can also add the dependency to the cluster classpath (see the
-[dependency section]({{ site.baseurl}}/dev/project-configuration.html) for more information).
+或者,也可以将依赖项添加到集群 classpath(查看 [dependency section]({% link dev/project-configuration.zh.md %}) 获取更多相关依赖信息)。
+
+如果你想在 [SQL Client]({% link dev/table/sqlClient.zh.md %}) 中使用 `MATCH_RECOGNIZE` 子句,你无需执行任何操作,因为默认情况下包含所有依赖项。
+
+<a name="sql-semantics"></a>
 
-If you want to use the `MATCH_RECOGNIZE` clause in the
-[SQL Client]({{ site.baseurl}}/dev/table/sqlClient.html), you don't have to do anything as all the
-dependencies are included by default.
+### SQL 语义
 
-### SQL Semantics
+每个 `MATCH_RECOGNIZE` 查询都包含以下子句:
 
-Every `MATCH_RECOGNIZE` query consists of the following clauses:
+* [PARTITION BY](#partitioning) - 定义表的逻辑分区;类似于 `GROUP BY` 操作。
+* [ORDER BY](#order-of-events) - 指定传入行的排序方式;这是必须的,因为模式依赖于顺序。
+* [MEASURES](#define--measures) - 定义子句的输出;类似于 `SELECT` 子句。
+* [ONE ROW PER MATCH](#output-mode) - 输出方式,定义每个匹配项应产生多少行。
+* [AFTER MATCH SKIP](#after-match-strategy) - 指定下一个匹配的开始位置;这也是一种控制单个事件可以属于多少个不同匹配的方法。
+* [PATTERN](#defining-a-pattern) - 允许使用类似于 _正则表达式_ 的语法构造搜索的模式。
+* [DEFINE](#define--measures) - 本部分定义了模式变量必须满足的条件。
 
-* [PARTITION BY](#partitioning) - defines the logical partitioning of the table; similar to a
-  `GROUP BY` operation.
-* [ORDER BY](#order-of-events) - specifies how the incoming rows should be ordered; this is
-  essential as patterns depend on an order.
-* [MEASURES](#define--measures) - defines output of the clause; similar to a `SELECT` clause.
-* [ONE ROW PER MATCH](#output-mode) - output mode which defines how many rows per match should be
-  produced.
-* [AFTER MATCH SKIP](#after-match-strategy) - specifies where the next match should start; this is
-  also a way to control how many distinct matches a single event can belong to.
-* [PATTERN](#defining-a-pattern) - allows constructing patterns that will be searched for using a
-  _regular expression_-like syntax.
-* [DEFINE](#define--measures) - this section defines the conditions that the pattern variables must
-  satisfy.
+<span class="label label-danger">注意</span> 目前,`MATCH_RECOGNIZE` 子句只能应用于追加表([append table]({% link dev/table/streaming/dynamic_tables.zh.md %}#update-and-append-queries))。此外,它还总是生成一个追加表。

Review comment:
       ```suggestion
  <span class="label label-danger">注意</span> 目前,`MATCH_RECOGNIZE` 子句只能应用于[追加表]({% link dev/table/streaming/dynamic_tables.zh.md %}#update-and-append-queries)。此外,它也总是生成一个追加表。
   ```

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -92,52 +80,43 @@ project.
 </dependency>
 {% endhighlight %}
 
-Alternatively, you can also add the dependency to the cluster classpath (see the
-[dependency section]({{ site.baseurl}}/dev/project-configuration.html) for more information).
+或者,也可以将依赖项添加到集群 classpath(查看 [dependency section]({% link dev/project-configuration.zh.md %}) 获取更多相关依赖信息)。
+
+如果你想在 [SQL Client]({% link dev/table/sqlClient.zh.md %}) 中使用 `MATCH_RECOGNIZE` 子句,你无需执行任何操作,因为默认情况下包含所有依赖项。
+
+<a name="sql-semantics"></a>
 
-If you want to use the `MATCH_RECOGNIZE` clause in the
-[SQL Client]({{ site.baseurl}}/dev/table/sqlClient.html), you don't have to do anything as all the
-dependencies are included by default.
+### SQL 语义
 
-### SQL Semantics
+每个 `MATCH_RECOGNIZE` 查询都包含以下子句:
 
-Every `MATCH_RECOGNIZE` query consists of the following clauses:
+* [PARTITION BY](#partitioning) - 定义表的逻辑分区;类似于 `GROUP BY` 操作。
+* [ORDER BY](#order-of-events) - 指定传入行的排序方式;这是必须的,因为模式依赖于顺序。
+* [MEASURES](#define--measures) - 定义子句的输出;类似于 `SELECT` 子句。
+* [ONE ROW PER MATCH](#output-mode) - 输出方式,定义每个匹配项应产生多少行。
+* [AFTER MATCH SKIP](#after-match-strategy) - 指定下一个匹配的开始位置;这也是一种控制单个事件可以属于多少个不同匹配的方法。

Review comment:
       ```suggestion
  * [AFTER MATCH SKIP](#after-match-strategy) - 指定下一个匹配的开始位置;这也是控制单个事件可以属于多少个不同匹配项的方法。
   ```

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -180,99 +158,68 @@ FROM Ticker
     ) MR;
 {% endhighlight %}
 
-The query partitions the `Ticker` table by the `symbol` column and orders it by the `rowtime`
-time attribute.
+此查询将 `Ticker` 表按照 `symbol` 列进行分区并按照 `rowtime` 属性进行排序。
 
-The `PATTERN` clause specifies that we are interested in a pattern with a starting event `START_ROW`
-that is followed by one or more `PRICE_DOWN` events and concluded with a `PRICE_UP` event. If such
-a pattern can be found, the next pattern match will be seeked at the last `PRICE_UP` event as
-indicated by the `AFTER MATCH SKIP TO LAST` clause.
+`PATTERN` 子句指定我们对以下模式感兴趣:该模式具有开始事件 `START_ROW`,然后是一个或多个 `PRICE_DOWN` 事件,并以 `PRICE_UP` 事件结束。如果可以找到这样的模式,如 `AFTER MATCH SKIP TO LAST` 子句所示,则从最后一个 `PRICE_UP` 事件开始寻找下一个模式匹配。
 
-The `DEFINE` clause specifies the conditions that need to be met for a `PRICE_DOWN` and `PRICE_UP`
-event. Although the `START_ROW` pattern variable is not present it has an implicit condition that
-is evaluated always as `TRUE`.
+`DEFINE` 子句指定 `PRICE_DOWN` 和 `PRICE_UP` 事件需要满足的条件。尽管不存在 `START_ROW` 模式变量,但它具有一个始终被评估为 `TRUE` 隐式条件。
 
-A pattern variable `PRICE_DOWN` is defined as a row with a price that is smaller than the price of
-the last row that met the `PRICE_DOWN` condition. For the initial case or when there is no last row
-that met the `PRICE_DOWN` condition, the price of the row should be smaller than the price of the
-preceding row in the pattern (referenced by `START_ROW`).
+模式变量 `PRICE_DOWN` 定义为价格小于满足 `PRICE_DOWN` 条件的最后一行。对于初始情况或没有满足 `PRICE_DOWN` 条件的最后一行时,该行的价格应小于该模式中前一行(由 `START_ROW` 引用)的价格。
 
-A pattern variable `PRICE_UP` is defined as a row with a price that is larger than the price of the
-last row that met the `PRICE_DOWN` condition.
+模式变量 `PRICE_UP` 定义为价格大于满足 `PRICE_DOWN` 条件的最后一行。
 
-This query produces a summary row for each period in which the price of a stock was continuously
-decreasing.
+此查询为股票价格持续下跌的每个期间生成摘要行。
 
-The exact representation of the output rows is defined in the `MEASURES` part of the query. The
-number of output rows is defined by the `ONE ROW PER MATCH` output mode.
+在查询的 `MEASURES` 子句部分定义确切的输出行信息。输出行数由 `ONE ROW PER MATCH` 输出方式定义。
 
 {% highlight text %}
  symbol       start_tstamp       bottom_tstamp         end_tstamp
 =========  ==================  ==================  ==================
 ACME       01-APR-11 10:00:04  01-APR-11 10:00:07  01-APR-11 10:00:08
 {% endhighlight %}
 
-The resulting row describes a period of falling prices that started at `01-APR-11 10:00:04` and
-achieved the lowest price at `01-APR-11 10:00:07` that increased again at `01-APR-11 10:00:08`.
+该行结果描述了从 `01-APR-11 10:00:04` 开始的价格下跌期,在 `01-APR-11 10:00:07` 达到最低价格,到 `01-APR-11 10:00:08` 再次上涨。
 
-Partitioning
+<a name="partitioning"></a>
+
+分区
 ------------
 
-It is possible to look for patterns in partitioned data, e.g., trends for a single ticker or a
-particular user. This can be expressed using the `PARTITION BY` clause. The clause is similar to
-using `GROUP BY` for aggregations.
+可以在分区数据中寻找模式,例如单个股票行情或特定用户的趋势。这可以用 `PARTITION BY` 子句来表示。该子句类似于对聚合使用 `GROUP BY`。
+
+<span class="label label-danger">注意</span> 强烈建议对传入的数据进行分区,否则 `MATCH_RECOGNIZE` 子句将被转换为非并行算子,以确保全局排序。
 
-<span class="label label-danger">Attention</span> It is highly advised to partition the incoming
-data because otherwise the `MATCH_RECOGNIZE` clause will be translated into a non-parallel operator
-to ensure global ordering.
+<a name="order-of-events"></a>
 
-Order of Events
+事件顺序
 ---------------
 
-Apache Flink allows for searching for patterns based on time; either
-[processing time or event time](time_attributes.html).
+Apache Flink 可以根据时间([处理时间或者事件时间]({% link dev/table/streaming/time_attributes.zh.md %}))进行模式搜索。
 
-In case of event time, the events are sorted before they are passed to the internal pattern state
-machine. As a consequence, the produced output will be correct regardless of the order in which
-rows are appended to the table. Instead, the pattern is evaluated in the order specified by the
-time contained in each row.
+如果是事件时间,则在将事件传递到内部模式状态机之前对其进行排序。所以,无论行添加到表的顺序如何,生成的输出都是正确的。相反,模式是按照每行中包含的时间指定的顺序计算的。
 
-The `MATCH_RECOGNIZE` clause assumes a [time attribute](time_attributes.html) with ascending
-ordering as the first argument to `ORDER BY` clause.
+`MATCH_RECOGNIZE` 子句假定升序的 [时间属性]({% link dev/table/streaming/time_attributes.zh.md %}) 是 `ORDER BY` 子句的第一个参数。
 
-For the example `Ticker` table, a definition like `ORDER BY rowtime ASC, price DESC` is valid but
-`ORDER BY price, rowtime` or `ORDER BY rowtime DESC, price ASC` is not.
+对于示例 `Ticker` 表,诸如 `ORDER BY rowtime ASC, price DESC` 的定义是有效的,但 `ORDER BY price, rowtime` 或者 `ORDER BY rowtime DESC, price ASC` 是无效的。
 
 Define & Measures
 -----------------
 
-The `DEFINE` and `MEASURES` keywords have similar meanings to the `WHERE` and `SELECT` clauses in a
-simple SQL query.
+`DEFINE` 和 `MEASURES` 关键字与简单 SQL 查询中的 `WHERE` 和 `SELECT` 子句具有相近的含义。
 
-The `MEASURES` clause defines what will be included in the output of a matching pattern. It can
-project columns and define expressions for evaluation. The number of produced rows depends on the
-[output mode](#output-mode) setting.
+`MEASURES` 子句定义匹配模式的输出中要包含哪些内容。它可以投影列并定义表达式进行计算。产生的行数取决于[输出方式](#output-mode)设置。
 
-The `DEFINE` clause specifies conditions that rows have to fulfill in order to be classified to a
-corresponding [pattern variable](#defining-a-pattern). If a condition is not defined for a pattern
-variable, a default condition will be used which evaluates to `true` for every row.
+`DEFINE` 子句指定行必须满足的条件才能被分类到相应的[模式变量](#defining-a-pattern)。如果没有为模式变量定义条件,则将使用对每一行的计算结果为 `true` 的默认条件。

Review comment:
       ```suggestion
  `DEFINE` 子句指定行必须满足的条件才能被分类到相应的[模式变量](#defining-a-pattern)。如果没有为模式变量定义条件,则将对每一行使用计算结果为 `true` 的默认条件。
   ```
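
As a side note on the two points this hunk discusses (the time attribute must come first and ascending in `ORDER BY`, and a pattern variable without a `DEFINE` entry gets an implicit `TRUE` condition), a minimal sketch; table and variable names here are assumptions, illustrative only:

```sql
SELECT *
FROM Ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol
        ORDER BY rowtime ASC, price DESC   -- valid: the time attribute is first and ascending
        -- ORDER BY price, rowtime         -- invalid: the time attribute is not the first argument
        MEASURES
            A.rowtime AS first_ts,
            B.price   AS up_price
        ONE ROW PER MATCH
        PATTERN (A B)
        DEFINE
            B AS B.price > A.price         -- A has no entry, so its condition is implicitly TRUE
    ) MR;
```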

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -323,52 +268,47 @@ ACME       01-APR-11 10:00:00  01-APR-11 10:00:03     14.5
 ACME       01-APR-11 10:00:05  01-APR-11 10:00:10     13.5
 {% endhighlight %}
 
-<span class="label label-info">Note</span> Aggregations can be applied to expressions, but only if
-they reference a single pattern variable. Thus `SUM(A.price * A.tax)` is a valid one, but
-`AVG(A.price * B.tax)` is not.
+<span class="label label-info">注意</span> 聚合可以应用于表达式,但前提是它们引用单个模式变量。因此,`SUM(A.price * A.tax)` 是有效的,而 `AVG(A.price * B.tax)` 则是无效的。
 
-<span class="label label-danger">Attention</span> `DISTINCT` aggregations are not supported.
+<span class="label label-danger">注意</span> 不支持 `DISTINCT` 聚合。

Review comment:
       ```suggestion
   <span class="label label-danger">注意</span> 不支持 `DISTINCT` aggregation。
   ```
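
For context, the output rows quoted above come from the aggregation example in this document; a sketch of that query as it appears in the English docs (reproduced from memory, details may differ slightly):

```sql
SELECT *
FROM Ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol
        ORDER BY rowtime
        MEASURES
            FIRST(A.rowtime) AS start_tstamp,
            LAST(A.rowtime)  AS end_tstamp,
            AVG(A.price)     AS avgPrice        -- aggregation over a single pattern variable
        ONE ROW PER MATCH
        AFTER MATCH SKIP PAST LAST ROW
        PATTERN (A+ B)
        DEFINE
            A AS AVG(A.price) < 15              -- valid: references only A
            -- an expression such as AVG(A.price * B.tax) would be rejected, and
            -- DISTINCT aggregations are not supported either
    ) MR;
```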

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -1091,42 +983,31 @@ DEFINE
   C as C.price > 20
 {% endhighlight %}
 
-<span class="label label-danger">Attention</span> Please note that the `MATCH_RECOGNIZE` clause
-does not use a configured [state retention time](query_configuration.html#idle-state-retention-time).
-One may want to use the `WITHIN` [clause](#time-constraint) for this purpose.
+<span class="label label-danger">注意</span> 请注意,`MATCH_RECOGNIZE` 子句未使用配置的 [state retention time]({% link dev/table/streaming/query_configuration.zh.md %}#idle-state-retention-time)。为此,可能需要使用 `WITHIN` [clause](#known-limitations)。

Review comment:
       ```suggestion
  <span class="label label-danger">注意</span> 请注意,`MATCH_RECOGNIZE` 子句未使用配置的 [state retention time]({% link dev/table/streaming/query_configuration.zh.md %}#idle-state-retention-time)。为此,可能需要使用 `WITHIN` [子句](#time-constraint)。
   ```
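
The `WITHIN` clause referenced here bounds how much time a match may span, which in turn bounds the state that has to be kept; a minimal sketch of the syntax (table, measures and thresholds are assumptions):

```sql
SELECT *
FROM Ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol
        ORDER BY rowtime
        MEASURES
            C.rowtime AS dropTime,
            A.price - C.price AS dropDiff
        ONE ROW PER MATCH
        AFTER MATCH SKIP PAST LAST ROW
        -- the whole match must fit into one hour, so partial matches older than
        -- that can be discarded even without a configured state retention time
        PATTERN (A B* C) WITHIN INTERVAL '1' HOUR
        DEFINE
            B AS B.price > A.price - 10,
            C AS C.price < A.price - 10
    ) MR;
```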

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -180,99 +158,68 @@ FROM Ticker
     ) MR;
 {% endhighlight %}
 
-The query partitions the `Ticker` table by the `symbol` column and orders it by the `rowtime`
-time attribute.
+此查询将 `Ticker` 表按照 `symbol` 列进行分区并按照 `rowtime` 属性进行排序。
 
-The `PATTERN` clause specifies that we are interested in a pattern with a starting event `START_ROW`
-that is followed by one or more `PRICE_DOWN` events and concluded with a `PRICE_UP` event. If such
-a pattern can be found, the next pattern match will be seeked at the last `PRICE_UP` event as
-indicated by the `AFTER MATCH SKIP TO LAST` clause.
+`PATTERN` 子句指定我们对以下模式感兴趣:该模式具有开始事件 `START_ROW`,然后是一个或多个 `PRICE_DOWN` 事件,并以 `PRICE_UP` 事件结束。如果可以找到这样的模式,如 `AFTER MATCH SKIP TO LAST` 子句所示,则从最后一个 `PRICE_UP` 事件开始寻找下一个模式匹配。
 
-The `DEFINE` clause specifies the conditions that need to be met for a `PRICE_DOWN` and `PRICE_UP`
-event. Although the `START_ROW` pattern variable is not present it has an implicit condition that
-is evaluated always as `TRUE`.
+`DEFINE` 子句指定 `PRICE_DOWN` 和 `PRICE_UP` 事件需要满足的条件。尽管不存在 `START_ROW` 模式变量,但它具有一个始终被评估为 `TRUE` 隐式条件。
 
-A pattern variable `PRICE_DOWN` is defined as a row with a price that is smaller than the price of
-the last row that met the `PRICE_DOWN` condition. For the initial case or when there is no last row
-that met the `PRICE_DOWN` condition, the price of the row should be smaller than the price of the
-preceding row in the pattern (referenced by `START_ROW`).
+模式变量 `PRICE_DOWN` 定义为价格小于满足 `PRICE_DOWN` 条件的最后一行。对于初始情况或没有满足 `PRICE_DOWN` 条件的最后一行时,该行的价格应小于该模式中前一行(由 `START_ROW` 引用)的价格。
 
-A pattern variable `PRICE_UP` is defined as a row with a price that is larger than the price of the
-last row that met the `PRICE_DOWN` condition.
+模式变量 `PRICE_UP` 定义为价格大于满足 `PRICE_DOWN` 条件的最后一行。
 
-This query produces a summary row for each period in which the price of a stock was continuously
-decreasing.
+此查询为股票价格持续下跌的每个期间生成摘要行。
 
-The exact representation of the output rows is defined in the `MEASURES` part of the query. The
-number of output rows is defined by the `ONE ROW PER MATCH` output mode.
+在查询的 `MEASURES` 子句部分定义确切的输出行信息。输出行数由 `ONE ROW PER MATCH` 输出方式定义。
 
 {% highlight text %}
  symbol       start_tstamp       bottom_tstamp         end_tstamp
 =========  ==================  ==================  ==================
 ACME       01-APR-11 10:00:04  01-APR-11 10:00:07  01-APR-11 10:00:08
 {% endhighlight %}
 
-The resulting row describes a period of falling prices that started at `01-APR-11 10:00:04` and
-achieved the lowest price at `01-APR-11 10:00:07` that increased again at `01-APR-11 10:00:08`.
+该行结果描述了从 `01-APR-11 10:00:04` 开始的价格下跌期,在 `01-APR-11 10:00:07` 达到最低价格,到 `01-APR-11 10:00:08` 再次上涨。
 
-Partitioning
+<a name="partitioning"></a>
+
+分区
 ------------
 
-It is possible to look for patterns in partitioned data, e.g., trends for a single ticker or a
-particular user. This can be expressed using the `PARTITION BY` clause. The clause is similar to
-using `GROUP BY` for aggregations.
+可以在分区数据中寻找模式,例如单个股票行情或特定用户的趋势。这可以用 `PARTITION BY` 子句来表示。该子句类似于对聚合使用 `GROUP BY`。
+
+<span class="label label-danger">注意</span> 强烈建议对传入的数据进行分区,否则 `MATCH_RECOGNIZE` 子句将被转换为非并行算子,以确保全局排序。
 
-<span class="label label-danger">Attention</span> It is highly advised to partition the incoming
-data because otherwise the `MATCH_RECOGNIZE` clause will be translated into a non-parallel operator
-to ensure global ordering.
+<a name="order-of-events"></a>
 
-Order of Events
+事件顺序
 ---------------
 
-Apache Flink allows for searching for patterns based on time; either
-[processing time or event time](time_attributes.html).
+Apache Flink 可以根据时间([处理时间或者事件时间]({% link dev/table/streaming/time_attributes.zh.md %}))进行模式搜索。
 
-In case of event time, the events are sorted before they are passed to the internal pattern state
-machine. As a consequence, the produced output will be correct regardless of the order in which
-rows are appended to the table. Instead, the pattern is evaluated in the order specified by the
-time contained in each row.
+如果是事件时间,则在将事件传递到内部模式状态机之前对其进行排序。所以,无论行添加到表的顺序如何,生成的输出都是正确的。相反,模式是按照每行中包含的时间指定的顺序计算的。

Review comment:
       ```suggestion
  如果是事件时间,则在将事件传递到内部模式状态机之前对其进行排序。所以,无论行添加到表的顺序如何,生成的输出都是正确的。而模式是按照每行中所包含的时间指定顺序计算的。
   ```

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -65,24 +54,23 @@ FROM MyTable
     ) AS T
 {% endhighlight %}
 
-This page will explain each keyword in more detail and will illustrate more complex examples.
+本页将更详细地解释每个关键字,并演示说明更复杂的示例。
 
-<span class="label label-danger">Attention</span> Flink's implementation of the `MATCH_RECOGNIZE`
-clause is a subset of the full standard. Only those features documented in the following sections
-are supported. Since the development is still in an early phase, please also take a look at the
-[known limitations](#known-limitations).
+<span class="label label-danger">注意</span> Flink 的 `MATCH_RECOGNIZE` 子句实现是完整标准的一个子集。仅支持以下部分中记录的功能。由于开发仍处于初期阶段,请查看[已知的局限](#known-limitations)。

Review comment:
       ```suggestion
  <span class="label label-danger">注意</span> Flink 的 `MATCH_RECOGNIZE` 子句实现是一个完整标准子集。仅支持以下部分中记录的功能。目前开发仍处于初期阶段,请查看[已知的局限](#known-limitations)。
   ```

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -24,28 +24,17 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-It is a common use case to search for a set of event patterns, especially in case of data streams.
-Flink comes with a [complex event processing (CEP) library]({{ site.baseurl }}/dev/libs/cep.html)
-which allows for pattern detection in event streams. Furthermore, Flink's SQL API provides a
-relational way of expressing queries with a large set of built-in functions and rule-based
-optimizations that can be used out of the box.
-
-In December 2016, the International Organization for Standardization (ISO) released a new version
-of the SQL standard which includes _Row Pattern Recognition in SQL_
-([ISO/IEC TR 19075-5:2016](https://standards.iso.org/ittf/PubliclyAvailableStandards/c065143_ISO_IEC_TR_19075-5_2016.zip)).
-It allows Flink to consolidate CEP and SQL API using the `MATCH_RECOGNIZE` clause for complex event
-processing in SQL.
-
-A `MATCH_RECOGNIZE` clause enables the following tasks:
-* Logically partition and order the data that is used with the `PARTITION BY` and `ORDER BY`
-  clauses.
-* Define patterns of rows to seek using the `PATTERN` clause. These patterns use a syntax similar to
-  that of regular expressions.
-* The logical components of the row pattern variables are specified in the `DEFINE` clause.
-* Define measures, which are expressions usable in other parts of the SQL query, in the `MEASURES`
-  clause.
-
-The following example illustrates the syntax for basic pattern recognition:
+搜索一组事件模式(event pattern)是一种常见的用例,尤其是在数据流的情况下。Flink 提供[复杂事件处理(CEP)库]({% link dev/libs/cep.zh.md %}),该库允许在事件流中进行模式检测。此外,Flink 的 SQL API 提供了一种关系式的查询表达方式,其中包含大量内置函数和基于规则的优化,可以开箱即用。

Review comment:
       ```suggestion
  搜索一组事件模式(event pattern)是一种常见的用例,尤其是在数据流情景中。Flink 提供[复杂事件处理(CEP)库]({% link dev/libs/cep.zh.md %}),该库允许在事件流中进行模式检测。此外,Flink 的 SQL API 提供了一种关系式的查询表达方式,其中包含大量内置函数和基于规则的优化,可以开箱即用。
   ```

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -24,28 +24,17 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-It is a common use case to search for a set of event patterns, especially in case of data streams.
-Flink comes with a [complex event processing (CEP) library]({{ site.baseurl }}/dev/libs/cep.html)
-which allows for pattern detection in event streams. Furthermore, Flink's SQL API provides a
-relational way of expressing queries with a large set of built-in functions and rule-based
-optimizations that can be used out of the box.
-
-In December 2016, the International Organization for Standardization (ISO) released a new version
-of the SQL standard which includes _Row Pattern Recognition in SQL_
-([ISO/IEC TR 19075-5:2016](https://standards.iso.org/ittf/PubliclyAvailableStandards/c065143_ISO_IEC_TR_19075-5_2016.zip)).
-It allows Flink to consolidate CEP and SQL API using the `MATCH_RECOGNIZE` clause for complex event
-processing in SQL.
-
-A `MATCH_RECOGNIZE` clause enables the following tasks:
-* Logically partition and order the data that is used with the `PARTITION BY` and `ORDER BY`
-  clauses.
-* Define patterns of rows to seek using the `PATTERN` clause. These patterns use a syntax similar to
-  that of regular expressions.
-* The logical components of the row pattern variables are specified in the `DEFINE` clause.
-* Define measures, which are expressions usable in other parts of the SQL query, in the `MEASURES`
-  clause.
-
-The following example illustrates the syntax for basic pattern recognition:
+搜索一组事件模式(event pattern)是一种常见的用例,尤其是在数据流的情况下。Flink 提供[复杂事件处理(CEP)库]({% link dev/libs/cep.zh.md %}),该库允许在事件流中进行模式检测。此外,Flink 的 SQL API 提供了一种关系式的查询表达方式,其中包含大量内置函数和基于规则的优化,可以开箱即用。
+
+2016 年 12 月,国际标准化组织(ISO)发布了新版本的 SQL 标准,其中包括在 _SQL 中的行模式识别(Row Pattern Recognition in SQL)_([ISO/IEC TR 19075-5:2016](https://standards.iso.org/ittf/PubliclyAvailableStandards/c065143_ISO_IEC_TR_19075-5_2016.zip))。它允许 Flink 使用 `MATCH_RECOGNIZE` 子句融合 CEP 和 SQL API,以便在 SQL 中进行复杂事件处理。
+
+`MATCH_RECOGNIZE` 子句启用以下任务:
+* 使用 `PARTITION BY` 和 `ORDER BY` 子句对数据进行逻辑分区和排序。
+* 使用 `PATTERN` 子句定义要查找的行的模式。这些模式使用类似于正则表达式的语法。

Review comment:
       ```suggestion
   * 使用 `PATTERN` 子句定义要查找的行模式。这些模式使用类似于正则表达式的语法。
   ```

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -805,14 +716,14 @@ The table consists of the following columns:
       <td></td>
       <td>31</td>
       <td>20</td>
-      <td>Not mapped because <code>35 &lt; 2 * 20</code>.</td>
+      <td>因为 <code>35 &lt; 2 * 20</code> 没有映射。</td>
     </tr>
   </tbody>
 </table>
 
-It might also make sense to use the default pattern variable with logical offsets.
+将默认模式变量与逻辑偏移量一起使用也可能很有意义。
 
-In this case, an offset considers all the rows mapped so far:
+在这种情况下,偏移量会包含到目前为止映射的所有行:

Review comment:
       ```suggestion
   在这种情况下,offset 会包含到目前为止映射的所有行:
   ```
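
A hypothetical sketch of what "the default pattern variable with logical offsets" means (illustrative only, not the exact query behind the table above): `LAST(price, 1)` with no variable prefix navigates over all rows mapped so far, whereas `LAST(B.price, 1)` only looks at rows mapped to `B`:

```sql
SELECT *
FROM Ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol
        ORDER BY rowtime
        MEASURES
            FIRST(A.price) AS startPrice,
            LAST(price, 1) AS lastButOnePrice       -- default pattern variable: any mapped row
        ONE ROW PER MATCH
        PATTERN (A B+)
        DEFINE
            -- hypothetical condition in the spirit of the quoted table row ("35 < 2 * 20")
            B AS LAST(price, 1) IS NULL OR B.price > 2 * LAST(price, 1)
    ) MR;
```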

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -323,52 +268,47 @@ ACME       01-APR-11 10:00:00  01-APR-11 10:00:03     14.5
 ACME       01-APR-11 10:00:05  01-APR-11 10:00:10     13.5
 {% endhighlight %}
 
-<span class="label label-info">Note</span> Aggregations can be applied to expressions, but only if
-they reference a single pattern variable. Thus `SUM(A.price * A.tax)` is a valid one, but
-`AVG(A.price * B.tax)` is not.
+<span class="label label-info">注意</span> 聚合可以应用于表达式,但前提是它们引用单个模式变量。因此,`SUM(A.price * A.tax)` 是有效的,而 `AVG(A.price * B.tax)` 则是无效的。

Review comment:
       ```suggestion
  <span class="label label-info">注意</span> Aggregation 可以应用于表达式,但前提是它们引用单个模式变量。因此,`SUM(A.price * A.tax)` 是有效的,而 `AVG(A.price * B.tax)` 则是无效的。
   ```

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -180,99 +158,68 @@ FROM Ticker
     ) MR;
 {% endhighlight %}
 
-The query partitions the `Ticker` table by the `symbol` column and orders it by the `rowtime`
-time attribute.
+此查询将 `Ticker` 表按照 `symbol` 列进行分区并按照 `rowtime` 属性进行排序。
 
-The `PATTERN` clause specifies that we are interested in a pattern with a starting event `START_ROW`
-that is followed by one or more `PRICE_DOWN` events and concluded with a `PRICE_UP` event. If such
-a pattern can be found, the next pattern match will be seeked at the last `PRICE_UP` event as
-indicated by the `AFTER MATCH SKIP TO LAST` clause.
+`PATTERN` 子句指定我们对以下模式感兴趣:该模式具有开始事件 `START_ROW`,然后是一个或多个 `PRICE_DOWN` 事件,并以 `PRICE_UP` 事件结束。如果可以找到这样的模式,如 `AFTER MATCH SKIP TO LAST` 子句所示,则从最后一个 `PRICE_UP` 事件开始寻找下一个模式匹配。
 
-The `DEFINE` clause specifies the conditions that need to be met for a `PRICE_DOWN` and `PRICE_UP`
-event. Although the `START_ROW` pattern variable is not present it has an implicit condition that
-is evaluated always as `TRUE`.
+`DEFINE` 子句指定 `PRICE_DOWN` 和 `PRICE_UP` 事件需要满足的条件。尽管不存在 `START_ROW` 模式变量,但它具有一个始终被评估为 `TRUE` 隐式条件。
 
-A pattern variable `PRICE_DOWN` is defined as a row with a price that is smaller than the price of
-the last row that met the `PRICE_DOWN` condition. For the initial case or when there is no last row
-that met the `PRICE_DOWN` condition, the price of the row should be smaller than the price of the
-preceding row in the pattern (referenced by `START_ROW`).
+模式变量 `PRICE_DOWN` 定义为价格小于满足 `PRICE_DOWN` 条件的最后一行。对于初始情况或没有满足 `PRICE_DOWN` 条件的最后一行时,该行的价格应小于该模式中前一行(由 `START_ROW` 引用)的价格。
 
-A pattern variable `PRICE_UP` is defined as a row with a price that is larger than the price of the
-last row that met the `PRICE_DOWN` condition.
+模式变量 `PRICE_UP` 定义为价格大于满足 `PRICE_DOWN` 条件的最后一行。
 
-This query produces a summary row for each period in which the price of a stock was continuously
-decreasing.
+此查询为股票价格持续下跌的每个期间生成摘要行。
 
-The exact representation of the output rows is defined in the `MEASURES` part of the query. The
-number of output rows is defined by the `ONE ROW PER MATCH` output mode.
+在查询的 `MEASURES` 子句部分定义确切的输出行信息。输出行数由 `ONE ROW PER MATCH` 输出方式定义。
 
 {% highlight text %}
  symbol       start_tstamp       bottom_tstamp         end_tstamp
 =========  ==================  ==================  ==================
 ACME       01-APR-11 10:00:04  01-APR-11 10:00:07  01-APR-11 10:00:08
 {% endhighlight %}
 
-The resulting row describes a period of falling prices that started at `01-APR-11 10:00:04` and
-achieved the lowest price at `01-APR-11 10:00:07` that increased again at `01-APR-11 10:00:08`.
+该行结果描述了从 `01-APR-11 10:00:04` 开始的价格下跌期,在 `01-APR-11 10:00:07` 达到最低价格,到 `01-APR-11 10:00:08` 再次上涨。
 
-Partitioning
+<a name="partitioning"></a>
+
+分区
 ------------
 
-It is possible to look for patterns in partitioned data, e.g., trends for a single ticker or a
-particular user. This can be expressed using the `PARTITION BY` clause. The clause is similar to
-using `GROUP BY` for aggregations.
+可以在分区数据中寻找模式,例如单个股票行情或特定用户的趋势。这可以用 `PARTITION BY` 子句来表示。该子句类似于对聚合使用 `GROUP BY`。
+
+<span class="label label-danger">注意</span> 强烈建议对传入的数据进行分区,否则 `MATCH_RECOGNIZE` 子句将被转换为非并行算子,以确保全局排序。
 
-<span class="label label-danger">Attention</span> It is highly advised to partition the incoming
-data because otherwise the `MATCH_RECOGNIZE` clause will be translated into a non-parallel operator
-to ensure global ordering.
+<a name="order-of-events"></a>
 
-Order of Events
+事件顺序
 ---------------
 
-Apache Flink allows for searching for patterns based on time; either
-[processing time or event time](time_attributes.html).
+Apache Flink 可以根据时间([处理时间或者事件时间]({% link dev/table/streaming/time_attributes.zh.md %}))进行模式搜索。
 
-In case of event time, the events are sorted before they are passed to the internal pattern state
-machine. As a consequence, the produced output will be correct regardless of the order in which
-rows are appended to the table. Instead, the pattern is evaluated in the order specified by the
-time contained in each row.
+如果是事件时间,则在将事件传递到内部模式状态机之前对其进行排序。所以,无论行添加到表的顺序如何,生成的输出都是正确的。相反,模式是按照每行中包含的时间指定的顺序计算的。
 
-The `MATCH_RECOGNIZE` clause assumes a [time attribute](time_attributes.html) with ascending
-ordering as the first argument to `ORDER BY` clause.
+`MATCH_RECOGNIZE` 子句假定升序的 [时间属性]({% link dev/table/streaming/time_attributes.zh.md %}) 是 `ORDER BY` 子句的第一个参数。
 
-For the example `Ticker` table, a definition like `ORDER BY rowtime ASC, price DESC` is valid but
-`ORDER BY price, rowtime` or `ORDER BY rowtime DESC, price ASC` is not.
+对于示例 `Ticker` 表,诸如 `ORDER BY rowtime ASC, price DESC` 的定义是有效的,但 `ORDER BY price, rowtime` 或者 `ORDER BY rowtime DESC, price ASC` 是无效的。
 
 Define & Measures
 -----------------
 
-The `DEFINE` and `MEASURES` keywords have similar meanings to the `WHERE` and `SELECT` clauses in a
-simple SQL query.
+`DEFINE` 和 `MEASURES` 关键字与简单 SQL 查询中的 `WHERE` 和 `SELECT` 子句具有相近的含义。
 
-The `MEASURES` clause defines what will be included in the output of a matching pattern. It can
-project columns and define expressions for evaluation. The number of produced rows depends on the
-[output mode](#output-mode) setting.
+`MEASURES` 子句定义匹配模式的输出中要包含哪些内容。它可以投影列并定义表达式进行计算。产生的行数取决于[输出方式](#output-mode)设置。
 
-The `DEFINE` clause specifies conditions that rows have to fulfill in order to be classified to a
-corresponding [pattern variable](#defining-a-pattern). If a condition is not defined for a pattern
-variable, a default condition will be used which evaluates to `true` for every row.
+`DEFINE` 子句指定行必须满足的条件才能被分类到相应的[模式变量](#defining-a-pattern)。如果没有为模式变量定义条件,则将使用对每一行的计算结果为 `true` 的默认条件。
 
-For a more detailed explanation about expressions that can be used in those clauses, please have a
-look at the [event stream navigation](#pattern-navigation) section.
+有关在这些子句中可使用的表达式的更详细的说明,请查看[事件流导航](#pattern-navigation)部分。
 
 ### Aggregations
 
-Aggregations can be used in `DEFINE` and `MEASURES` clauses. Both
-[built-in]({{ site.baseurl }}/dev/table/functions/systemFunctions.html) and custom
-[user defined]({{ site.baseurl }}/dev/table/functions/udfs.html) functions are supported.
+Aggregations 可以在 `DEFINE` 和 `MEASURES` 子句中使用。支持[内置函数]({% link dev/table/functions/systemFunctions.zh.md %})和[用户自定义函数]({% link dev/table/functions/udfs.zh.md %})。
 
-Aggregate functions are applied to each subset of rows mapped to a match. In order to understand
-how those subsets are evaluated have a look at the [event stream navigation](#pattern-navigation)
-section.
+对相应匹配项的行子集可以使用 Aggregate functions。请查看[事件流导航](#pattern-navigation)部分以了解如何计算这些子集。
 
-The task of the following example is to find the longest period of time for which the average price
-of a ticker did not go below certain threshold. It shows how expressible `MATCH_RECOGNIZE` can
-become with aggregations. This task can be performed with the following query:
+下面这个示例的任务是找出股票平均价格没有低于某个阈值的最长时间段。它展示了 `MATCH_RECOGNIZE` 在聚合中的可表达性。可以使用以下查询执行此任务:

Review comment:
       ```suggestion
  下面这个示例的任务是找出股票平均价格没有低于某个阈值的最长时间段。它展示了 `MATCH_RECOGNIZE` 在 aggregation 中的可表达性。可以使用以下查询执行此任务:
   ```

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -805,14 +716,14 @@ The table consists of the following columns:
       <td></td>
       <td>31</td>
       <td>20</td>
-      <td>Not mapped because <code>35 &lt; 2 * 20</code>.</td>
+      <td>因为 <code>35 &lt; 2 * 20</code> 没有映射。</td>
     </tr>
   </tbody>
 </table>
 
-It might also make sense to use the default pattern variable with logical offsets.
+将默认模式变量与逻辑偏移量一起使用也可能很有意义。

Review comment:
       ```suggestion
   将默认模式变量与 logical offsets 一起使用也可能很有意义。
   ```

##########
File path: docs/dev/table/streaming/match_recognize.zh.md
##########
@@ -92,52 +80,43 @@ project.
 </dependency>
 {% endhighlight %}
 
-Alternatively, you can also add the dependency to the cluster classpath (see the
-[dependency section]({{ site.baseurl}}/dev/project-configuration.html) for more information).
+或者,也可以将依赖项添加到集群 classpath(查看 [dependency section]({% link dev/project-configuration.zh.md %}) 获取更多相关依赖信息)。

Review comment:
       ```suggestion
  或者,也可以将依赖项添加到集群的 classpath(查看 [dependency section]({% link dev/project-configuration.zh.md %}) 获取更多相关依赖信息)。
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

