twalthr commented on a change in pull request #14053:
URL: https://github.com/apache/flink/pull/14053#discussion_r528738848



##########
File path: docs/dev/table/index.md
##########
@@ -25,93 +25,106 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Apache Flink features two relational APIs - the Table API and SQL - for 
unified stream and batch processing. The Table API is a language-integrated 
query API for Scala and Java that allows the composition of queries from 
relational operators such as selection, filter, and join in a very intuitive 
way. Flink's SQL support is based on [Apache 
Calcite](https://calcite.apache.org) which implements the SQL standard. Queries 
specified in either interface have the same semantics and specify the same 
result regardless whether the input is a batch input (DataSet) or a stream 
input (DataStream).
-
-The Table API and the SQL interfaces are tightly integrated with each other as 
well as Flink's DataStream and DataSet APIs. You can easily switch between all 
APIs and libraries which build upon the APIs. For instance, you can extract 
patterns from a DataStream using the [CEP library]({{ site.baseurl 
}}/dev/libs/cep.html) and later use the Table API to analyze the patterns, or 
you might scan, filter, and aggregate a batch table using a SQL query before 
running a [Gelly graph algorithm]({{ site.baseurl }}/dev/libs/gelly) on the 
preprocessed data.
-
-**Please note that the Table API and SQL are not yet feature complete and are 
being actively developed. Not all operations are supported by every combination 
of \[Table API, SQL\] and \[stream, batch\] input.**
-
-Dependency Structure
---------------------
-
-Starting from Flink 1.9, Flink provides two different planner implementations 
for evaluating Table & SQL API programs: the Blink planner and the old planner 
that was available before Flink 1.9. Planners are responsible for
-translating relational operators into an executable, optimized Flink job. Both 
of the planners come with different optimization rules and runtime classes.
-They may also differ in the set of supported features.
-
-<span class="label label-danger">Attention</span> For production use cases, we 
recommend the blink planner that has become the default planner since 1.11.
-
-All Table API and SQL components are bundled in the `flink-table` or 
`flink-table-blink` Maven artifacts.
-
-The following dependencies are relevant for most projects:
-
-* `flink-table-common`: A common module for extending the table ecosystem by 
custom functions, formats, etc.
-* `flink-table-api-java`: The Table & SQL API for pure table programs using 
the Java programming language (in early development stage, not recommended!).
-* `flink-table-api-scala`: The Table & SQL API for pure table programs using 
the Scala programming language (in early development stage, not recommended!).
-* `flink-table-api-java-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Java programming language.
-* `flink-table-api-scala-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Scala programming language.
-* `flink-table-planner`: The table program planner and runtime. This was the 
only planner of Flink before the 1.9 release. It's no longer recommended since 
Flink 1.11.
-* `flink-table-planner-blink`: The new Blink planner, which has become the 
default one since Flink 1.11.
-* `flink-table-runtime-blink`: The new Blink runtime.
-* `flink-table-uber`: Packages the API modules above plus the old planner into 
a distribution for most Table & SQL API use cases. The uber JAR file 
`flink-table-*.jar` is located in the `/lib` directory of a Flink release by 
default.
-* `flink-table-uber-blink`: Packages the API modules above plus the Blink 
specific modules into a distribution for most Table & SQL API use cases. The 
uber JAR file `flink-table-blink-*.jar` is located in the `/lib` directory of a 
Flink release by default.
-
-See the [common API](common.html) page for more information about how to 
switch between the old and new Blink planner in table programs.
+Apache Flink features two relational APIs - the Table API and SQL - for 
unified stream and batch
+processing. The Table API is a language-integrated query API for Java, Scala, 
and Python that
+allows the composition of queries from relational operators such as selection, 
filter, and join in
+a very intuitive way. Flink's SQL support is based on [Apache 
Calcite](https://calcite.apache.org)
+which implements the SQL standard. Queries specified in either interface have 
the same semantics
+and specify the same result regardless of whether the input is continuous 
(streaming) or bounded (batch).
+
+The Table API and SQL interfaces integrate seamlessly with each other and 
Flink's DataStream API. 
+You can easily switch between all APIs and libraries which build upon them.
+For instance, you can extract patterns from a Table using [Match Recognize]({% 
link dev/table/streaming/match_recognize.md %})
+and later use the DataStream API to build alerting based on the matched 
patterns.
+
+Table Planners
+--------------
+
+Table planners are responsible for translating relational operators into an 
executable, optimized Flink job.
+Flink supports two different planner implementations; the modern Blink planner 
and the legacy planner.
+For production use cases, we recommend the blink planner which has been the 
default planner since 1.11.

Review comment:
       ```suggestion
   For production use cases, we recommend the Blink planner which has been the 
default planner since 1.11.
   ```
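For context on the planner discussion in this hunk: the planner is selected through `EnvironmentSettings` when creating a `TableEnvironment`. A minimal sketch, assuming Flink 1.11+ and purely illustrative (not part of this diff):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PlannerSelection {
    public static void main(String[] args) {
        // Blink planner (the default since Flink 1.11), running in streaming mode.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();

        TableEnvironment tEnv = TableEnvironment.create(settings);
        // tEnv now uses the Blink planner; useOldPlanner() would select the legacy planner instead.
    }
}
```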

##########
File path: docs/dev/table/index.md
##########
@@ -25,93 +25,106 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Apache Flink features two relational APIs - the Table API and SQL - for 
unified stream and batch processing. The Table API is a language-integrated 
query API for Scala and Java that allows the composition of queries from 
relational operators such as selection, filter, and join in a very intuitive 
way. Flink's SQL support is based on [Apache 
Calcite](https://calcite.apache.org) which implements the SQL standard. Queries 
specified in either interface have the same semantics and specify the same 
result regardless whether the input is a batch input (DataSet) or a stream 
input (DataStream).
-
-The Table API and the SQL interfaces are tightly integrated with each other as 
well as Flink's DataStream and DataSet APIs. You can easily switch between all 
APIs and libraries which build upon the APIs. For instance, you can extract 
patterns from a DataStream using the [CEP library]({{ site.baseurl 
}}/dev/libs/cep.html) and later use the Table API to analyze the patterns, or 
you might scan, filter, and aggregate a batch table using a SQL query before 
running a [Gelly graph algorithm]({{ site.baseurl }}/dev/libs/gelly) on the 
preprocessed data.
-
-**Please note that the Table API and SQL are not yet feature complete and are 
being actively developed. Not all operations are supported by every combination 
of \[Table API, SQL\] and \[stream, batch\] input.**
-
-Dependency Structure
---------------------
-
-Starting from Flink 1.9, Flink provides two different planner implementations 
for evaluating Table & SQL API programs: the Blink planner and the old planner 
that was available before Flink 1.9. Planners are responsible for
-translating relational operators into an executable, optimized Flink job. Both 
of the planners come with different optimization rules and runtime classes.
-They may also differ in the set of supported features.
-
-<span class="label label-danger">Attention</span> For production use cases, we 
recommend the blink planner that has become the default planner since 1.11.
-
-All Table API and SQL components are bundled in the `flink-table` or 
`flink-table-blink` Maven artifacts.
-
-The following dependencies are relevant for most projects:
-
-* `flink-table-common`: A common module for extending the table ecosystem by 
custom functions, formats, etc.
-* `flink-table-api-java`: The Table & SQL API for pure table programs using 
the Java programming language (in early development stage, not recommended!).
-* `flink-table-api-scala`: The Table & SQL API for pure table programs using 
the Scala programming language (in early development stage, not recommended!).
-* `flink-table-api-java-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Java programming language.
-* `flink-table-api-scala-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Scala programming language.
-* `flink-table-planner`: The table program planner and runtime. This was the 
only planner of Flink before the 1.9 release. It's no longer recommended since 
Flink 1.11.
-* `flink-table-planner-blink`: The new Blink planner, which has become the 
default one since Flink 1.11.
-* `flink-table-runtime-blink`: The new Blink runtime.
-* `flink-table-uber`: Packages the API modules above plus the old planner into 
a distribution for most Table & SQL API use cases. The uber JAR file 
`flink-table-*.jar` is located in the `/lib` directory of a Flink release by 
default.
-* `flink-table-uber-blink`: Packages the API modules above plus the Blink 
specific modules into a distribution for most Table & SQL API use cases. The 
uber JAR file `flink-table-blink-*.jar` is located in the `/lib` directory of a 
Flink release by default.
-
-See the [common API](common.html) page for more information about how to 
switch between the old and new Blink planner in table programs.
+Apache Flink features two relational APIs - the Table API and SQL - for 
unified stream and batch
+processing. The Table API is a language-integrated query API for Java, Scala, 
and Python that
+allows the composition of queries from relational operators such as selection, 
filter, and join in
+a very intuitive way. Flink's SQL support is based on [Apache 
Calcite](https://calcite.apache.org)
+which implements the SQL standard. Queries specified in either interface have 
the same semantics
+and specify the same result regardless of whether the input is continuous 
(streaming) or bounded (batch).
+
+The Table API and SQL interfaces integrate seamlessly with each other and 
Flink's DataStream API. 
+You can easily switch between all APIs and libraries which build upon them.
+For instance, you can extract patterns from a Table using [Match Recognize]({% 
link dev/table/streaming/match_recognize.md %})

Review comment:
       ```suggestion
    For instance, you can detect patterns from a table using the [`MATCH_RECOGNIZE` 
clause]({% link dev/table/streaming/match_recognize.md %})
   ```
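To make the `MATCH_RECOGNIZE` reference concrete, here is a hedged sketch that detects a simple "dip and recovery" price pattern. The `Ticker` table, its columns, and the `tableEnv` instance are hypothetical and assumed to exist:

```java
// Assumes a TableEnvironment `tableEnv` with a registered "Ticker" table
// (columns symbol, price, rowtime) -- all names here are hypothetical.
Table vShapes = tableEnv.sqlQuery(
    "SELECT * FROM Ticker MATCH_RECOGNIZE (" +
    "  PARTITION BY symbol" +
    "  ORDER BY rowtime" +
    "  MEASURES A.price AS startPrice, LAST(C.price) AS recoveryPrice" +
    "  ONE ROW PER MATCH" +
    "  AFTER MATCH SKIP PAST LAST ROW" +
    "  PATTERN (A B+ C)" +
    "  DEFINE B AS B.price < A.price, C AS C.price > A.price" +
    ") AS T");
// The resulting Table can then be converted to a DataStream to drive alerting.
```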

##########
File path: docs/dev/table/index.md
##########
@@ -25,93 +25,106 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Apache Flink features two relational APIs - the Table API and SQL - for 
unified stream and batch processing. The Table API is a language-integrated 
query API for Scala and Java that allows the composition of queries from 
relational operators such as selection, filter, and join in a very intuitive 
way. Flink's SQL support is based on [Apache 
Calcite](https://calcite.apache.org) which implements the SQL standard. Queries 
specified in either interface have the same semantics and specify the same 
result regardless whether the input is a batch input (DataSet) or a stream 
input (DataStream).
-
-The Table API and the SQL interfaces are tightly integrated with each other as 
well as Flink's DataStream and DataSet APIs. You can easily switch between all 
APIs and libraries which build upon the APIs. For instance, you can extract 
patterns from a DataStream using the [CEP library]({{ site.baseurl 
}}/dev/libs/cep.html) and later use the Table API to analyze the patterns, or 
you might scan, filter, and aggregate a batch table using a SQL query before 
running a [Gelly graph algorithm]({{ site.baseurl }}/dev/libs/gelly) on the 
preprocessed data.
-
-**Please note that the Table API and SQL are not yet feature complete and are 
being actively developed. Not all operations are supported by every combination 
of \[Table API, SQL\] and \[stream, batch\] input.**
-
-Dependency Structure
---------------------
-
-Starting from Flink 1.9, Flink provides two different planner implementations 
for evaluating Table & SQL API programs: the Blink planner and the old planner 
that was available before Flink 1.9. Planners are responsible for
-translating relational operators into an executable, optimized Flink job. Both 
of the planners come with different optimization rules and runtime classes.
-They may also differ in the set of supported features.
-
-<span class="label label-danger">Attention</span> For production use cases, we 
recommend the blink planner that has become the default planner since 1.11.
-
-All Table API and SQL components are bundled in the `flink-table` or 
`flink-table-blink` Maven artifacts.
-
-The following dependencies are relevant for most projects:
-
-* `flink-table-common`: A common module for extending the table ecosystem by 
custom functions, formats, etc.
-* `flink-table-api-java`: The Table & SQL API for pure table programs using 
the Java programming language (in early development stage, not recommended!).
-* `flink-table-api-scala`: The Table & SQL API for pure table programs using 
the Scala programming language (in early development stage, not recommended!).
-* `flink-table-api-java-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Java programming language.
-* `flink-table-api-scala-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Scala programming language.
-* `flink-table-planner`: The table program planner and runtime. This was the 
only planner of Flink before the 1.9 release. It's no longer recommended since 
Flink 1.11.
-* `flink-table-planner-blink`: The new Blink planner, which has become the 
default one since Flink 1.11.
-* `flink-table-runtime-blink`: The new Blink runtime.
-* `flink-table-uber`: Packages the API modules above plus the old planner into 
a distribution for most Table & SQL API use cases. The uber JAR file 
`flink-table-*.jar` is located in the `/lib` directory of a Flink release by 
default.
-* `flink-table-uber-blink`: Packages the API modules above plus the Blink 
specific modules into a distribution for most Table & SQL API use cases. The 
uber JAR file `flink-table-blink-*.jar` is located in the `/lib` directory of a 
Flink release by default.
-
-See the [common API](common.html) page for more information about how to 
switch between the old and new Blink planner in table programs.
+Apache Flink features two relational APIs - the Table API and SQL - for 
unified stream and batch
+processing. The Table API is a language-integrated query API for Java, Scala, 
and Python that
+allows the composition of queries from relational operators such as selection, 
filter, and join in
+a very intuitive way. Flink's SQL support is based on [Apache 
Calcite](https://calcite.apache.org)
+which implements the SQL standard. Queries specified in either interface have 
the same semantics
+and specify the same result regardless of whether the input is continuous 
(streaming) or bounded (batch).
+
+The Table API and SQL interfaces integrate seamlessly with each other and 
Flink's DataStream API. 
+You can easily switch between all APIs and libraries which build upon them.
+For instance, you can extract patterns from a Table using [Match Recognize]({% 
link dev/table/streaming/match_recognize.md %})
+and later use the DataStream API to build alerting based on the matched 
patterns.
+
+Table Planners
+--------------
+
+Table planners are responsible for translating relational operators into an 
executable, optimized Flink job.
+Flink supports two different planner implementations; the modern Blink planner 
and the legacy planner.
+For production use cases, we recommend the blink planner which has been the 
default planner since 1.11.
+See the [common API]({% link dev/table/common.md %}) page for more information 
on how to switch between the two planners.
 
 ### Table Program Dependencies
 
-Depending on the target programming language, you need to add the Java or 
Scala API to a project in order to use the Table API & SQL for defining 
pipelines:
+Depending on the target programming language, you need to add the Java or 
Scala API to a project

Review comment:
       mention Python here as well?
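As a side note on what the bridge artifact provides (for Python, the `apache-flink` package from pip plays the same role): with `flink-table-api-java-bridge` on the classpath, a table environment can be created on top of a DataStream program. A minimal sketch, illustrative only:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

import static org.apache.flink.table.api.Expressions.$;

public class BridgeExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Turn a DataStream into a Table; "f0" is the default column name of an atomic String stream.
        Table words = tableEnv.fromDataStream(env.fromElements("alice", "bob", "alice"));
        Table counts = words.groupBy($("f0")).select($("f0"), $("f0").count());

        counts.execute().print();
    }
}
```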

##########
File path: docs/dev/table/index.md
##########
@@ -25,93 +25,106 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Apache Flink features two relational APIs - the Table API and SQL - for 
unified stream and batch processing. The Table API is a language-integrated 
query API for Scala and Java that allows the composition of queries from 
relational operators such as selection, filter, and join in a very intuitive 
way. Flink's SQL support is based on [Apache 
Calcite](https://calcite.apache.org) which implements the SQL standard. Queries 
specified in either interface have the same semantics and specify the same 
result regardless whether the input is a batch input (DataSet) or a stream 
input (DataStream).
-
-The Table API and the SQL interfaces are tightly integrated with each other as 
well as Flink's DataStream and DataSet APIs. You can easily switch between all 
APIs and libraries which build upon the APIs. For instance, you can extract 
patterns from a DataStream using the [CEP library]({{ site.baseurl 
}}/dev/libs/cep.html) and later use the Table API to analyze the patterns, or 
you might scan, filter, and aggregate a batch table using a SQL query before 
running a [Gelly graph algorithm]({{ site.baseurl }}/dev/libs/gelly) on the 
preprocessed data.
-
-**Please note that the Table API and SQL are not yet feature complete and are 
being actively developed. Not all operations are supported by every combination 
of \[Table API, SQL\] and \[stream, batch\] input.**
-
-Dependency Structure
---------------------
-
-Starting from Flink 1.9, Flink provides two different planner implementations 
for evaluating Table & SQL API programs: the Blink planner and the old planner 
that was available before Flink 1.9. Planners are responsible for
-translating relational operators into an executable, optimized Flink job. Both 
of the planners come with different optimization rules and runtime classes.
-They may also differ in the set of supported features.
-
-<span class="label label-danger">Attention</span> For production use cases, we 
recommend the blink planner that has become the default planner since 1.11.
-
-All Table API and SQL components are bundled in the `flink-table` or 
`flink-table-blink` Maven artifacts.
-
-The following dependencies are relevant for most projects:
-
-* `flink-table-common`: A common module for extending the table ecosystem by 
custom functions, formats, etc.
-* `flink-table-api-java`: The Table & SQL API for pure table programs using 
the Java programming language (in early development stage, not recommended!).
-* `flink-table-api-scala`: The Table & SQL API for pure table programs using 
the Scala programming language (in early development stage, not recommended!).
-* `flink-table-api-java-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Java programming language.
-* `flink-table-api-scala-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Scala programming language.
-* `flink-table-planner`: The table program planner and runtime. This was the 
only planner of Flink before the 1.9 release. It's no longer recommended since 
Flink 1.11.
-* `flink-table-planner-blink`: The new Blink planner, which has become the 
default one since Flink 1.11.
-* `flink-table-runtime-blink`: The new Blink runtime.
-* `flink-table-uber`: Packages the API modules above plus the old planner into 
a distribution for most Table & SQL API use cases. The uber JAR file 
`flink-table-*.jar` is located in the `/lib` directory of a Flink release by 
default.
-* `flink-table-uber-blink`: Packages the API modules above plus the Blink 
specific modules into a distribution for most Table & SQL API use cases. The 
uber JAR file `flink-table-blink-*.jar` is located in the `/lib` directory of a 
Flink release by default.
-
-See the [common API](common.html) page for more information about how to 
switch between the old and new Blink planner in table programs.
+Apache Flink features two relational APIs - the Table API and SQL - for 
unified stream and batch
+processing. The Table API is a language-integrated query API for Java, Scala, 
and Python that
+allows the composition of queries from relational operators such as selection, 
filter, and join in
+a very intuitive way. Flink's SQL support is based on [Apache 
Calcite](https://calcite.apache.org)
+which implements the SQL standard. Queries specified in either interface have 
the same semantics
+and specify the same result regardless of whether the input is continuous 
(streaming) or bounded (batch).
+
+The Table API and SQL interfaces integrate seamlessly with each other and 
Flink's DataStream API. 
+You can easily switch between all APIs and libraries which build upon them.
+For instance, you can extract patterns from a Table using [Match Recognize]({% 
link dev/table/streaming/match_recognize.md %})
+and later use the DataStream API to build alerting based on the matched 
patterns.
+
+Table Planners
+--------------
+
+Table planners are responsible for translating relational operators into an 
executable, optimized Flink job.
+Flink supports two different planner implementations; the modern Blink planner 
and the legacy planner.
+For production use cases, we recommend the blink planner which has been the 
default planner since 1.11.
+See the [common API]({% link dev/table/common.md %}) page for more information 
on how to switch between the two planners.
 
 ### Table Program Dependencies
 
-Depending on the target programming language, you need to add the Java or 
Scala API to a project in order to use the Table API & SQL for defining 
pipelines:
+Depending on the target programming language, you need to add the Java or 
Scala API to a project
+in order to use the Table API & SQL for defining pipelines.
 
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
 {% highlight xml %}
-<!-- Either... -->
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-table-api-java-bridge{{ site.scala_version_suffix 
}}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
-<!-- or... -->
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight xml %}
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-table-api-scala-bridge{{ site.scala_version_suffix 
}}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
 {% endhighlight %}
+</div>
+<div data-lang="python">
+{% highlight bash %}
+{% if site.is_stable %}
+$ python -m pip install apache-flink=={{ site.version }}
+{% else %}
+$ python -m pip install apache-flink
+{% endif %}
+{% endhighlight %}
+</div>
+</div>
 
-Additionally, if you want to run the Table API & SQL programs locally within 
your IDE, you must add one of the
-following set of modules, depending which planner you want to use:
+Additionally, if you want to run the Table API & SQL programs locally within 
your IDE, you must add the
+following set of modules, depending which planner you want to use.
 
+<div class="codetabs" markdown="1">
+<div data-lang="Blink Planner" markdown="1">
 {% highlight xml %}
-<!-- Either... (for the old planner that was available before Flink 1.9) -->
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-planner{{ site.scala_version_suffix }}</artifactId>
+  <artifactId>flink-table-planner-blink{{ site.scala_version_suffix 
}}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
-<!-- or.. (for the new Blink planner) -->
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-planner-blink{{ site.scala_version_suffix 
}}</artifactId>
+  <artifactId>flink-streaming-scala{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
 {% endhighlight %}
-
-Internally, parts of the table ecosystem are implemented in Scala. Therefore, 
please make sure to add the following dependency for both batch and streaming 
applications:
-
+</div>
+<div data-lang="Legacy Planner" markdown="1">
 {% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-table-planner{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version}}</version>
+  <scope>provided</scope>
+</dependency>
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-streaming-scala{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
 {% endhighlight %}
+</div>
+</div>
 
 ### Extension Dependencies
 
-If you want to implement a [custom format]({{ site.baseurl 
}}/dev/table/sourceSinks.html#define-a-tablefactory) for interacting with Kafka 
or a set of [user-defined functions]({{ site.baseurl 
}}/dev/table/functions/systemFunctions.html), the following dependency is 
sufficient and can be used for JAR files for the SQL Client:
+If you want to implement a [custom format]({% link dev/table/sourceSinks.md 
%}#define-a-tablefactory) 
+for (de)serializing rows or a set of [user-defined functions]({% link 
dev/table/functions/systemFunctions.md %}),

Review comment:
       ```suggestion
   for (de)serializing rows or a set of [user-defined functions]({% link 
dev/table/functions/udfs.md %}),
   ```
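For context, a minimal hedged sketch of such a user-defined function; the class name is hypothetical and only the Table API modules are needed to compile it:

```java
import org.apache.flink.table.functions.ScalarFunction;

// Hypothetical scalar function; compiles against the Table API modules alone.
public class HashCodeFunction extends ScalarFunction {
    public int eval(String s) {
        return s == null ? 0 : s.hashCode();
    }
}
```

It could then be registered with `tableEnv.createTemporarySystemFunction("MyHash", HashCodeFunction.class)` and invoked from SQL as `MyHash(some_column)`.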

##########
File path: docs/dev/table/index.md
##########
@@ -25,93 +25,106 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Apache Flink features two relational APIs - the Table API and SQL - for 
unified stream and batch processing. The Table API is a language-integrated 
query API for Scala and Java that allows the composition of queries from 
relational operators such as selection, filter, and join in a very intuitive 
way. Flink's SQL support is based on [Apache 
Calcite](https://calcite.apache.org) which implements the SQL standard. Queries 
specified in either interface have the same semantics and specify the same 
result regardless whether the input is a batch input (DataSet) or a stream 
input (DataStream).
-
-The Table API and the SQL interfaces are tightly integrated with each other as 
well as Flink's DataStream and DataSet APIs. You can easily switch between all 
APIs and libraries which build upon the APIs. For instance, you can extract 
patterns from a DataStream using the [CEP library]({{ site.baseurl 
}}/dev/libs/cep.html) and later use the Table API to analyze the patterns, or 
you might scan, filter, and aggregate a batch table using a SQL query before 
running a [Gelly graph algorithm]({{ site.baseurl }}/dev/libs/gelly) on the 
preprocessed data.
-
-**Please note that the Table API and SQL are not yet feature complete and are 
being actively developed. Not all operations are supported by every combination 
of \[Table API, SQL\] and \[stream, batch\] input.**
-
-Dependency Structure
---------------------
-
-Starting from Flink 1.9, Flink provides two different planner implementations 
for evaluating Table & SQL API programs: the Blink planner and the old planner 
that was available before Flink 1.9. Planners are responsible for
-translating relational operators into an executable, optimized Flink job. Both 
of the planners come with different optimization rules and runtime classes.
-They may also differ in the set of supported features.
-
-<span class="label label-danger">Attention</span> For production use cases, we 
recommend the blink planner that has become the default planner since 1.11.
-
-All Table API and SQL components are bundled in the `flink-table` or 
`flink-table-blink` Maven artifacts.
-
-The following dependencies are relevant for most projects:
-
-* `flink-table-common`: A common module for extending the table ecosystem by 
custom functions, formats, etc.
-* `flink-table-api-java`: The Table & SQL API for pure table programs using 
the Java programming language (in early development stage, not recommended!).
-* `flink-table-api-scala`: The Table & SQL API for pure table programs using 
the Scala programming language (in early development stage, not recommended!).
-* `flink-table-api-java-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Java programming language.
-* `flink-table-api-scala-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Scala programming language.
-* `flink-table-planner`: The table program planner and runtime. This was the 
only planner of Flink before the 1.9 release. It's no longer recommended since 
Flink 1.11.
-* `flink-table-planner-blink`: The new Blink planner, which has become the 
default one since Flink 1.11.
-* `flink-table-runtime-blink`: The new Blink runtime.
-* `flink-table-uber`: Packages the API modules above plus the old planner into 
a distribution for most Table & SQL API use cases. The uber JAR file 
`flink-table-*.jar` is located in the `/lib` directory of a Flink release by 
default.
-* `flink-table-uber-blink`: Packages the API modules above plus the Blink 
specific modules into a distribution for most Table & SQL API use cases. The 
uber JAR file `flink-table-blink-*.jar` is located in the `/lib` directory of a 
Flink release by default.
-
-See the [common API](common.html) page for more information about how to 
switch between the old and new Blink planner in table programs.
+Apache Flink features two relational APIs - the Table API and SQL - for 
unified stream and batch
+processing. The Table API is a language-integrated query API for Java, Scala, 
and Python that
+allows the composition of queries from relational operators such as selection, 
filter, and join in
+a very intuitive way. Flink's SQL support is based on [Apache 
Calcite](https://calcite.apache.org)
+which implements the SQL standard. Queries specified in either interface have 
the same semantics
+and specify the same result regardless of whether the input is continuous 
(streaming) or bounded (batch).
+
+The Table API and SQL interfaces integrate seamlessly with each other and 
Flink's DataStream API. 
+You can easily switch between all APIs and libraries which build upon them.
+For instance, you can extract patterns from a Table using [Match Recognize]({% 
link dev/table/streaming/match_recognize.md %})
+and later use the DataStream API to build alerting based on the matched 
patterns.
+
+Table Planners
+--------------
+
+Table planners are responsible for translating relational operators into an 
executable, optimized Flink job.
+Flink supports two different planner implementations; the modern Blink planner 
and the legacy planner.
+For production use cases, we recommend the blink planner which has been the 
default planner since 1.11.
+See the [common API]({% link dev/table/common.md %}) page for more information 
on how to switch between the two planners.
 
 ### Table Program Dependencies
 
-Depending on the target programming language, you need to add the Java or 
Scala API to a project in order to use the Table API & SQL for defining 
pipelines:
+Depending on the target programming language, you need to add the Java or 
Scala API to a project
+in order to use the Table API & SQL for defining pipelines.
 
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
 {% highlight xml %}
-<!-- Either... -->
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-table-api-java-bridge{{ site.scala_version_suffix 
}}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
-<!-- or... -->
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight xml %}
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-table-api-scala-bridge{{ site.scala_version_suffix 
}}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
 {% endhighlight %}
+</div>
+<div data-lang="python">
+{% highlight bash %}
+{% if site.is_stable %}
+$ python -m pip install apache-flink=={{ site.version }}
+{% else %}
+$ python -m pip install apache-flink
+{% endif %}
+{% endhighlight %}
+</div>
+</div>
 
-Additionally, if you want to run the Table API & SQL programs locally within 
your IDE, you must add one of the
-following set of modules, depending which planner you want to use:
+Additionally, if you want to run the Table API & SQL programs locally within 
your IDE, you must add the
+following set of modules, depending which planner you want to use.
 
+<div class="codetabs" markdown="1">
+<div data-lang="Blink Planner" markdown="1">
 {% highlight xml %}
-<!-- Either... (for the old planner that was available before Flink 1.9) -->
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-planner{{ site.scala_version_suffix }}</artifactId>
+  <artifactId>flink-table-planner-blink{{ site.scala_version_suffix 
}}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
-<!-- or.. (for the new Blink planner) -->
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-planner-blink{{ site.scala_version_suffix 
}}</artifactId>
+  <artifactId>flink-streaming-scala{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
 {% endhighlight %}
-
-Internally, parts of the table ecosystem are implemented in Scala. Therefore, 
please make sure to add the following dependency for both batch and streaming 
applications:
-
+</div>
+<div data-lang="Legacy Planner" markdown="1">
 {% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-table-planner{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version}}</version>
+  <scope>provided</scope>
+</dependency>
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-streaming-scala{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
   <scope>provided</scope>
 </dependency>
 {% endhighlight %}
+</div>
+</div>
 
 ### Extension Dependencies
 
-If you want to implement a [custom format]({{ site.baseurl 
}}/dev/table/sourceSinks.html#define-a-tablefactory) for interacting with Kafka 
or a set of [user-defined functions]({{ site.baseurl 
}}/dev/table/functions/systemFunctions.html), the following dependency is 
sufficient and can be used for JAR files for the SQL Client:
+If you want to implement a [custom format]({% link dev/table/sourceSinks.md 
%}#define-a-tablefactory) 

Review comment:
       ```suggestion
   If you want to implement a [custom format or connector]({% link 
dev/table/sourceSinks.md %}) 
   ```
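For readers wondering what such a custom connector involves, here is a rough skeleton of a `DynamicTableSourceFactory` that only needs `flink-table-common` (plus `flink-core` for `ConfigOption`) at compile time; every name and option below is a hypothetical placeholder, not part of this PR:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.factories.DynamicTableSourceFactory;

// Hypothetical skeleton of a custom connector factory.
public class SocketSourceFactory implements DynamicTableSourceFactory {

    private static final ConfigOption<String> HOSTNAME =
            ConfigOptions.key("hostname").stringType().noDefaultValue();

    @Override
    public String factoryIdentifier() {
        // matched against 'connector' = 'socket' in a CREATE TABLE statement
        return "socket";
    }

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        Set<ConfigOption<?>> options = new HashSet<>();
        options.add(HOSTNAME);
        return options;
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        return Collections.emptySet();
    }

    @Override
    public DynamicTableSource createDynamicTableSource(Context context) {
        // a real implementation would validate the options and build the source here
        throw new UnsupportedOperationException("sketch only");
    }
}
```

A real connector would also register the factory via Java SPI in `META-INF/services/org.apache.flink.table.factories.Factory` so the planner can discover it from the `'connector'` option in DDL.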




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

