slinkydeveloper commented on a change in pull request #18134:
URL: https://github.com/apache/flink/pull/18134#discussion_r773735973



##########
File path: flink-table/README.md
##########
@@ -0,0 +1,62 @@
+# Table API & SQL
+
+Apache Flink features two relational APIs, the Table API and SQL, for unified stream and batch processing.
+The Table API is a language-integrated query API for Java, Scala, and Python that allows composing queries from relational operators such as selection, filter, and join in an intuitive way.
+
+For more details on how to use it, check out the [documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/overview/).
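+
+For illustration, here is a minimal Java sketch of a Table API program. It is a sketch only: the table name `Orders`, its columns, and the use of the built-in `datagen` connector are assumptions made for the example, not part of any module described below.
+
+```java
+import static org.apache.flink.table.api.Expressions.$;
+
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableEnvironment;
+
+public class TableApiExample {
+    public static void main(String[] args) {
+        // Create a table environment in streaming mode.
+        TableEnvironment tEnv =
+                TableEnvironment.create(
+                        EnvironmentSettings.newInstance().inStreamingMode().build());
+
+        // Register a hypothetical source table backed by the built-in
+        // `datagen` connector, which generates random rows.
+        tEnv.executeSql(
+                "CREATE TABLE Orders (user_name STRING, amount INT) "
+                        + "WITH ('connector' = 'datagen', 'number-of-rows' = '5')");
+
+        // Compose a query from relational operators: selection and filter.
+        Table result =
+                tEnv.from("Orders")
+                        .filter($("amount").isGreater(10))
+                        .select($("user_name"), $("amount"));
+
+        result.execute().print();
+    }
+}
+```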
+
+## Modules
+
+### Common
+
+* `flink-table-common`:
+  * Type system definition and UDF stack
+  * Internal data type definitions
+  * `Factory` definitions for catalogs, formats, connectors
+  * Other core APIs such as `Schema` (illustrated in the sketch below)
+  * Utilities for dealing with the type system, internal data types, and printing
+  * When implementing a format, you usually only need to depend on this module
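+
+As a quick illustration of the core APIs in this module, a minimal sketch using `DataTypes` and `Schema` (the column names are hypothetical):
+
+```java
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.types.DataType;
+
+public class CommonApiExample {
+    public static void main(String[] args) {
+        // Build a row type with the type system from flink-table-common.
+        DataType rowType =
+                DataTypes.ROW(
+                        DataTypes.FIELD("id", DataTypes.BIGINT()),
+                        DataTypes.FIELD("name", DataTypes.STRING()));
+
+        // Declare the same columns as a table schema.
+        Schema schema =
+                Schema.newBuilder()
+                        .column("id", DataTypes.BIGINT())
+                        .column("name", DataTypes.STRING())
+                        .build();
+
+        System.out.println(rowType);
+        System.out.println(schema);
+    }
+}
+```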
+
+### API
+
+* `flink-table-api-java`:
+  * Java APIs for Table API and SQL
+  * Package `org.apache.flink.table.delegation`, which serves as the entry point for all planner capabilities
+* `flink-table-api-scala`: Scala APIs for Table API and SQL
+* `flink-table-api-bridge-base`: Base classes for APIs to bridge between Table API and DataStream API
+* `flink-table-api-java-bridge`:
+  * Java APIs to bridge between Table API and DataStream API
+  * When implementing a connector, you usually only need to depend on this module in order to bridge your connector implementation, developed with DataStream, to the Table API (see the sketch after this list)
+* `flink-table-api-scala-bridge`: Scala APIs to bridge between Table API and DataStream API
+* `flink-table-api-uber`: Uber JAR bundling `flink-table-common` and all the Java API modules, including third-party dependencies.
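+
+As referenced above, a minimal sketch of the bridging APIs in `flink-table-api-java-bridge`, converting a DataStream to a Table and back (the sample elements are hypothetical):
+
+```java
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.types.Row;
+
+public class BridgeExample {
+    public static void main(String[] args) throws Exception {
+        StreamExecutionEnvironment env =
+                StreamExecutionEnvironment.getExecutionEnvironment();
+        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
+
+        // DataStream -> Table
+        DataStream<String> names = env.fromElements("alice", "bob");
+        Table table = tEnv.fromDataStream(names);
+
+        // Table -> DataStream
+        DataStream<Row> rows = tEnv.toDataStream(table);
+        rows.print();
+
+        env.execute();
+    }
+}
+```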
+
+### Runtime
+
+* `flink-table-code-splitter`: Tool to split generated Java code so that each generated method stays below the JVM's 64 KB bytecode limit.
+* `flink-table-runtime`:
+  * Operator implementations
+  * Built-in function implementations
+  * Type system implementation, including readers/writers, converters, and utilities
+  * Raw format
+  * The produced JAR includes all the classes from this module and `flink-table-code-splitter`, including third-party dependencies
+
+### Parser and planner
+
+* `flink-sql-parser`: Default ANSI SQL parser implementation
+* `flink-sql-parser-hive`: Hive SQL dialect parser implementation
+* `flink-table-planner`:
+  * AST and semantic tree
+  * SQL validator
+  * Planner and rules implementation
+  * Two JARs are produced: the one without a classifier bundles all the classes from this module together with the two parsers, including third-party dependencies; the other, classified as `loader-bundle`, extends the first with the Scala dependencies.
+* `flink-table-planner-loader`: Loader for `flink-table-planner` that loads the planner in a separate classloader, isolating the Scala version used to compile the planner.
+
+### SQL client
+
+* `flink-sql-client`: CLI tool to submit queries to a Flink cluster
+
+### Notes
+
+No module except `flink-table-planner` should depend on `flink-table-runtime` in the production classpath,

Review comment:
       For testing you need to depend on `table-planner-loader`, not on `table-planner`. I added a sentence about testing.



