slinkydeveloper opened a new pull request #18134:
URL: https://github.com/apache/flink/pull/18134


   ## What is the purpose of the change
   
   The goal of this PR is to allow arbitrary Scala versions in the user API, hiding the Scala version of the planner from the Scala version of the API. This required some changes to the module organization, as described in the changelog. I also included a README that gives Table API developers (not end users) an overview of the new package organization: https://github.com/apache/flink/commit/edf7350c7156e8bc96136d2f5b46114daa3ab344.
 
   
   This PR fixes the following issues:
   
   *  https://issues.apache.org/jira/browse/FLINK-25128
   * https://issues.apache.org/jira/browse/FLINK-25130
   * https://issues.apache.org/jira/browse/FLINK-25131
   
   I will open a follow-up PR to take care of the documentation.
   
   ## Changelog
   
   * `PlannerBase` now uses its own class's classloader to load the `ParserFactory`. With `flink-table-planner_${scala.version}` this makes essentially no difference, while with `flink-table-planner-loader` it first tries to load from its own classpath (the `ComponentClassLoader`) and only then from the parent classpath (see the factory-discovery sketch after this list).
   * Removed the `flink-table-uber` module and replaced it with `flink-table-api-uber`, which ships only the Java API related packages in a single uber JAR.
   * `flink-table-runtime` now ships Janino and the code-splitter in its uber JAR.
   * The `flink-table-planner` pom has been reworked, removing dependencies already shipped by flink-runtime and flink-dist, such as Jackson and commons-lang3. It now generates two uber JARs: one is pretty much the same as what we ship on master today, and the other, named `loader-bundle`, also includes Scala so that it can be used by `flink-table-planner-loader`.
   * Introduced `flink-table-planner-loader`. It provides implementations of the factories in `org.apache.flink.table.delegation` that use an isolated classloader to load the actual implementations from the `flink-table-planner` `loader-bundle` JAR.
   * Reworked the SQL Client dependencies and removed their Scala suffix.
   * Reworked the Flink distribution. We now ship `flink-table-api-uber`, `flink-table-runtime`, `flink-table-planner-loader` and `flink-cep` as separate JARs in `/lib`, while we continue to ship `flink-table-planner_${scala.version}` in `/opt`, so that users can swap `flink-table-planner-loader` with it in case they need to. This is the fundamental change for users, and it will be documented in the follow-up doc PR (see also the user-program sketch after this list).
   * Used planner-loader wherever possible in e2e tests and examples, and included a new test script that swaps the planner in order to check that both JARs work fine.
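   
   To illustrate the `PlannerBase` classloader change from the list above, here is a minimal, hypothetical sketch. It uses plain `ServiceLoader` and a made-up `SomeFactory` interface rather than Flink's actual factory utilities and `ParserFactory`; the point is only which classloader is used for the lookup.
   
   ```java
   import java.util.Iterator;
   import java.util.ServiceLoader;
   
   public final class PlannerFactoryLookupSketch {
   
       /** Hypothetical stand-in for an SPI such as ParserFactory. */
       public interface SomeFactory {}
   
       public static SomeFactory discover() {
           // Use the classloader that defined this class, not the thread context
           // classloader. Inside flink-table-planner-loader this is the isolated
           // ComponentClassLoader, so the loader-bundle classpath is searched first
           // and the parent classpath only afterwards. With
           // flink-table-planner_${scala.version} both classloaders coincide, so
           // the behaviour does not change.
           ClassLoader plannerClassLoader = PlannerFactoryLookupSketch.class.getClassLoader();
           Iterator<SomeFactory> factories =
                   ServiceLoader.load(SomeFactory.class, plannerClassLoader).iterator();
           if (!factories.hasNext()) {
               throw new IllegalStateException("No SomeFactory implementation found");
           }
           return factories.next();
       }
   
       private PlannerFactoryLookupSketch() {}
   }
   ```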
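   
   To show what the distribution rework means for user code: a job written against the Java Table API stays exactly the same regardless of which planner JAR is present. The snippet below is a generic example, not code from this PR.
   
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   
   public class PlannerAgnosticJob {
       public static void main(String[] args) {
           // The planner is resolved at runtime through the delegation factories,
           // so this code runs unchanged with flink-table-planner-loader in /lib
           // (the new default) or with flink-table-planner_${scala.version}
           // swapped in from /opt.
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());
           tEnv.executeSql(
                   "CREATE TEMPORARY TABLE src (x INT) WITH ("
                           + "'connector' = 'datagen', 'number-of-rows' = '5')");
           tEnv.executeSql("SELECT x FROM src").print();
       }
   }
   ```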
   
   There are a couple of details worth mentioning in order to understand all the moving parts of this PR:
   
   * Some planner dependencies are relocated, others are not. Note that every relocation performed by flink-table-runtime needs to be performed by flink-table-planner as well.
   * The `ComponentClassLoader` used in `PlannerModule` allows only certain classes to be loaded from its parent classloader: all classes whose names start with `org.apache.flink` (which includes relocated dependencies), plus classes starting with one of the prefixes in `ownerClassPath` (a minimal sketch of this delegation rule follows below).
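   
   The following is a minimal sketch of that delegation rule, assuming a hypothetical `PrefixDelegatingClassLoader`; it is not the real `ComponentClassLoader`, just an illustration of parent-first prefixes versus component-first loading.
   
   ```java
   import java.net.URL;
   import java.net.URLClassLoader;
   
   public final class PrefixDelegatingClassLoader extends URLClassLoader {
   
       private final ClassLoader parentDelegate;
       private final String[] parentFirstPrefixes;
   
       public PrefixDelegatingClassLoader(
               URL[] componentClasspath, ClassLoader parent, String[] parentFirstPrefixes) {
           // Keep the real parent aside and pass null to the super constructor,
           // so that the delegation below is fully controlled by loadClass.
           super(componentClasspath, null);
           this.parentDelegate = parent;
           this.parentFirstPrefixes = parentFirstPrefixes;
       }
   
       @Override
       protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
           synchronized (getClassLoadingLock(name)) {
               Class<?> clazz = findLoadedClass(name);
               if (clazz == null) {
                   // JDK classes and allowed prefixes (e.g. "org.apache.flink") come
                   // from the parent; everything else must come from the component
                   // classpath, e.g. the planner loader-bundle JAR.
                   clazz = name.startsWith("java.") || hasParentFirstPrefix(name)
                           ? parentDelegate.loadClass(name)
                           : findClass(name);
               }
               if (resolve) {
                   resolveClass(clazz);
               }
               return clazz;
           }
       }
   
       private boolean hasParentFirstPrefix(String name) {
           for (String prefix : parentFirstPrefixes) {
               if (name.startsWith(prefix)) {
                   return true;
               }
           }
           return false;
       }
   }
   ```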
   
   ## Verifying this change
   
   A new e2e test has been included to check that both planner-loader and planner_${scala.version} work fine. All the previous e2e tests now run with planner-loader wherever possible.
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): yes
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no
   
   ## Documentation
   
     - Does this pull request introduce a new feature? yes
     - If yes, how is the feature documented? A doc PR will be opened as a follow-up
   

