zentol commented on a change in pull request #11983:
URL: https://github.com/apache/flink/pull/11983#discussion_r422899707
##########
File path: flink-end-to-end-tests/pom.xml
##########
@@ -153,6 +170,32 @@ under the License.
 	<build>
 		<plugins>
+			<plugin>
+				<artifactId>maven-resources-plugin</artifactId>
+				<!-- <version>3.1.0</version> -->

Review comment:
   ?

##########
File path: flink-dist/pom.xml
##########
@@ -460,50 +460,6 @@ under the License.
 		</dependencies>
 	</profile>
-	<profile>
-		<!-- Copies that shaded Hadoop uber jar to the dist folder. -->
-		<id>include-hadoop</id>

Review comment:
   still referenced in the azure files and `run-nightly-tests.sh`

##########
File path: flink-end-to-end-tests/pom.xml
##########
@@ -153,6 +170,32 @@ under the License.
 	<build>
 		<plugins>
+			<plugin>
+				<artifactId>maven-resources-plugin</artifactId>
+				<!-- <version>3.1.0</version> -->
+				<executions>
+					<execution>
+						<id>copy-resources</id>
+						<!-- here the phase you need -->

Review comment:
   ```suggestion
   ```
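For reference, a cleaned-up `copy-resources` execution would pin the lifecycle phase explicitly instead of carrying the archetype placeholder comments flagged above. The following is only a minimal sketch; the phase, output directory, and resource directory are illustrative assumptions, not values taken from the PR:

```xml
<plugin>
    <artifactId>maven-resources-plugin</artifactId>
    <executions>
        <execution>
            <id>copy-resources</id>
            <!-- bind the goal to an explicit phase; process-test-resources is just an example -->
            <phase>process-test-resources</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <!-- hypothetical source/target paths, for illustration only -->
                <outputDirectory>${project.build.directory}/e2e-resources</outputDirectory>
                <resources>
                    <resource>
                        <directory>src/test/resources</directory>
                    </resource>
                </resources>
            </configuration>
        </execution>
    </executions>
</plugin>
```

Note that with `<version>` omitted (or commented out, as in the diff), Maven resolves the plugin version from the parent pom's `pluginManagement` section, which may be what the "?" above is getting at.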
##########
File path: flink-end-to-end-tests/pom.xml
##########
@@ -255,6 +298,21 @@ under the License.
 				</execution>
 			</executions>
 		</plugin>
+		<plugin>
+			<groupId>org.apache.maven.plugins</groupId>
+			<artifactId>maven-enforcer-plugin</artifactId>

Review comment:
   do we still need this despite excluding all dependencies?

##########
File path: flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionCapacitySchedulerITCase.java
##########
@@ -155,9 +155,7 @@ public static void tearDown() throws Exception {
 	public void testStartYarnSessionClusterInQaTeamQueue() throws Exception {
 		runTest(() -> runWithArgs(new String[]{
 				"-j", flinkUberjar.getAbsolutePath(),
-				"-t", flinkLibFolder.getAbsolutePath(),
-				"-t", flinkShadedHadoopDir.getAbsolutePath(),
-				"-jm", "768m",
+				"-t", flinkLibFolder.getAbsolutePath(), "-jm", "768m",

Review comment:
   ```suggestion
   				"-t", flinkLibFolder.getAbsolutePath(),
   				"-jm", "768m",
   ```

##########
File path: docs/ops/deployment/hadoop.md
##########
@@ -120,4 +88,44 @@ This way it should work both in local and cluster run where the provided depende
 To run or debug an application in IntelliJ Idea the provided dependencies can be included
 to the class path in the "Run|Edit Configurations" window.
 
+
+2) Putting the required jar files into /lib directory of the Flink distribution
+Option 1) requires very little work, integrates nicely with existing Hadoop setups and should be the
+preferred approach.
+However, Hadoop has a large dependency footprint that increases the risk for dependency conflicts to occur.
+If this happens, please refer to option 2).
+
+The following subsections explains these approaches in detail.
+
+## Using `flink-shaded-hadoop-2-uber` jar for resolving dependency conflicts (legacy)
+
+<div class="alert alert-info" markdown="span">
+  <strong>Warning:</strong> Starting from Flink 1.11, using `flink-shaded-hadoop-2-uber` releases is not officially supported
+  by the Flink project anymore. Users are advised to provide Hadoop dependencies through `HADOOP_CLASSPATH` (see above).
+</div>
+
+
+The Flink project used to release Hadoop distributions for specific versions, that relocate or exclude several dependencies

Review comment:
   add a specific release as a date, instead of "used to"

##########
File path: docs/ops/deployment/hadoop.md
##########
@@ -120,4 +88,44 @@ This way it should work both in local and cluster run where the provided depende
 To run or debug an application in IntelliJ Idea the provided dependencies can be included
 to the class path in the "Run|Edit Configurations" window.
 
+
+2) Putting the required jar files into /lib directory of the Flink distribution

Review comment:
   This paragraph isn't properly integrated with the current documentation. I guess it should be removed since the section below subsumes it?

##########
File path: flink-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/util/HBaseTestingClusterAutoStarter.java
##########
@@ -142,6 +144,11 @@ private static Configuration initialize(Configuration conf) {
 	@BeforeClass
 	public static void setUp() throws Exception {
+		// HBase 1.4 does not work with Hadoop 3
+		// because it uses Guava 12.0.1, Hadoop 3 uses Guava 27.0-jre.
+		// There is not Guava version in between that works with both.

Review comment:
   ```suggestion
   		// There is no Guava version in between that works with both.
   ```

##########
File path: flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/SQLClientKafkaITCase.java
##########
@@ -93,9 +95,12 @@ private Path result;
 	private Path sqlClientSessionConf;
 
+	private static final DownloadCache downloadCache = DownloadCache.get();

Review comment:
   add `@ClassRule`
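For context, JUnit 4 applies a `@ClassRule` only to fields that are `public`, `static`, and `final`, so the requested change would look roughly like the sketch below (assuming `DownloadCache` implements JUnit's `TestRule`, as Flink's other rule-based test utilities do; the import path is an assumption):

```java
import org.apache.flink.tests.util.cache.DownloadCache; // package assumed
import org.junit.ClassRule;

public class SQLClientKafkaITCase {

    // JUnit 4 only honors @ClassRule on public static final fields,
    // so the private field also needs its visibility widened;
    // the annotation lets the cache hook into the class-level test lifecycle.
    @ClassRule
    public static final DownloadCache downloadCache = DownloadCache.get();

    // ... rest of the test unchanged
}
```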