Hello Gang, I'm trying to upgrade the Apache Hive project to Hadoop 3.3.0 so that Hive can build and run on JDK 9/11; Hive's current Hadoop dependency fails to run under JDK 9/11.
However, I ran into an issue with ProtobufRpcEngine2. It looks like Hadoop Common migrated its protobuf RPC engine from protobuf 2 to protobuf 3, and the protobuf 3 engine (ProtobufRpcEngine2) is now the default. However, the two engines cannot be loaded into the same JVM at the same time: each one's static initializer registers a request deserializer for RPC.RpcKind.RPC_PROTOCOL_BUFFER via Server.registerProtocolEngine, and the second registration is rejected. Is there a reason for this?

The Apache Hive project uses ProtobufRpcEngine (v1) for some internal RPC. Hive still compiles and runs independently, but there is a suite of integration tests in which Hive starts the HDFS MiniCluster in the same JVM as the Hive code. Since HDFS uses v2 and Hive uses v1, one of the two fails to start, depending on the order in which they are loaded.

https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java#L70-L74
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine2.java#L63-L67

Does this restriction need to be enforced in Hadoop Common? And is there a way to run the HDFS MiniCluster in its own class loader?
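If I'm reading those static initializers right, the clash can be reproduced without Hive at all. Here is a minimal sketch (the class name is mine, not from either project), assuming Server.registerProtocolEngine still rejects a second registration for the same RpcKind as it does on trunk:

// Force both engine classes to initialize in one JVM. Each static
// initializer registers a request deserializer for
// RPC.RpcKind.RPC_PROTOCOL_BUFFER via Server.registerProtocolEngine,
// so whichever class initializes second fails.
public class EngineClashRepro {
  public static void main(String[] args) throws Exception {
    // First initialization wins and registers RPC_PROTOCOL_BUFFER.
    Class.forName("org.apache.hadoop.ipc.ProtobufRpcEngine2");
    // The second throws ExceptionInInitializerError, wrapping the
    // re-registration failure from Server.registerProtocolEngine.
    Class.forName("org.apache.hadoop.ipc.ProtobufRpcEngine");
  }
}

Swapping the two Class.forName calls just flips which engine fails, which matches the order-dependent startup failures we see in the Hive integration tests.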
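The only workaround I can think of on the Hive side is to load the mini cluster through an isolated class loader, so that it initializes its own copy of org.apache.hadoop.ipc.Server with its own rpcKind registry. Below is a rough sketch of what I mean, not working Hive code: the jar directory path is a placeholder, and everything is driven reflectively because no Hadoop types can be shared across the class loader boundary. The MiniDFSCluster.Builder(Configuration) / build() / shutdown() calls are the actual MiniDFSCluster API.

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Arrays;

public class IsolatedMiniCluster {
  public static void main(String[] args) throws Exception {
    // Placeholder path: a directory holding the Hadoop jars the
    // mini cluster needs (hadoop-common, hadoop-hdfs, test jars, ...).
    File libDir = new File("/path/to/hadoop-libs");
    URL[] jars = Arrays.stream(libDir.listFiles((d, n) -> n.endsWith(".jar")))
        .map(f -> {
          try { return f.toURI().toURL(); }
          catch (Exception e) { throw new RuntimeException(e); }
        })
        .toArray(URL[]::new);

    // Parent is the platform loader (JDK 9+), so JDK classes resolve but
    // Hadoop classes are NOT delegated to the application class loader.
    // The isolated loader gets its own ipc.Server, where ProtobufRpcEngine2
    // can register while Hive's copy of Hadoop keeps ProtobufRpcEngine.
    try (URLClassLoader isolated =
             new URLClassLoader(jars, ClassLoader.getPlatformClassLoader())) {
      Thread.currentThread().setContextClassLoader(isolated);

      Class<?> confClass =
          isolated.loadClass("org.apache.hadoop.conf.Configuration");
      Object conf = confClass.getDeclaredConstructor().newInstance();

      Class<?> builderClass =
          isolated.loadClass("org.apache.hadoop.hdfs.MiniDFSCluster$Builder");
      Object builder = builderClass.getConstructor(confClass).newInstance(conf);
      Object cluster = builderClass.getMethod("build").invoke(builder);

      // Tests would have to talk to the cluster over its RPC/HTTP ports,
      // since its Hadoop objects are not type-compatible with Hive's.

      cluster.getClass().getMethod("shutdown").invoke(cluster);
    }
  }
}

That seems fragile, though (context class loaders, Hadoop configuration defaults, and so on), so I'd rather understand whether the one-engine-per-JVM restriction in Hadoop Common is intentional before going down that road.

Thanks.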