[ https://issues.apache.org/jira/browse/HDFS-17680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18015524#comment-18015524 ]
ASF GitHub Bot commented on HDFS-17680:
---------------------------------------

hadoop-yetus commented on PR #7884:
URL: https://github.com/apache/hadoop/pull/7884#issuecomment-3212099121

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 26m 59s | | trunk passed |
| +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.27+6-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 43s | | trunk passed with JDK Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | checkstyle | 0m 36s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 47s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.27+6-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 10s | | trunk passed with JDK Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 1m 42s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 49s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 38s | | the patch passed |
| +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.27+6-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 38s | | the patch passed |
| +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 0m 35s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 29s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7884/5/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 8 unchanged - 0 fixed = 14 total (was 8) |
| +1 :green_heart: | mvnsite | 0m 36s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 36s | | the patch passed with JDK Ubuntu-11.0.27+6-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 1m 39s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 49s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 126m 58s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7884/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. |
| | | | 213m 50s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.tools.TestDFSAdmin |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.51 ServerAPI=1.51 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7884/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/7884 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux c159e14ec61c 5.15.0-143-generic #153-Ubuntu SMP Fri Jun 13 19:10:45 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c6c547abec20efd991153dfd82a595fd77a70f9e |
| Default Java | Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.27+6-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7884/5/testReport/ |
| Max. process+thread count | 3948 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7884/5/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
> HDFS ui in the datanodes doesn't redirect to https when dfs.http.policy is
> HTTPS_ONLY
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-17680
>                 URL: https://issues.apache.org/jira/browse/HDFS-17680
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, ui
>    Affects Versions: 3.4.1
>            Reporter: Luis Pigueiras
>            Priority: Minor
>              Labels: pull-request-available
>
> _(I'm not sure if I should put it in HDFS or in HADOOP, feel free to move it
> if it's not the correct place)_
> We have noticed that, with an HTTPS_ONLY configuration, there is a wrong
> redirection when clicking on a datanode link from the namenode UI.
> If you visit the HDFS UI of a namenode at https://<node>:50070/ ->
> Datanodes -> click on a datanode, you get redirected from
> https://<node>:9865 to http://<node>:9865. The 302 should redirect to https
> and not to http. If you curl the link that is exposed on the website, you
> get redirected to the wrong place:
> {code}
> curl -k https://testing2475891.example.org:9865 -vvv
> ...
> < HTTP/1.1 302 Found
> < Location: http://testing2475891.example.org:9865/index.html
> {code}
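The transcript above points at how the datanode builds the Location header for its redirect to /index.html. As a hedged sketch of the general shape of a fix (the class and method names here are illustrative, not the actual datanode code or the PR's patch), the scheme can be derived from the configured dfs.http.policy instead of being hardcoded to http:

{code:java}
// Sketch only: derive the redirect scheme from dfs.http.policy.
// HttpConfig.Policy and DFSConfigKeys are real Hadoop classes; this
// wrapper class and method are hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.http.HttpConfig;

public class RedirectLocationSketch {
  static String indexLocation(Configuration conf, String host, int port) {
    HttpConfig.Policy policy = HttpConfig.Policy.fromString(
        conf.get(DFSConfigKeys.DFS_HTTP_POLICY_KEY,
            DFSConfigKeys.DFS_HTTP_POLICY_DEFAULT));
    if (policy == null) {
      policy = HttpConfig.Policy.HTTP_ONLY; // unrecognized value
    }
    // With HTTPS_ONLY the redirect must stay on https; a complete fix
    // would also honor the incoming request's scheme for HTTP_AND_HTTPS.
    String scheme = policy.isHttpsEnabled() ? "https" : "http";
    return scheme + "://" + host + ":" + port + "/index.html";
  }
}
{code}

With HTTPS_ONLY this yields https://<node>:9865/index.html, which is what the curl transcript above says the 302 should have returned.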
> This issue is present in our 3.3.6 deployment, but it's also present in
> 3.4.1, because I managed to reproduce it with the following steps:
> - Download the latest version (binary from
> [https://hadoop.apache.org/releases.html] -> 3.4.1)
> - Uncompress the binaries:
> {code}
> tar -xvf hadoop-3.4.1.tar.gz
> cd hadoop-3.4.1
> {code}
> - Generate dummy certs for TLS and move them to {{etc/hadoop}}:
> {code}
> keytool -genkeypair -alias hadoop -keyalg RSA -keystore hadoop.keystore -storepass changeit -validity 365
> keytool -export -alias hadoop -keystore hadoop.keystore -file hadoop.cer -storepass changeit
> keytool -import -alias hadoop -file hadoop.cer -keystore hadoop.truststore -storepass changeit -noprompt
> cp hadoop.* etc/hadoop
> {code}
> - Add this to {{etc/hadoop/hadoop-env.sh}}:
> {code}
> export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
> export HDFS_NAMENODE_USER=root
> export HDFS_DATANODE_USER=root
> export HDFS_SECONDARYNAMENODE_USER=root
> {code}
> - Create an etc/hadoop/ssl-server.xml with:
> {code:xml}
> <configuration>
>   <property>
>     <name>ssl.server.truststore.location</name>
>     <value>/root/hadoop/hadoop-3.4.1/etc/hadoop/hadoop.truststore</value>
>     <description>Truststore to be used by NN and DN. Must be specified.</description>
>   </property>
>   <property>
>     <name>ssl.server.truststore.password</name>
>     <value>changeit</value>
>     <description>Optional. Default value is "".</description>
>   </property>
>   <property>
>     <name>ssl.server.truststore.type</name>
>     <value>jks</value>
>     <description>Optional. The keystore file format, default value is "jks".</description>
>   </property>
>   <property>
>     <name>ssl.server.truststore.reload.interval</name>
>     <value>10000</value>
>     <description>Truststore reload check interval, in milliseconds.
>     Default value is 10000 (10 seconds).</description>
>   </property>
>   <property>
>     <name>ssl.server.keystore.location</name>
>     <value>/root/hadoop/hadoop-3.4.1/etc/hadoop/hadoop.keystore</value>
>     <description>Keystore to be used by NN and DN. Must be specified.</description>
>   </property>
>   <property>
>     <name>ssl.server.keystore.password</name>
>     <value>changeit</value>
>     <description>Must be specified.</description>
>   </property>
>   <property>
>     <name>ssl.server.keystore.keypassword</name>
>     <value>changeit</value>
>     <description>Must be specified.</description>
>   </property>
>   <property>
>     <name>ssl.server.keystore.type</name>
>     <value>jks</value>
>     <description>Optional. The keystore file format, default value is "jks".</description>
>   </property>
>   <property>
>     <name>ssl.server.exclude.cipher.list</name>
>     <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
>     SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
>     SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
>     SSL_RSA_WITH_RC4_128_MD5</value>
>     <description>Optional. The weak security cipher suites that you want
>     excluded from SSL communication.</description>
>   </property>
> </configuration>
> {code}
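If ssl-server.xml is malformed or not on the classpath, the datanode HTTPS endpoint will not come up at all, which is easy to mistake for the redirect bug. A minimal, hypothetical sanity check (not part of the original repro steps) that loads the file the same basic way Hadoop's SSLFactory does, using the real org.apache.hadoop.conf.Configuration API:

{code:java}
// Hypothetical helper, not part of the repro: confirm ssl-server.xml
// is found on the classpath and parses, and print the keystore paths.
import org.apache.hadoop.conf.Configuration;

public class SslServerConfCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // etc/hadoop must be on the classpath for this lookup to succeed.
    conf.addResource("ssl-server.xml");
    System.out.println("keystore   = " + conf.get("ssl.server.keystore.location"));
    System.out.println("truststore = " + conf.get("ssl.server.truststore.location"));
  }
}
{code}

Both values should print as the absolute paths configured above; null output means the resource was not found or the property names are misspelled.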
> - hdfs-site.xml:
> {code:xml}
> <?xml version="1.0" encoding="UTF-8"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <!--
>   Licensed under the Apache License, Version 2.0 (the "License");
>   you may not use this file except in compliance with the License.
>   You may obtain a copy of the License at
>
>     http://www.apache.org/licenses/LICENSE-2.0
>
>   Unless required by applicable law or agreed to in writing, software
>   distributed under the License is distributed on an "AS IS" BASIS,
>   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>   See the License for the specific language governing permissions and
>   limitations under the License. See accompanying LICENSE file.
> -->
> <!-- Put site-specific property overrides in this file. -->
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.http.policy</name>
>     <value>HTTPS_ONLY</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>dfs.namenode.https-address</name>
>     <value>0.0.0.0:50070</value>
>   </property>
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>ssl-server.xml</value>
>   </property>
> </configuration>
> {code}
> - core-site.xml:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://localhost:9000</value>
>   </property>
>   <property>
>     <name>hadoop.ssl.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hadoop.ssl.keystores.factory.class</name>
>     <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>
>   </property>
>   <property>
>     <name>hadoop.ssl.server.keystore.resource</name>
>     <value>hadoop.keystore</value>
>   </property>
>   <property>
>     <name>hadoop.ssl.server.keystore.password</name>
>     <value>changeit</value>
>   </property>
>   <property>
>     <name>hadoop.ssl.server.truststore.resource</name>
>     <value>hadoop.truststore</value>
>   </property>
>   <property>
>     <name>hadoop.ssl.server.truststore.password</name>
>     <value>changeit</value>
>   </property>
> </configuration>
> {code}
> - Now you can initialize:
> {code}
> bin/hdfs namenode -format
> sbin/start-dfs.sh
> {code}
> - If you visit https://<node>:50070/ -> Datanodes -> click on a datanode,
> you get redirected from https://<node>:9865 to http://<node>:9865:
> {code}
> curl -k https://testing2475891.example.org:9865 -vvv
> ...
> < HTTP/1.1 302 Found
> < Location: http://testing2475891.example.org:9865/index.html
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org