[jira] [Resolved] (HDFS-11245) HDFS ignores HADOOP_CONF_DIR

2016-12-15 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs resolved HDFS-11245.
---
Resolution: Invalid

HADOOP-9902 was a massive rewrite of the scripts that improved the consistency and 
rationality of all the environment variables. These improvements changed a few 
things, which means users/admins need to alter their environment when switching 
from 2.x to 3.x.

The issue here was setting {{HADOOP_USER_CLASSPATH_FIRST}} as well as 
{{HADOOP_CLASSPATH}} when these are no longer needed. Getting the tools onto 
the classpath is now done through {{HADOOP_OPTIONAL_TOOLS}} in 
{{hadoop-env.sh}} instead of {{HADOOP_CLASSPATH}} hacking.

Unsetting these variables seems to fix everything.

Possible follow-on: more documentation distilling the changes required when 
moving from 2.x to 3.x, especially in the context of a rolling upgrade.
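Concretely, the 3.x-style setup would look roughly like this (a sketch only; the tool module names below are illustrative, not taken from this report):

```shell
# hadoop-env.sh (3.x): request tool jars declaratively rather than
# hand-editing the classpath. Module names below are examples only.
export HADOOP_OPTIONAL_TOOLS="hadoop-aws,hadoop-azure"

# In the user's shell, drop the 2.x-era workarounds:
unset HADOOP_CLASSPATH
unset HADOOP_USER_CLASSPATH_FIRST
```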

> HDFS ignores HADOOP_CONF_DIR
> 
>
> Key: HDFS-11245
> URL: https://issues.apache.org/jira/browse/HDFS-11245
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
> Environment: Linux
>Reporter: Ewan Higgs
>
> It seems that HDFS on trunk is ignoring {{HADOOP_CONF_DIR}}. On {{branch-2}} 
> I could export {{HADOOP_CONF_DIR}} and use that to store my {{hdfs-site.xml}} 
> and {{log4j.properties}}. But on trunk it appears to ignore the environment 
> variable.
> Also, even if hdfs can find the {{log4j.properties}}, it doesn't seem 
> interested in opening and loading it.
> On Ubuntu 16.10:
> {code}
> $ source env.sh
> $ cat env.sh 
> #!/bin/bash
> export JAVA_HOME=/usr/lib/jvm/java-8-oracle
> export HADOOP_HOME="$HOME"/src/hadoop/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT
> export HADOOP_LOG_DIR="$(pwd)/log"
> PATH="$HADOOP_HOME"/bin:$PATH
> export HADOOP_CLASSPATH=$(hadoop classpath):"$HADOOP_HOME"/share/hadoop/tools/lib/*
> export HADOOP_USER_CLASSPATH_FIRST=true
> {code}
> Then I set the HADOOP_CONF_DIR:
> {code}
> $ export HADOOP_CONF_DIR="$(pwd)/conf/nn"
> $ ls $HADOOP_CONF_DIR
> hadoop-env.sh  hdfs-site.xml  log4j.properties
> {code}
> Now, we try to run a namenode:
> {code}
> $ hdfs namenode
> 2016-12-14 14:04:51,193 ERROR [main] namenode.NameNode: Failed to start namenode.
> java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
> at org.apache.hadoop.hdfs.DFSUtilClient.getNNAddress(DFSUtilClient.java:648)
> at org.apache.hadoop.hdfs.DFSUtilClient.getNNAddressCheckLogical(DFSUtilClient.java:677)
> at org.apache.hadoop.hdfs.DFSUtilClient.getNNAddress(DFSUtilClient.java:639)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:556)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:687)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:707)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1701)
> {code}
> This is weird. We have the {{fs.defaultFS}} set:
> {code}
> $ grep -n2 fs.defaultFS $HADOOP_CONF_DIR/hdfs-site.xml
> 3-<configuration>
> 4-  <property>
> 5:    <name>fs.defaultFS</name>
> 6-    <value>hdfs://localhost:60010</value>
> 7-  </property>
> {code}
> So it isn't finding this config. Where is it looking, and why does it find {{file:///}}?
> {code}
> $ strace -f -eopen,stat hdfs namenode 2>&1 | grep hdfs-site.xml
> [pid 16271] stat("/home/ehigg90120/src/hadoop/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/common/jdiff/hdfs-site.xml", 0x7f05eb6d21d0) = -1 ENOENT (No such file or directory)
> [pid 16271] stat("/home/ehigg90120/src/hadoop/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/common/lib/hdfs-site.xml", 0x7f05eb6d21d0) = -1 ENOENT (No such file or directory)
> [pid 16271] stat("/home/ehigg90120/src/hadoop/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/common/sources/hdfs-site.xml", 0x7f05eb6d21d0) = -1 ENOENT (No such file or directory)
> [pid 16271] stat("/home/ehigg90120/src/hadoop/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/common/templates/hdfs-site.xml", 0x7f05eb6d21d0) = -1 ENOENT (No such file or directory)
> [pid 16271] stat("/home/ehigg90120/src/hadoop/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/common/webapps/hdfs-site.xml", 0x7f05eb6d21d0) = -1 ENOENT (No such file or directory)
> [pid 16271] stat("/home/ehigg90120/src/hadoop/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/share/hadoop/hdfs/jdiff/h
> {code}

[jira] [Created] (HDFS-11249) Redundant toString() in DFSConfigKeys.java

2016-12-15 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-11249:


 Summary: Redundant toString() in DFSConfigKeys.java
 Key: HDFS-11249
 URL: https://issues.apache.org/jira/browse/HDFS-11249
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Akira Ajisaka
Priority: Trivial


{code:title=DFSConfigKeys.java}
  public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT =
  "org.apache.hadoop.hdfs.web.AuthFilter".toString();
{code}
{{.toString()}} is not necessary and can be removed.
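A quick way to convince oneself the call is a no-op ({{String.toString()}} simply returns {{this}}), sketched outside the actual DFSConfigKeys class:

```java
public class ToStringDemo {
    // The constant as currently written, and with .toString() removed:
    static final String WITH_CALL =
        "org.apache.hadoop.hdfs.web.AuthFilter".toString();
    static final String WITHOUT_CALL =
        "org.apache.hadoop.hdfs.web.AuthFilter";

    public static void main(String[] args) {
        // String.toString() returns this, so both fields reference the same
        // interned literal. Note also that WITH_CALL is not a compile-time
        // constant, which is a second reason to drop the call.
        System.out.println(WITH_CALL == WITHOUT_CALL);
    }
}
```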



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11250) Fix a typo in ReplicaUnderRecovery#setRecoveryID

2016-12-15 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-11250:


 Summary: Fix a typo in ReplicaUnderRecovery#setRecoveryID
 Key: HDFS-11250
 URL: https://issues.apache.org/jira/browse/HDFS-11250
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2
Reporter: Yiqun Lin
Assignee: Yiqun Lin
Priority: Trivial


Found a typo in {{ReplicaUnderRecovery#setRecoveryID}}. The relevant code:
{code}
  public void setRecoveryID(long recoveryId) {
if (recoveryId > this.recoveryId) {
  this.recoveryId = recoveryId;
} else {
  throw new IllegalArgumentException("The new rcovery id: " + recoveryId
  + " must be greater than the current one: " + this.recoveryId);
}
  }
{code}
Here {{rcovery}} should be {{recovery}}.






[jira] [Created] (HDFS-11251) ConcurrentModificationException during DataNode#refreshVolumes

2016-12-15 Thread Jason Lowe (JIRA)
Jason Lowe created HDFS-11251:
-

 Summary: ConcurrentModificationException during DataNode#refreshVolumes
 Key: HDFS-11251
 URL: https://issues.apache.org/jira/browse/HDFS-11251
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2
Reporter: Jason Lowe


The testAddVolumesDuringWrite case failed with a ReconfigurationException which 
appears to have been caused by a ConcurrentModificationException.  Stacktrace 
details to follow.
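For reference, the generic fail-fast mechanism behind such exceptions (illustrative only; this is not the actual refreshVolumes code):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    // Returns true if structurally modifying a list while iterating it
    // trips the fail-fast check in ArrayList's iterator.
    static boolean triggers() {
        List<String> volumes = new ArrayList<>(
            List.of("/data/1", "/data/2", "/data/3"));
        try {
            for (String v : volumes) {
                // Structural modification during iteration, e.g. one thread
                // removing a volume while another walks the volume list.
                volumes.remove(v);
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(triggers());  // prints true
    }
}
```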






[jira] [Created] (HDFS-11252) TestFileTruncate#testTruncateWithDataNodesRestartImmediately can fail with BindException

2016-12-15 Thread Jason Lowe (JIRA)
Jason Lowe created HDFS-11252:
-

 Summary: TestFileTruncate#testTruncateWithDataNodesRestartImmediately can fail with BindException
 Key: HDFS-11252
 URL: https://issues.apache.org/jira/browse/HDFS-11252
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2
Reporter: Jason Lowe


testTruncateWithDataNodesRestartImmediately can fail with a BindException.  The 
setup for TestFileTruncate has been fixed in the past to solve a bind exception, 
but this one occurs after the minicluster comes up, while the datanodes are 
being restarted.  Maybe there's a race condition there?
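The failure mode itself is easy to reproduce in isolation (a sketch; the test's actual ports and restart timing are not modeled here):

```java
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindDemo {
    // Returns true if binding a second socket to an already-bound port fails
    // with BindException, as a restarted datanode would if the old port is
    // still held by the previous socket or another process.
    static boolean secondBindFails() throws IOException {
        try (ServerSocket first = new ServerSocket()) {
            first.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            int port = first.getLocalPort();
            try (ServerSocket second = new ServerSocket()) {
                second.bind(new InetSocketAddress("127.0.0.1", port));
            } catch (BindException expected) {
                return true;   // "Address already in use"
            }
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(secondBindFails());  // prints true
    }
}
```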






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-12-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/

[Dec 14, 2016 7:18:58 PM] (arp) HDFS-10958. Add instrumentation hooks around 
Datanode disk IO.
[Dec 14, 2016 9:45:21 PM] (xyao) HADOOP-13890. Maintain HTTP/host as SPNEGO SPN 
support and fix
[Dec 14, 2016 10:33:23 PM] (xgong) YARN-5999. AMRMClientAsync will stop if any 
exceptions thrown on




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.token.delegation.TestZKDelegationTokenSecretManager 
   hadoop.hdfs.TestSetTimes 
   hadoop.hdfs.tools.TestDFSZKFailoverController 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys 
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters 
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl 
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-compile-root.txt
  [164K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-compile-root.txt
  [164K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-compile-root.txt
  [164K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [124K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [200K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [316K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/187/artifact/out/patch-unit-h

[jira] [Created] (HDFS-11253) FileInputStream leak on failure path in BlockSender

2016-12-15 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-11253:


 Summary: FileInputStream leak on failure path in BlockSender
 Key: HDFS-11253
 URL: https://issues.apache.org/jira/browse/HDFS-11253
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The BlockSender constructor should close the blockIn and checksumIn streams 
here:

{code}
405:   blockIn = datanode.data.getBlockInputStream(block, offset); // seek to offset
406:   ris = new ReplicaInputStreams(
407:   blockIn, checksumIn, volumeRef, fileIoProvider);
408: } catch (IOException ioe) {
409:   IOUtils.closeStream(this);
410:   throw ioe;
411: }
{code}
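The general close-on-failure pattern at issue can be sketched like this (stand-in types; this is not the actual BlockSender code):

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseOnFailureDemo {
    // Stand-in for blockIn/checksumIn that records whether it was closed.
    static final class TrackedStream implements Closeable {
        boolean closed;
        @Override public void close() { closed = true; }
    }

    static TrackedStream blockIn;

    // If a later acquisition fails, resources opened earlier must be closed
    // before the exception propagates, or their file descriptors leak.
    static void open(boolean failLater) throws IOException {
        blockIn = new TrackedStream();
        try {
            if (failLater) {
                throw new IOException("simulated failure after blockIn opened");
            }
        } catch (IOException ioe) {
            blockIn.close();   // release what was already acquired
            throw ioe;         // then rethrow to the caller
        }
    }

    public static void main(String[] args) {
        try {
            open(true);
        } catch (IOException expected) {
            System.out.println(blockIn.closed);  // prints true
        }
    }
}
```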






[jira] [Created] (HDFS-11254) Failover may fail if loading edits takes too long

2016-12-15 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-11254:
--

 Summary: Failover may fail if loading edits takes too long
 Key: HDFS-11254
 URL: https://issues.apache.org/jira/browse/HDFS-11254
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Wei-Chiu Chuang
Priority: Critical
 Fix For: 2.9.0, 3.0.0-beta1


We found the Standby NameNode crashed when it tried to transition from standby to 
active. This issue is similar in nature to HDFS-11225.

The root cause is that all IPC threads were blocked, so the ZKFC connection to the 
NN timed out. In particular, when it crashed, we saw a few threads blocked on this 
thread:
{noformat}
Thread 188 (IPC Server handler 25 on 8022):
  State: RUNNABLE
  Blocked count: 278
  Waited count: 17419
  Stack:
org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuotaRecursively(FSImage.java:886)
org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuotaRecursively(FSImage.java:887)
org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuotaRecursively(FSImage.java:887)
org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuotaRecursively(FSImage.java:887)
org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuotaRecursively(FSImage.java:887)
org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuotaRecursively(FSImage.java:887)
org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuotaRecursively(FSImage.java:887)
org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuotaRecursively(FSImage.java:887)
org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuotaRecursively(FSImage.java:887)
org.apache.hadoop.hdfs.server.namenode.FSImage.updateCountForQuota(FSImage.java:875)
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:860)
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:827)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:232)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$1.run(EditLogTailer.java:188)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$1.run(EditLogTailer.java:182)
java.security.AccessController.doPrivileged(Native Method)
javax.security.auth.Subject.doAs(Subject.java:415)
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:477)
org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:458)
{noformat}

This thread is part of {{FSImage#loadEdits}} when the NameNode failed over. 
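In miniature, the starvation mechanism is the classic long-critical-section problem (illustrative; this is not the NameNode's actual locking code, and the lock and timings below are made up):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockStarvationDemo {
    // Returns true if a "handler" thread fails to acquire the lock within
    // its deadline while a "loader" thread holds it for a long operation.
    static boolean handlerStarved() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(); // stands in for the namesystem lock
        Thread loader = new Thread(() -> {
            lock.lock();            // held across the whole "loadEdits"
            try {
                Thread.sleep(500);  // long recursive quota recomputation
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        loader.start();
        Thread.sleep(50);           // let the loader acquire the lock first
        // An IPC handler that must respond before its client (the ZKFC) gives up:
        boolean acquired = lock.tryLock(100, TimeUnit.MILLISECONDS);
        if (acquired) {
            lock.unlock();
        }
        loader.join();
        return !acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handlerStarved());
    }
}
```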

We also found that the following edit log request was rejected after the journal 
node advanced its epoch, which implies a failed transitionToActive request.
{noformat}
10.10.17.1:8485: IPC's epoch 11 is less than the last promised epoch 12
at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:429)
at org.apache.hadoop.hdfs.qjournal.server.Journal.startLogSegment(Journal.java:513)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.startLogSegment(JournalNodeRpcServer.java:162)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.startLogSegment(QJournalProtocolServerSideTranslatorPB.java:198)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25425)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.startLogSegment(QuorumJournalManager.java:408)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.startLogSegment(JournalSet.java:107)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$3.apply(JournalSet.java:222)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
{noformat}

[jira] [Created] (HDFS-11255) TestHttpFSWithKerberos failed

2016-12-15 Thread John Zhuge (JIRA)
John Zhuge created HDFS-11255:
-

 Summary: TestHttpFSWithKerberos failed
 Key: HDFS-11255
 URL: https://issues.apache.org/jira/browse/HDFS-11255
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Affects Versions: 3.0.0-alpha2
 Environment: {noformat}
CentOS Linux release 7.2.1511 (Core) 
Apache Maven 3.0.5 (Red Hat 3.0.5-17)
Maven home: /usr/share/maven
Java version: 1.8.0_111, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-327.36.3.el7.x86_64", arch: "amd64", family: 
"unix"
cee0c46 HDFS-11188. Change min supported DN and NN versions back to 2.x. 
Contributed by Andrew Wang.
{noformat}
Reporter: John Zhuge


{noformat}
$ mvn test -P\!shelltest -Dtest=TestHttpFSWithKerberos
...
Running org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos
Tests run: 6, Failures: 1, Errors: 5, Skipped: 0, Time elapsed: 7.356 sec <<< FAILURE! - in org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos
testDelegationTokenWithWebhdfsFileSystem(org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos)  Time elapsed: 4.73 sec  <<< ERROR!
org.apache.hadoop.security.KerberosAuthException: Login failure for user: client from keytab /Users/tucu/tucu.keytab 
javax.security.auth.login.LoginException: Unable to obtain password from user

at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1092)
at org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos.testDelegationTokenWithinDoAs(TestHttpFSWithKerberos.java:239)
at org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos.testDelegationTokenWithWebhdfsFileSystem(TestHttpFSWithKerberos.java:270)

testInvalidadHttpFSAccess(org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos)  Time elapsed: 1.581 sec  <<< FAILURE!
java.lang.AssertionError: expected:<503> but was:<401>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos.testInvalidadHttpFSAccess(TestHttpFSWithKerberos.java:144)
...
Failed tests: 
  TestHttpFSWithKerberos.testInvalidadHttpFSAccess:144 expected:<503> but was:<401>

Tests in error: 
  TestHttpFSWithKerberos.testDelegationTokenWithWebhdfsFileSystem:270->testDelegationTokenWithinDoAs:239 » KerberosAuth
  TestHttpFSWithKerberos.testValidHttpFSAccess:120 » Login Unable to obtain pass...
  TestHttpFSWithKerberos.testDelegationTokenWithHttpFSFileSystem:262->testDelegationTokenWithinDoAs:239 » KerberosAuth
  TestHttpFSWithKerberos.testDelegationTokenWithHttpFSFileSystemProxyUser:279->testDelegationTokenWithinDoAs:239 » KerberosAuth
  TestHttpFSWithKerberos.testDelegationTokenHttpFSAccess:155 » Login Unable to o...
{noformat}






[jira] [Resolved] (HDFS-11255) TestHttpFSWithKerberos failed

2016-12-15 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HDFS-11255.
---
Resolution: Invalid

The test is excluded in pom.xml.

> TestHttpFSWithKerberos failed
> -
>
> Key: HDFS-11255
> URL: https://issues.apache.org/jira/browse/HDFS-11255
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha2
> Environment: {noformat}
> CentOS Linux release 7.2.1511 (Core) 
> Apache Maven 3.0.5 (Red Hat 3.0.5-17)
> Maven home: /usr/share/maven
> Java version: 1.8.0_111, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.10.0-327.36.3.el7.x86_64", arch: "amd64", 
> family: "unix"
> cee0c46 HDFS-11188. Change min supported DN and NN versions back to 2.x. 
> Contributed by Andrew Wang.
> {noformat}
>Reporter: John Zhuge
>
> {noformat}
> $ mvn test -P\!shelltest -Dtest=TestHttpFSWithKerberos
> ...
> Running org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos
> Tests run: 6, Failures: 1, Errors: 5, Skipped: 0, Time elapsed: 7.356 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos
> testDelegationTokenWithWebhdfsFileSystem(org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos)
>   Time elapsed: 4.73 sec  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
> client from keytab /Users/tucu/tucu.keytab 
> javax.security.auth.login.LoginException: Unable to obtain password from user
> at 
> com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897)
> at 
> com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
> at 
> com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
> at 
> javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
> at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
> at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
> at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
> at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1092)
> at 
> org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos.testDelegationTokenWithinDoAs(TestHttpFSWithKerberos.java:239)
> at 
> org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos.testDelegationTokenWithWebhdfsFileSystem(TestHttpFSWithKerberos.java:270)
> testInvalidadHttpFSAccess(org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos)
>   Time elapsed: 1.581 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<503> but was:<401>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.junit.Assert.assertEquals(Assert.java:542)
> at 
> org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos.testInvalidadHttpFSAccess(TestHttpFSWithKerberos.java:144)
> ...
> Failed tests: 
>   TestHttpFSWithKerberos.testInvalidadHttpFSAccess:144 expected:<503> but 
> was:<401>
> Tests in error: 
>   
> TestHttpFSWithKerberos.testDelegationTokenWithWebhdfsFileSystem:270->testDelegationTokenWithinDoAs:239
>  » KerberosAuth
>   TestHttpFSWithKerberos.testValidHttpFSAccess:120 » Login Unable to obtain 
> pass...
>   
> TestHttpFSWithKerberos.testDelegationTokenWithHttpFSFileSystem:262->testDelegationTokenWithinDoAs:239
>  » KerberosAuth
>   
> TestHttpFSWithKerberos.testDelegationTokenWithHttpFSFileSystemProxyUser:279->testDelegationTokenWithinDoAs:239
>  » KerberosAuth
>   TestHttpFSWithKerberos.testDelegationTokenHttpFSAccess:155 » Login Unable 
> to o...
> {noformat}


