Build failed in Jenkins: Hadoop-Hdfs-trunk #1994

2015-01-04 Thread Apache Jenkins Server
See 

--
[...truncated 10867 lines...]
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-bkjournal ---
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ 
hadoop-hdfs-bkjournal ---
[INFO] Surefire report directory: 


---
 T E S T S
---
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperEditLogStreams
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.954 sec - in 
org.apache.hadoop.contrib.bkjournal.TestBookKeeperEditLogStreams
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.305 sec - in 
org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Running org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.638 sec - in 
org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.288 sec - in 
org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.664 sec - in 
org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Running org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.282 sec - in 
org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.301 sec - in 
org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir

Results :

Tests run: 35, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.3.1:jar (default-jar) @ hadoop-hdfs-bkjournal ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-bkjournal ---
[INFO] org already added, skipping
[INFO] org/apache already added, skipping
[INFO] org/apache/hadoop already added, skipping
[INFO] org/apache/hadoop/contrib already added, skipping
[INFO] org/apache/hadoop/contrib/bkjournal already added, skipping
[INFO] Building jar: 

[INFO] org already added, skipping
[INFO] org/apache already added, skipping
[INFO] org/apache/hadoop already added, skipping
[INFO] org/apache/hadoop/contrib already added, skipping
[INFO] org/apache/hadoop/contrib/bkjournal already added, skipping
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-bkjournal ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-bkjournal ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-bkjournal ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-bkjournal ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:copy (dist) @ hadoop-hdfs-bkjournal ---
[INFO] Configured Artifact: org.apache.bookkeeper:bookkeeper-server:?:jar
[INFO] Copying bookkeeper-server-4.2.3.jar to 

[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ 
hadoop-hdfs-bkjournal ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-bkjournal ---
[INFO] 
[INFO] There are 331 checkstyle errors.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-bkjournal ---

Hadoop-Hdfs-trunk - Build # 1994 - Failure

2015-01-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1994/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11060 lines...]
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS .............................. SUCCESS [  02:38 h]
[INFO] Apache Hadoop HttpFS ............................ FAILURE [03:59 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal ........... SUCCESS [02:34 min]
[INFO] Apache Hadoop HDFS-NFS .......................... SUCCESS [01:40 min]
[INFO] Apache Hadoop HDFS Project ...................... SUCCESS [  0.045 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:47 h
[INFO] Finished at: 2015-01-04T14:21:00+00:00
[INFO] Final Memory: 74M/1410M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs-httpfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-httpfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #1993
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 24591293 bytes
Compression is 0.0%
Took 7.3 sec
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
###
1 tests failed.
REGRESSION:  
org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem.testOperationDoAs[18]

Error Message:
Read timed out

Stack Trace:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.read(InputRecord.java:480)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:927)
at 
sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1312)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1339)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1323)
at 
sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:563)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at 
sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1091)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:567)
at 
org.apache.hadoop.hdfs.web.
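The trace shows the client blocking in socketRead0 during the TLS handshake 
until the socket's read timeout fires. As a minimal standalone sketch of where 
such a timeout comes from (host, port, and timeout values below are 
illustrative, not taken from the failing test):

    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class ReadTimeoutDemo {
        public static void main(String[] args) throws Exception {
            HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://nn.example.com:50470/webhdfs/v1/").openConnection();
            conn.setConnectTimeout(60000); // ms allowed to establish the TCP connection
            conn.setReadTimeout(60000);    // ms allowed to wait for handshake/response bytes
            conn.connect();                // throws java.net.SocketTimeoutException
                                           // ("Read timed out") if the peer stalls
        }
    }

A read timeout inside performInitialHandshake, as seen above, usually points at 
an overloaded or wedged test endpoint rather than a code bug, consistent with a 
one-off regression on a busy build slave.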

Jenkins build is back to normal : Hadoop-Hdfs-trunk-Java8 #59

2015-01-04 Thread Apache Jenkins Server
See 



Hadoop-Hdfs-trunk-Java8 - Build # 60 - Failure

2015-01-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/60/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6781 lines...]
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS .............................. FAILURE [  02:40 h]
[INFO] Apache Hadoop HttpFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ........... SKIPPED
[INFO] Apache Hadoop HDFS-NFS .......................... SKIPPED
[INFO] Apache Hadoop HDFS Project ...................... SUCCESS [  1.645 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:40 h
[INFO] Finished at: 2015-01-05T03:12:36+00:00
[INFO] Final Memory: 51M/231M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-2922
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
###
5 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals 
to persistent storage due to No journals available to flush. Unsynced 
transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:626)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1254)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:357)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1215)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1676)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:816)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1758)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1809)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1789)
 at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:494)
 at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart
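Seeing an ExitException here instead of an outright JVM exit is expected: test 
setups that restart a MiniDFSCluster normally disable real System.exit first, 
so FSEditLog's ExitUtil.terminate call surfaces as a catchable exception. A 
minimal sketch of that pattern (assumed typical JUnit usage, not the exact 
TestLeaseRecovery2 code):

    import org.apache.hadoop.util.ExitUtil;
    import org.junit.Before;

    public class MyHdfsTest {
        @Before
        public void setUp() {
            // With system exit disabled, ExitUtil.terminate(...) throws
            // ExitUtil.ExitException instead of killing the test JVM.
            ExitUtil.disableSystemExit();
        }
    }

The underlying complaint, "No journals available to flush", means the 
NameNode's edit log had no usable journal streams left when it tried to sync 
transaction 1 while shutting down during the restart.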

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #60

2015-01-04 Thread Apache Jenkins Server
See 

Changes:

[ozawa] YARN-2922. ConcurrentModificationException in CapacityScheduler's 
LeafQueue. Contributed by Rohith Sharmaks.

--
[...truncated 6588 lines...]
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.151 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.026 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.198 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 160.595 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.806 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.546 sec - in 
org.apache.hadoop.hdfs.TestGetBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.447 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.21 sec - in 
org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.509 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.451 sec - in 
org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.253 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.19 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.217 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec - in 
org.apache.hadoop.hdfs.util.TestExactSizeInputStream

Jenkins build is back to normal : Hadoop-Hdfs-trunk #1995

2015-01-04 Thread Apache Jenkins Server
See 



[jira] [Created] (HDFS-7582) Limit the number of default ACL entries to Half of maximum entries (16)

2015-01-04 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-7582:
---

 Summary: Limit the number of default ACL entries to Half of 
maximum entries (16)
 Key: HDFS-7582
 URL: https://issues.apache.org/jira/browse/HDFS-7582
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B


Current ACL limits apply only to the total number of entries.

But there can be a situation where the number of default entries for a 
directory is more than half of the maximum entries, i.e. > 16.

In that case, only files can be created under the parent directory; they will 
inherit their ACLs from the parent's default entries.

But when a sub-directory is created, its total number of entries will exceed 
the maximum allowed, because a sub-directory copies the parent's default 
entries both as its inherited access ACLs and as its own default entries, and 
hence directory creation fails.

So it would be better to restrict the number of default entries to 16.
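Concretely, with the existing cap of 32 ACL entries per inode: if a parent 
directory carries 20 default entries, a new sub-directory starts life with 
those 20 copied as access entries plus the same 20 copied as its own default 
entries, i.e. 40 > 32, so mkdir fails even though file creation (which copies 
only the access side) still fits. A hypothetical sketch of the proposed check 
(names are illustrative, not the actual patch):

    import java.util.List;
    import org.apache.hadoop.fs.permission.AclEntry;
    import org.apache.hadoop.fs.permission.AclEntryScope;
    import org.apache.hadoop.hdfs.protocol.AclException;

    class AclLimitCheck {
        private static final int MAX_ENTRIES = 32; // existing per-inode limit

        static void checkDefaultAclLimit(List<AclEntry> aclSpec) throws AclException {
            // Count only the DEFAULT-scoped entries in the requested spec.
            long defaults = aclSpec.stream()
                .filter(e -> e.getScope() == AclEntryScope.DEFAULT)
                .count();
            if (defaults > MAX_ENTRIES / 2) { // more than 16 default entries
                throw new AclException("Invalid ACL: at most " + (MAX_ENTRIES / 2)
                    + " default entries allowed, found " + defaults);
            }
        }
    }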





[jira] [Resolved] (HDFS-7580) NN -> JN communication should use reusable authentication methods

2015-01-04 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HDFS-7580.
---
Resolution: Invalid

Looking at the JDK sources, there's no way to programmatically configure the 
KDC timeouts, so I'm resolving this as invalid; there's nothing we can really 
do at our end.

I'll just make a krb5.conf change.
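For reference, a hedged sketch of such a change (values illustrative): the 
JDK's Kerberos client reads kdc_timeout and max_retries from the [libdefaults] 
section, with kdc_timeout interpreted in milliseconds, so capping it well below 
QJM's 20s write timeout keeps a slow KDC from stalling NN-to-JN writes:

    [libdefaults]
        # Assumed settings honored by the JDK Kerberos client; adjust per site.
        # Give up on an unresponsive KDC after 10s, then retry at most twice.
        kdc_timeout = 10000
        max_retries = 2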

> NN -> JN communication should use reusable authentication methods
> -
>
> Key: HDFS-7580
> URL: https://issues.apache.org/jira/browse/HDFS-7580
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node, namenode
>Affects Versions: 2.5.0
>Reporter: Harsh J
>
> It appears that NNs talk to JNs via general SaslRPC in secure mode, causing 
> all requests to be carried out with Kerberos authentication. This can cause 
> delays and occasional NN failures if the KDC used does not respond within its 
> default timeout period (30s, whereas the QJM writes come with a default of 20s).


