[jira] [Reopened] (HDFS-7769) TestHDFSCLI create files in hdfs project root dir

2015-02-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze reopened HDFS-7769:
---

Reopening to revert the patch.

> TestHDFSCLI create files in hdfs project root dir
> -
>
> Key: HDFS-7769
> URL: https://issues.apache.org/jira/browse/HDFS-7769
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: h7769_20150210.patch, h7769_20150210b.patch
>
>
> After running TestHDFSCLI, two files (data and .data.crc) remain in hdfs 
> project root dir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #2049

2015-02-27 Thread Apache Jenkins Server
See 

Changes:

[ozawa] YARN-3217. Remove httpclient dependency from 
hadoop-yarn-server-web-proxy. Contributed by Brahma Reddy Battula.

[aw] HADOOP-11637. bash location hard-coded in shell scripts (aw)

[cmccabe] HDFS-7819. Log WARN message for the blocks which are not in Block ID 
based layout (Rakesh R via Colin P. McCabe)

[cnauroth] HADOOP-9922. hadoop windows native build will fail in 32 bit 
machine. Contributed by Kiran Kumar M R.

[cnauroth] HDFS-7774. Unresolved symbols error while compiling HDFS on Windows 
7/32 bit. Contributed by Kiran Kumar M R.

[kasha] MAPREDUCE-6223. TestJobConf#testNegativeValueForTaskVmem failures. 
(Varun Saxena via kasha)

[aajisaka] MAPREDUCE-5612. Add javadoc for TaskCompletionEvent.Status. 
Contributed by Chris Palmer.

[shv] YARN-3255. RM, NM, JobHistoryServer, and WebAppProxyServer's main() 
should support generic options. Contributed by Konstantin Shvachko.

[ozawa] HADOOP-11569. Provide Merge API for MapFile to merge multiple similar 
MapFiles to one MapFile. Contributed by Vinayakumar B.

[szetszwo] Revert "HDFS-7769. TestHDFSCLI should not create files in hdfs 
project root dir."

[vinayakumarb] HDFS-6753. Initialize checkDisk when DirectoryScanner not able 
to get files list for scanning (Contributed by J.Andreina)

--
[...truncated 7464 lines...]
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.367 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.044 sec - in 
org.apache.hadoop.hdfs.TestLocalDFS
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.384 sec - in 
org.apache.hadoop.hdfs.TestPeerCache
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.382 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Running org.apache.hadoop.hdfs.TestDFSFinalize
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.566 sec - in 
org.apache.hadoop.hdfs.TestDFSFinalize
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.787 sec - in 
org.apache.hadoop.hdfs.TestDatanodeRegistration
Running org.apache.hadoop.hdfs.TestRenameWhileOpen
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.145 sec - in 
org.apache.hadoop.hdfs.TestRenameWhileOpen
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.555 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.727 sec - in 
org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.593 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.023 sec - in 
org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Running org.apache.hadoop.hdfs.TestLeaseRecovery
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.719 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery
Running org.apache.hadoop.hdfs.TestEncryptionZones
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.997 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZones
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.349 sec - in 
org.apache.hadoop.hdfs.TestSmallBlock
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 179.273 sec - 
in org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.TestDFSMkdirs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.212 sec - in 
org.apache.hadoop.hdfs.TestDFSMkdirs
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.708 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.102 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.411 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.213 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.139 sec

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #108

2015-02-27 Thread Apache Jenkins Server
See 

Changes:

[ozawa] YARN-3217. Remove httpclient dependency from 
hadoop-yarn-server-web-proxy. Contributed by Brahma Reddy Battula.

[aw] HADOOP-11637. bash location hard-coded in shell scripts (aw)

[cmccabe] HDFS-7819. Log WARN message for the blocks which are not in Block ID 
based layout (Rakesh R via Colin P. McCabe)

[cnauroth] HADOOP-9922. hadoop windows native build will fail in 32 bit 
machine. Contributed by Kiran Kumar M R.

[cnauroth] HDFS-7774. Unresolved symbols error while compiling HDFS on Windows 
7/32 bit. Contributed by Kiran Kumar M R.

[kasha] MAPREDUCE-6223. TestJobConf#testNegativeValueForTaskVmem failures. 
(Varun Saxena via kasha)

[aajisaka] MAPREDUCE-5612. Add javadoc for TaskCompletionEvent.Status. 
Contributed by Chris Palmer.

[shv] YARN-3255. RM, NM, JobHistoryServer, and WebAppProxyServer's main() 
should support generic options. Contributed by Konstantin Shvachko.

[ozawa] HADOOP-11569. Provide Merge API for MapFile to merge multiple similar 
MapFiles to one MapFile. Contributed by Vinayakumar B.

[szetszwo] Revert "HDFS-7769. TestHDFSCLI should not create files in hdfs 
project root dir."

[vinayakumarb] HDFS-6753. Initialize checkDisk when DirectoryScanner not able 
to get files list for scanning (Contributed by J.Andreina)

--
[...truncated 7587 lines...]
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.805 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.479 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 157.624 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.532 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.461 sec - in 
org.apache.hadoop.hdfs.TestGetBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.426 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.235 sec - in 
org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.599 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.279 sec - in 
org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.256 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option Ma

Hadoop-Hdfs-trunk-Java8 - Build # 108 - Still Failing

2015-02-27 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/108/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7780 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.12.1:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  1.756 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:50 h
[INFO] Finished at: 2015-02-27T14:24:58+00:00
[INFO] Final Memory: 51M/235M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-11637
Updating HADOOP-11569
Updating HDFS-6753
Updating YARN-3217
Updating HADOOP-9922
Updating MAPREDUCE-5612
Updating HDFS-7769
Updating HDFS-7774
Updating MAPREDUCE-6223
Updating HDFS-7819
Updating YARN-3255
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestart

Error Message:
test timed out after 6 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 6 milliseconds
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.triggerBlockReportForTests(BPServiceActor.java:392)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.triggerBlockReportForTests(BPOfferService.java:564)
at 
org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils.triggerBlockReport(DataNodeTestUtils.java:68)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.triggerBlockReports(MiniDFSCluster.java:2152)
at 
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestart(TestFileTruncate.java:652)




Hadoop-Hdfs-trunk - Build # 2049 - Still Failing

2015-02-27 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2049/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7657 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.12.1:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  2.288 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:50 h
[INFO] Finished at: 2015-02-27T14:24:25+00:00
[INFO] Final Memory: 51M/614M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-11637
Updating HADOOP-11569
Updating HDFS-6753
Updating YARN-3217
Updating HADOOP-9922
Updating MAPREDUCE-5612
Updating HDFS-7769
Updating HDFS-7774
Updating MAPREDUCE-6223
Updating HDFS-7819
Updating YARN-3255
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
7 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestBlockReaderLocal.testBlockReaderLocalReadZeroBytesNoReadahead

Error Message:
java.util.zip.ZipException: invalid code lengths set

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid code lengths set
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at 
org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown 
Source)
at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown 
Source)
at 
org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2501)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2489)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2560)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2513)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2426)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1160)
at or

[jira] [Created] (HDFS-7857) Incomplete information in WARN message caused user confusion

2015-02-27 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-7857:
---

 Summary: Incomplete information in WARN message caused user 
confusion
 Key: HDFS-7857
 URL: https://issues.apache.org/jira/browse/HDFS-7857
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang


Lots of the following messages appeared in the NN log:

{quote}
2014-12-10 12:18:15,728 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
failed for :39838:null (DIGEST-MD5: IO error acquiring password)
2014-12-10 12:18:15,728 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client  threw exception 
[org.apache.hadoop.ipc.StandbyException: Operation category READ is not 
supported in state standby]
..
SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for 
:39843:null (DIGEST-MD5: IO error acquiring password)
2014-12-10 12:18:15,790 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client  threw exception 
[org.apache.hadoop.ipc.StandbyException: Operation category READ is not 
supported in state standby]
{quote}

The real reason for the failure is the second message, about the 
StandbyException. However, the first message is confusing because it talks 
about "DIGEST-MD5: IO error acquiring password".

Filing this jira to modify the first message to include more comprehensive 
information, which can be obtained from {{getCauseForInvalidToken(e)}}.

{code}
    try {
      saslResponse = processSaslMessage(saslMessage);
    } catch (IOException e) {
      rpcMetrics.incrAuthenticationFailures();
      // attempting user could be null
      AUDITLOG.warn(AUTH_FAILED_FOR + this.toString() + ":"
          + attemptingUser + " (" + e.getLocalizedMessage() + ")");
      throw (IOException) getCauseForInvalidToken(e);
    }
{code}
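
One possible shape of the change, sketched here only for illustration (the names {{AUDITLOG}}, {{AUTH_FAILED_FOR}}, {{attemptingUser}}, {{rpcMetrics}} and {{getCauseForInvalidToken}} are taken from the snippet above; the exact wording of the committed patch may differ):

{code}
    try {
      saslResponse = processSaslMessage(saslMessage);
    } catch (IOException e) {
      rpcMetrics.incrAuthenticationFailures();
      // Unwrap the underlying cause first so that the audit log carries the
      // real reason (e.g. a StandbyException) instead of only the generic
      // "DIGEST-MD5: IO error acquiring password" text.
      IOException cause = (IOException) getCauseForInvalidToken(e);
      // attempting user could be null
      AUDITLOG.warn(AUTH_FAILED_FOR + this.toString() + ":"
          + attemptingUser + " (" + cause.getLocalizedMessage() + ")");
      throw cause;
    }
{code}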





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6446) NFS: Different error messages for appending/writing data from read only mount

2015-02-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HDFS-6446.
--
Resolution: Duplicate

> NFS: Different error messages for appending/writing data from read only mount
> -
>
> Key: HDFS-6446
> URL: https://issues.apache.org/jira/browse/HDFS-6446
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Yesha Vora
>Assignee: Brandon Li
>
> steps:
> 1) set dfs.nfs.exports.allowed.hosts =  ro
> 2) Restart nfs server
> 3) Append data on file present on hdfs from read only mount point
> Append data
> {noformat}
> bash$ cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
> cat: write error: Input/output error
> {noformat}
> 4) Write data from read only mount point
> Copy data
> {noformat}
> bash$ cp /tmp/tmp_10MB.txt /tmp/tmp_mnt/tmp/
> cp: cannot create regular file `/tmp/tmp_mnt/tmp/tmp_10MB.txt': Permission 
> denied
> {noformat}
> Both operations are treated differently. Copying data returns a valid error 
> message ('Permission denied'), whereas appending data does not return a 
> valid error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7858) Improve HA Namenode Failover detection on the client using Zookeeper

2015-02-27 Thread Arun Suresh (JIRA)
Arun Suresh created HDFS-7858:
-

 Summary: Improve HA Namenode Failover detection on the client 
using Zookeeper
 Key: HDFS-7858
 URL: https://issues.apache.org/jira/browse/HDFS-7858
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arun Suresh
Assignee: Arun Suresh


In an HA deployment, clients are configured with the hostnames of both the 
Active and Standby Namenodes. A client will first try one of the NNs 
(non-deterministically), and if it happens to be a standby NN, that NN will 
tell the client to retry the request on the other Namenode.

If the client happens to talk to the Standby first, and the Standby is 
undergoing GC or is otherwise busy, the client might not get a response soon 
enough to try the other NN.

Proposed approach to solve this:
1) Since Zookeeper is already used as the failover controller, the clients 
could talk to ZK and find out which is the active namenode before contacting it.
2) Long-lived DFSClients would have a ZK watch configured which fires when 
there is a failover, so they do not have to query ZK every time to find out 
the active NN (a minimal sketch of (1) and (2) follows below).
3) Clients can also cache the last active NN in the user's home directory 
(~/.lastNN) so that short-lived clients can try that Namenode first before 
querying ZK.
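
Below is a minimal, purely illustrative sketch of (1) and (2) using the plain ZooKeeper client API. The znode path and its plain-text content are assumptions made for readability; the real failover controller stores a serialized record of the active node under {{/hadoop-ha/<nameservice>/}}, so an actual client would have to decode that format.

{code}
import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

/** Sketch: resolve the active NN from ZK and invalidate the cache on failover. */
public class ActiveNNResolver implements Watcher {
  // Hypothetical znode holding the active NN's RPC address as plain text.
  private static final String ACTIVE_NN_ZNODE = "/hadoop-ha/mycluster/activeNN";

  private final ZooKeeper zk;
  private volatile String activeNN;  // cached until the watch fires

  public ActiveNNResolver(String zkQuorum) throws Exception {
    // "this" is registered as the default watcher, so process() sees changes.
    zk = new ZooKeeper(zkQuorum, 30000, this);
  }

  /** Return the cached active NN, reading it from ZK on first use. */
  public synchronized String getActiveNN() throws Exception {
    if (activeNN == null) {
      // watch=true re-arms the default watcher for the next failover.
      byte[] data = zk.getData(ACTIVE_NN_ZNODE, true, null);
      activeNN = new String(data, StandardCharsets.UTF_8);
    }
    return activeNN;
  }

  @Override
  public void process(WatchedEvent event) {
    // A failover changed the znode: drop the cache so the next call re-reads it.
    if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
      activeNN = null;
    }
  }
}
{code}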



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


DISCUSSION: Patch commit criteria.

2015-02-27 Thread Konstantin Shvachko
There were discussions on several jiras and threads recently about how RTC
actually works in Hadoop.
My opinion has always been that for a patch to be committed it needs the
approval (+1) of at least one committer other than the author and no -1s.
The Bylaws seem to be stating just that:
"Consensus approval of active committers, but with a minimum of one +1."
See the full version under Actions / Code Change


It turned out people have different readings of that part of the Bylaws, and
different opinions on how RTC should work in different cases. Some of the
questions that were raised include:
 - Should we clarify the Code Change decision-making clause in the Bylaws?
 - Should there be relaxed criteria for "trivial" changes?
 - Can a patch be committed if approved only by a non-committer?
 - Can a patch be committed based on self-review by a committer?
 - What is the point for a non-committer to review the patch?
Creating this thread to discuss these (and others that I am sure I missed)
issues and to combine multiple discussions into one.

My personal opinion is that we should just stick to tradition. Good or bad, it
has worked for this community so far.
I think most of the discrepancies arise from the fact that reviewers are
hard to find. Maybe this should be the focus of improvements rather than
the RTC rules.

Thanks,
--Konst


[jira] [Resolved] (HDFS-6445) NFS: Add a log message 'Permission denied' while writing data from read only mountpoint

2015-02-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HDFS-6445.
--
Resolution: Duplicate

> NFS: Add a log message 'Permission denied' while writing data from read only 
> mountpoint
> ---
>
> Key: HDFS-6445
> URL: https://issues.apache.org/jira/browse/HDFS-6445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Yesha Vora
>Assignee: Brandon Li
>
> Add a log message in the NFS log file when a write operation is performed on 
> a read-only mount point.
> steps:
> 1) set dfs.nfs.exports.allowed.hosts =  ro
> 2) Restart nfs server
> 3) Append data on file present on hdfs
> {noformat}
> bash: cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
> cat: write error: Input/output error
> {noformat}
> The real reason for the append operation failure is permission denied. It 
> should be printed in the NFS logs. Currently, the NFS log prints the messages 
> below.
> {noformat}
> 2014-05-22 21:50:56,068 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 7340032 
> length:1048576 stableHow:0 xid:1904385849
> 2014-05-22 21:50:56,076 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1921163065
> 2014-05-22 21:50:56,078 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 8388608 
> length:1048576 stableHow:0 xid:1921163065
> 2014-05-22 21:50:56,086 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1937940281
> 2014-05-22 21:50:56,087 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 9437184 
> length:1048576 stableHow:0 xid:1937940281
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1954717497
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 10485760 
> length:168 stableHow:0 xid:1954717497
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7859) Erasure Coding: Persist EC schemas in NameNode

2015-02-27 Thread Kai Zheng (JIRA)
Kai Zheng created HDFS-7859:
---

 Summary: Erasure Coding: Persist EC schemas in NameNode
 Key: HDFS-7859
 URL: https://issues.apache.org/jira/browse/HDFS-7859
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


In a meetup discussion with [~zhz] and [~jingzhao], it was suggested that we 
persist EC schemas in the NameNode centrally and reliably, so that EC zones can 
reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: DISCUSSION: Patch commit criteria.

2015-02-27 Thread Andrew Wang
I have the same interpretation as Konst on this. +1 from at least one
committer other than the author, no -1s.

I don't think there should be an exclusion for trivial patches, since the
definition of "trivial" is subjective. The exception here is CHANGES.txt,
which is something we really should get rid of.

Non-committers are still strongly encouraged to review patches even if
their +1 is not binding. Additional reviews improve code quality. Also,
when it comes to choosing new committers, one of the primary things I look
for is a history of quality code reviews.

Best,
Andrew

On Fri, Feb 27, 2015 at 1:04 PM, Konstantin Shvachko 
wrote:

> There were discussions on several jiras and threads recently about how RTC
> actually works in Hadoop.
> My opinion has always been that for a patch to be committed it needs the
> approval (+1) of at least one committer other than the author and no -1s.
> The Bylaws seem to be stating just that:
> "Consensus approval of active committers, but with a minimum of one +1."
> See the full version under Actions / Code Change
> 
>
> It turned out people have different readings of that part of the Bylaws, and
> different opinions on how RTC should work in different cases. Some of the
> questions that were raised include:
>  - Should we clarify the Code Change decision-making clause in the Bylaws?
>  - Should there be relaxed criteria for "trivial" changes?
>  - Can a patch be committed if approved only by a non-committer?
>  - Can a patch be committed based on self-review by a committer?
>  - What is the point for a non-committer to review the patch?
> Creating this thread to discuss these (and others that I am sure I missed)
> issues and to combine multiple discussions into one.
>
> My personal opinion is that we should just stick to tradition. Good or bad, it
> has worked for this community so far.
> I think most of the discrepancies arise from the fact that reviewers are
> hard to find. Maybe this should be the focus of improvements rather than
> the RTC rules.
>
> Thanks,
> --Konst
>


[jira] [Resolved] (HDFS-7769) TestHDFSCLI create files in hdfs project root dir

2015-02-27 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-7769.
---
   Resolution: Fixed
Fix Version/s: 2.7.0

> TestHDFSCLI create files in hdfs project root dir
> -
>
> Key: HDFS-7769
> URL: https://issues.apache.org/jira/browse/HDFS-7769
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Tsz Wo Nicholas Sze
>Priority: Trivial
> Fix For: 2.7.0
>
>
> After running TestHDFSCLI, two files (data and .data.crc) remain in hdfs 
> project root dir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7860) Get HA NameNode information from config file

2015-02-27 Thread Thanh Do (JIRA)
Thanh Do created HDFS-7860:
--

 Summary: Get HA NameNode information from config file
 Key: HDFS-7860
 URL: https://issues.apache.org/jira/browse/HDFS-7860
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Thanh Do


In the current code, the client uses files under /tmp to determine NameNode HA 
information. We should follow a cleaner approach that gets this information 
from the configuration file (similar to the Java client).
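
For reference, the Java client resolves the HA NameNodes from the standard configuration keys ({{dfs.nameservices}}, {{dfs.ha.namenodes.<nameservice>}}, {{dfs.namenode.rpc-address.<nameservice>.<nnId>}}). A minimal sketch of reading those keys with the Java {{Configuration}} API is below; the {{HaNameNodeLookup}} class name and the "mycluster" nameservice id are made up for illustration, and the native client would mirror the same logic in its own config parser.

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

/** Illustrative helper: list the HA NameNode RPC addresses for a nameservice. */
public class HaNameNodeLookup {
  public static List<String> rpcAddresses(Configuration conf, String nameservice) {
    List<String> addresses = new ArrayList<>();
    // dfs.ha.namenodes.<nameservice> lists the logical NN ids, e.g. "nn1,nn2".
    String[] nnIds = conf.getTrimmedStrings("dfs.ha.namenodes." + nameservice);
    for (String nnId : nnIds) {
      // dfs.namenode.rpc-address.<nameservice>.<nnId> holds host:port.
      String addr =
          conf.get("dfs.namenode.rpc-address." + nameservice + "." + nnId);
      if (addr != null) {
        addresses.add(addr);
      }
    }
    return addresses;
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();  // loads core-site.xml from the classpath
    conf.addResource("hdfs-site.xml");         // plus the HDFS settings
    System.out.println(rpcAddresses(conf, "mycluster"));
  }
}
{code}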



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7861) Revisit Windows socket API compatibility

2015-02-27 Thread Thanh Do (JIRA)
Thanh Do created HDFS-7861:
--

 Summary: Revisit Windows socket API compatibility 
 Key: HDFS-7861
 URL: https://issues.apache.org/jira/browse/HDFS-7861
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Thanh Do


The Windows socket API is somewhat different from its POSIX counterpart (as 
described here: http://tangentsoft.net/wskfaq/articles/bsd-compatibility.html). 
We should address the compatibility issues in this JIRA. For instance, on 
Windows, {{WSAStartup}} must be called before any other socket API for those 
APIs to work correctly. Moreover, since the Winsock API does not return error 
codes in the {{errno}} variable, {{perror}} does not work as it does on POSIX 
systems. We should use {{WSAGetLastErrorMessage}} instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7862) Revisit the use of long data type

2015-02-27 Thread Thanh Do (JIRA)
Thanh Do created HDFS-7862:
--

 Summary: Revisit the use of long data type
 Key: HDFS-7862
 URL: https://issues.apache.org/jira/browse/HDFS-7862
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Thanh Do


We should revisit the places where the {{long}} data type is used. On POSIX 
systems, {{long}} takes 4 bytes on a 32-bit architecture and 8 bytes on a 
64-bit one. However, on Windows, {{long}} takes 4 bytes no matter what. Because 
of this, compilation on Windows could finish successfully, but some tests might 
fail. Additionally, compilation on Windows will generate many warnings such as 
"conversion from 'uint64_t' to 'unsigned long', possible loss of data".

We should stick with {{int64_t}} or {{uint64_t}} instead whenever we expect a 
variable to be a signed or unsigned 8-byte value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)