[jira] [Resolved] (HDFS-8604) Erasure Coding: update invalidateBlock(..) logic for striped block

2015-07-20 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su resolved HDFS-8604.
-
Resolution: Duplicate

Already fixed in HDFS-8619.

> Erasure Coding: update invalidateBlock(..) logic for striped block
> --
>
> Key: HDFS-8604
> URL: https://issues.apache.org/jira/browse/HDFS-8604
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
>
> {code}
>   private boolean invalidateBlock(BlockToMarkCorrupt b, DatanodeInfo dn)
>       throws IOException {
>     ..
>     } else if (nr.liveReplicas() >= 1) {
>       // If we have at least one copy on a live node, then we can delete it.
>       addToInvalidates(b.corrupted, dn);
>       removeStoredBlock(b.stored, node);
> {code}
> We don't delete a corrupted block when it is the only copy left; we leave 
> the decision to the user, who then has a chance to recover the data 
> manually. For a striped block, liveReplicas() should not be compared 
> against "1"; this logic needs updating.
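
For illustration, a minimal sketch of the kind of check the description argues 
for (hypothetical code, not the committed HDFS-8619 change; isStriped() and a 
getRealDataBlockNum() helper returning the data-unit count of the erasure 
coding schema are assumed):

{code}
// Hypothetical sketch only. A striped block group with d data units stays
// readable only while at least d healthy internal blocks survive, so the
// contiguous-replication guard "liveReplicas() >= 1" is too weak for it.
private boolean hasEnoughLiveReplicas(BlockInfo stored, int liveReplicas) {
  if (stored.isStriped()) {
    // Assumed helper: returns d, the data-unit count of the EC schema.
    return liveReplicas >= ((BlockInfoStriped) stored).getRealDataBlockNum();
  }
  return liveReplicas >= 1; // contiguous block: one live copy suffices
}
{code}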



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #251

2015-07-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/251/

Changes:

[stevel] HADOOP-12235 hadoop-openstack junit & mockito dependencies should be 
"provided". (Ted Yu via stevel)

--
[...truncated 6933 lines...]

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.712 sec - in 
org.apache.hadoop.hdfs.TestSeekBug
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.198 sec - in 
org.apache.hadoop.hdfs.TestParallelRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSMkdirs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.833 sec - in 
org.apache.hadoop.hdfs.TestDFSMkdirs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDeprecatedKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.448 sec - in 
org.apache.hadoop.hdfs.TestDeprecatedKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.763 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSConfigKeys
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec - in 
org.apache.hadoop.hdfs.TestDFSConfigKeys
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.524 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.257 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 142.614 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.583 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 150.629 sec - 
in org.apache.hadoop.hdfs.TestDFSClientRetries
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.69 sec - in 
org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFsLimits
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.19 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFsLimits
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSImage
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.926 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestGenericJournalConf
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.668 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestGenericJournalConf
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestStoragePolicySummary
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestStoragePolicySummary
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogFileOutputStream
Tests run:

Hadoop-Hdfs-trunk-Java8 - Build # 251 - Still Failing

2015-07-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/251/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7126 lines...]
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:02 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:02 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.264 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:05 h
[INFO] Finished at: 2015-07-20T12:41:07+00:00
[INFO] Final Memory: 71M/1096M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter7279853023266504911.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire1684896136783617139tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_196221447796638509702tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4299734 bytes
Compression is 0.0%
Took 4.1 sec
Recording test results
Updating HADOOP-12235
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
6 tests failed.
REGRESSION:  
org.apache.hadoop.fs.TestSymlinkHdfsFileSystem.testCreateLinkUsingAbsPaths

Error Message:
java.util.zip.ZipException: invalid distance too far back

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid distance too 
far back
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2720)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1043)
at 
org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1093)
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2253)
at 
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFil

Hadoop-Hdfs-trunk - Build # 2189 - Still Failing

2015-07-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2189/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7722 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:08 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.063 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-07-20T14:28:16+00:00
[INFO] Final Memory: 55M/864M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2181
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 3837043 bytes
Compression is 0.0%
Took 17 sec
Recording test results
Updating HADOOP-12235
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
12 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.web.TestWebHDFSXAttr.testCreateXAttr

Error Message:
Read timed out

Stack Trace:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at 
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:351)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:90)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:630)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:480)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:509)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.aut

Build failed in Jenkins: Hadoop-Hdfs-trunk #2189

2015-07-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2189/

Changes:

[stevel] HADOOP-12235 hadoop-openstack junit & mockito dependencies should be 
"provided". (Ted Yu via stevel)

--
[...truncated 7529 lines...]
Running org.apache.hadoop.hdfs.TestDFSFinalize
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.15 sec - in 
org.apache.hadoop.hdfs.TestDFSFinalize
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.94 sec - in 
org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestEncryptionZones
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.543 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZones
Running org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.415 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.681 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.374 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.215 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.353 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.266 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.535 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDataTransferProtocol
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.18 sec - in 
org.apache.hadoop.hdfs.TestDataTransferProtocol
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.466 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.132 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.187 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.393 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 14, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 119.054 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.247 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.814 sec - in 
org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.556 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.265 sec - in 
org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.143 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.835 sec - in 
org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.084 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.223 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.321 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running 

[jira] [Created] (HDFS-8799) Erasure Coding: add tests for process corrupt striped blocks

2015-07-20 Thread Walter Su (JIRA)
Walter Su created HDFS-8799:
---

 Summary: Erasure Coding: add tests for process corrupt striped 
blocks
 Key: HDFS-8799
 URL: https://issues.apache.org/jira/browse/HDFS-8799
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-3532) TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy1 times out

2015-07-20 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-3532.
--
Resolution: Cannot Reproduce

This is an ancient/stale flaky test JIRA. Resolving.

> TestDatanodeBlockScanner.testBlockCorruptionRecoveryPolicy1 times out
> -
>
> Key: HDFS-3532
> URL: https://issues.apache.org/jira/browse/HDFS-3532
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>
> I've seen this test time out on recent trunk Jenkins test-patch runs even 
> though HDFS-3266 went in a couple of weeks ago.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-4001) TestSafeMode#testInitializeReplQueuesEarly may time out

2015-07-20 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-4001.
--
Resolution: Fixed

Haven't seen this fail in a very long time. Closing this out. Feel free to 
reopen if you disagree.

> TestSafeMode#testInitializeReplQueuesEarly may time out
> ---
>
> Key: HDFS-4001
> URL: https://issues.apache.org/jira/browse/HDFS-4001
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
> Attachments: timeout.txt.gz
>
>
> Saw this failure on a recent branch-2 Jenkins run; it has also been seen on 
> trunk.
> {noformat}
> java.util.concurrent.TimeoutException: Timed out waiting for condition
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:107)
>   at 
> org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:191)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-3660) TestDatanodeBlockScanner#testBlockCorruptionRecoveryPolicy2 times out

2015-07-20 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-3660.
--
  Resolution: Cannot Reproduce
Target Version/s:   (was: )

This is an ancient/stale flaky test JIRA. Resolving.

> TestDatanodeBlockScanner#testBlockCorruptionRecoveryPolicy2 times out   
> 
>
> Key: HDFS-3660
> URL: https://issues.apache.org/jira/browse/HDFS-3660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Priority: Minor
>
> Saw this on a recent Jenkins run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-3811) TestPersistBlocks#TestRestartDfsWithFlush appears to be flaky

2015-07-20 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-3811.
--
Resolution: Cannot Reproduce

I don't think I've seen this fail in a very long time. Going to resolve this. 
Please reopen if you disagree.

> TestPersistBlocks#TestRestartDfsWithFlush appears to be flaky
> -
>
> Key: HDFS-3811
> URL: https://issues.apache.org/jira/browse/HDFS-3811
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Andrew Wang
>Assignee: Todd Lipcon
> Attachments: stacktrace, testfail-editlog.log, testfail.log, 
> testpersistblocks.txt
>
>
> This test failed on a recent Jenkins build, but passes for me locally. Seems 
> flaky.
> See:
> https://builds.apache.org/job/PreCommit-HDFS-Build/3021//testReport/org.apache.hadoop.hdfs/TestPersistBlocks/TestRestartDfsWithFlush/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-2433) TestFileAppend4 fails intermittently

2015-07-20 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-2433.
--
Resolution: Cannot Reproduce

I don't think I've seen this fail in a long, long time. Going to close this 
out. Please reopen if you disagree.

> TestFileAppend4 fails intermittently
> 
>
> Key: HDFS-2433
> URL: https://issues.apache.org/jira/browse/HDFS-2433
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode, test
>Affects Versions: 0.20.205.0, 1.0.0
>Reporter: Robert Joseph Evans
>Priority: Critical
> Attachments: failed.tar.bz2
>
>
> A Jenkins build we have running failed twice in a row with issues from 
> TestFileAppend4.testAppendSyncReplication1. In an attempt to reproduce the 
> error, I ran TestFileAppend4 in a loop overnight, saving the results away. 
> (No clean was done between test runs.)
> When TestFileAppend4 is run in a loop, the testAppendSyncReplication[012] 
> tests fail about 10% of the time (14 times out of 130 tries). They all fail 
> with something like the following. Often only one of the tests fails, but I 
> have seen as many as two fail in one run.
> {noformat}
> Testcase: testAppendSyncReplication2 took 32.198 sec
> FAILED
> Should have 2 replicas for that block, not 1
> junit.framework.AssertionFailedError: Should have 2 replicas for that block, 
> not 1
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.replicationTest(TestFileAppend4.java:477)
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.testAppendSyncReplication2(TestFileAppend4.java:425)
> {noformat}
> I also saw several other tests that are part of TestFileAppend4 fail during 
> this experiment. They may all be related to one another, so I am filing them 
> in the same JIRA. If it turns out that they are not related, they can be 
> split up later.
> testAppendSyncBlockPlusBbw failed 6 out of the 130 runs, or about 5% of the 
> time:
> {noformat}
> Testcase: testAppendSyncBlockPlusBbw took 1.633 sec
> FAILED
> unexpected file size! received=0 , expected=1024
> junit.framework.AssertionFailedError: unexpected file size! received=0 , 
> expected=1024
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.assertFileSize(TestFileAppend4.java:136)
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.testAppendSyncBlockPlusBbw(TestFileAppend4.java:401)
> {noformat}
> testAppendSyncChecksum[012] failed 2 out of the 130 runs, or about 1.5% of 
> the time:
> {noformat}
> Testcase: testAppendSyncChecksum1 took 32.385 sec
> FAILED
> Should have 1 replica for that block, not 2
> junit.framework.AssertionFailedError: Should have 1 replica for that block, 
> not 2
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.checksumTest(TestFileAppend4.java:556)
> at 
> org.apache.hadoop.hdfs.TestFileAppend4.testAppendSyncChecksum1(TestFileAppend4.java:500)
> {noformat}
> I will attach logs for all of the failures. Be aware that I did change some 
> of the logging messages in this test so I could better see when 
> testAppendSyncReplication started and ended. Other than that, the code is 
> stock 0.20.205 RC2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8764) Generate Hadoop RPC stubs from protobuf definitions

2015-07-20 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HDFS-8764.
--
   Resolution: Fixed
Fix Version/s: HDFS-8707

Committed to the HDFS-8707 branch. Thanks Jing and James for the reviews.

> Generate Hadoop RPC stubs from protobuf definitions
> ---
>
> Key: HDFS-8764
> URL: https://issues.apache.org/jira/browse/HDFS-8764
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: HDFS-8707
>
> Attachments: HDFS-8764.000.patch
>
>
> It would be nice to have the RPC stubs generated from the protobuf 
> definitions, similar to what HADOOP-10388 achieved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8788) Implement unit tests for remote block reader in libhdfspp

2015-07-20 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HDFS-8788.
--
   Resolution: Fixed
Fix Version/s: HDFS-8707

Committed to the HDFS-8707 branch. Thanks James for the reviews.

> Implement unit tests for remote block reader in libhdfspp
> -
>
> Key: HDFS-8788
> URL: https://issues.apache.org/jira/browse/HDFS-8788
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: HDFS-8707
>
> Attachments: HDFS-8788.000.patch
>
>
> This JIRA proposes implementing unit tests for the remote block reader 
> using gmock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8800) shutdown has bugs

2015-07-20 Thread John Smith (JIRA)
John Smith created HDFS-8800:


 Summary: shutdown has bugs
 Key: HDFS-8800
 URL: https://issues.apache.org/jira/browse/HDFS-8800
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: John Smith






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8616) Cherry pick HDFS-6495 for excess block leak

2015-07-20 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-8616.
-
Resolution: Done

I've backported HDFS-6945 to 2.7.2. Please reopen this issue if you disagree.

> Cherry pick HDFS-6495 for excess block leak
> ---
>
> Key: HDFS-8616
> URL: https://issues.apache.org/jira/browse/HDFS-8616
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Akira AJISAKA
>
> Busy clusters quickly leak tens or hundreds of thousands of excess blocks, 
> which slow BR processing.  HDFS-6495 should be cherry-picked into 2.7.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8761) Windows HDFS daemon - datanode.DirectoryScanner: Error compiling report (...) XXX is not a prefix of YYY

2015-07-20 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-8761.
-
Resolution: Not A Problem

Hello [~odelalleau].

I answered your question on Stack Overflow.  I'm pasting the answer here too.  
After using the techniques I described to configure a path with a drive spec, I 
expect you won't see these errors anymore.  In the future, the best forum for 
questions like this is the u...@hadoop.apache.org mailing list.

You can specify a drive spec in {{hadoop.tmp.dir}} in core-site.xml by 
prepending a '/' to the absolute path and using '/' as the path separator 
instead of '\' for all path elements.  For example, if the desired absolute 
path is D:\tmp\hadoop, then it would look like this:

{code}
<property>
  <name>hadoop.tmp.dir</name>
  <value>/D:/tmp/hadoop</value>
</property>
{code}

The reason this works is that the default values for many of the HDFS 
directories are configured to be file://${hadoop.tmp.dir}/suffix.  See the 
default definitions of {{dfs.namenode.name.dir}}, {{dfs.datanode.data.dir}} and 
{{dfs.namenode.checkpoint.dir}} here:

http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

Substituting the above value for {{hadoop.tmp.dir}} yields a valid {{file:}} 
URL with a drive spec and no authority, which satisfies the requirements for 
the HDFS configuration.  It's important to use '/' instead of '\', because a 
bare unencoded '\' character is not valid in URL syntax.

http://www.ietf.org/rfc/rfc1738.txt
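
Concretely, assuming the stock default of 
{{file://${hadoop.tmp.dir}/dfs/name}} for {{dfs.namenode.name.dir}}, the 
substitution works out as follows:

{code}
<!-- Default from hdfs-default.xml: -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/name</value>
</property>
<!-- With hadoop.tmp.dir = /D:/tmp/hadoop, this expands to the
     authority-free URL file:///D:/tmp/hadoop/dfs/name -->
{code}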

If you prefer not to rely on this substitution behavior, then it's also valid 
to override all configuration properties that make use of {{hadoop.tmp.dir}} 
within your hdfs-site.xml file.  Each value must be a full {{file:}} URL.  For 
example:

{code}
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///D:/tmp/hadoop/dfs/name</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///D:/tmp/hadoop/dfs/data</value>
</property>

<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>file:///D:/tmp/hadoop/dfs/namesecondary</value>
</property>
{code}

You might find this more readable overall.

> Windows HDFS daemon - datanode.DirectoryScanner: Error compiling report (...) 
> XXX is not a prefix of YYY
> 
>
> Key: HDFS-8761
> URL: https://issues.apache.org/jira/browse/HDFS-8761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
> Environment: Windows 7, Java SDK 1.8.0_45
>Reporter: Olivier Delalleau
>Priority: Minor
>
> I'm periodically seeing errors like the one below output by the HDFS daemon 
> (started with start-dfs.cmd). This is with the default settings for the data 
> location (i.e., not specified in my hdfs-site.xml). I assume it may be 
> fixable by specifying a path with the drive letter in the config file; 
> however, I haven't been able to do that (see 
> http://stackoverflow.com/questions/31353226/setting-hadoop-tmp-dir-on-windows-gives-error-uri-has-an-authority-component).
> 15/07/11 17:29:57 ERROR datanode.DirectoryScanner: Error compiling report
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> \tmp\hadoop-odelalleau\dfs\data is not a prefix of 
> D:\tmp\hadoop-odelalleau\dfs\data\current\BP-1474392971-10.128.22.110-1436634926842\current\finalized\subdir0\subdir0\blk_1073741825
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:566)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:425)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:406)
> at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:362)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Questions on Namespace/namenode

2015-07-20 Thread Shani Ranasinghe
Hi,

I am Shani. I am currently doing research on how to create multiple
namespaces within the same namenode.

I was looking at the code and found some terms that I would like to
understand better.

1) What is a Namesystem?
2) If I could have guidance on where to look for namespace creation, it
would greatly help.

Any help is appreciated.

Regards,
Shani.