[jira] [Created] (HDFS-7665) Add definition of truncate preconditions/postconditions to filesystem specification

2015-01-23 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-7665:


 Summary: Add definition of truncate preconditions/postconditions 
to filesystem specification
 Key: HDFS-7665
 URL: https://issues.apache.org/jira/browse/HDFS-7665
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 3.0.0
Reporter: Steve Loughran


With the addition of a major new feature to filesystems, the filesystem 
specification in hadoop-common/site is now out of sync. 

This means that
# there's no strict specification of what it should do
# you can't derive tests from that specification
# other people trying to implement the API will have to infer what to do from 
the HDFS source
# there's no way to decide whether or not the HDFS implementation does what is 
intended.
# without matching tests against the raw local FS, differences between the HDFS 
impl and the POSIX-standard one won't be caught until it is potentially too 
late to fix.

The operation should be relatively easy to define (after a truncate, the file's 
bytes [0...len-1] must equal the original bytes, length(file) == len, etc.)

The truncate tests already written could then be pulled up into contract tests 
which any filesystem implementation can run against.
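As a local illustration of those postconditions (this uses java.nio's FileChannel.truncate against a temp file, not the Hadoop FileSystem API; the class and variable names below are only for this sketch, not from the proposed spec):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;

public class TruncateContractSketch {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("truncate-contract", ".bin");
        byte[] original = {10, 20, 30, 40, 50, 60, 70, 80};
        Files.write(file, original);

        int len = 5;
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ch.truncate(len);  // shrink the file to len bytes
        }

        byte[] after = Files.readAllBytes(file);
        // postcondition 1: length(file) == len
        if (after.length != len) throw new AssertionError("length != len");
        // postcondition 2: bytes [0...len-1] equal the original bytes
        if (!Arrays.equals(after, Arrays.copyOfRange(original, 0, len)))
            throw new AssertionError("prefix bytes changed");

        System.out.println("truncate postconditions hold");
        Files.delete(file);
    }
}
```

A contract test in the hadoop-common filesystem specification would express the same two checks against any FileSystem implementation.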



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #2014

2015-01-23 Thread Apache Jenkins Server
See 

Changes:

[ozawa] HADOOP-11500. InputStream is left unclosed in ApplicationClassLoader. 
Contributed by Ted Yu.

[arp] HDFS-7575. Upgrade should generate a unique storage ID for each volume. 
(Contributed by Arpit Agarwal)

[aw] HADOOP-11008. Remove duplicated description about proxy-user in site 
documents (Masatake Iwasaki via aw)

[arp] HDFS-7575. Fix CHANGES.txt

[cnauroth] HDFS-3519. Checkpoint upload may interfere with a concurrent 
saveNamespace. Contributed by Ming Ma.

[aajisaka] HADOOP-11493. Fix some typos in kms-acls.xml description. 
(Contributed by Charles Lamb)

[yliu] HDFS-7660. BlockReceiver#close() might be called multiple times, which 
causes the fsvolume reference being released incorrectly. (Lei Xu via yliu)

[ozawa] YARN-3082. Non thread safe access to systemCredentials in 
NodeHeartbeatResponse processing. Contributed by Anubhav Dhoot.

--
[...truncated 5440 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.299 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.47 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.152 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.867 sec - in 
org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.992 sec - in 
org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.613 sec - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.13 sec - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Running org.apache.hadoop.hdfs.qjournal.server.TestJournal
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.735 sec - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournal
Running org.apache.hadoop.hdfs.TestModTime
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.95 sec - in 
org.apache.hadoop.hdfs.TestModTime
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.206 sec - in 
org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.692 sec - in 
org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Running org.apache.hadoop.hdfs.security.TestDelegationToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.068 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationToken
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.3 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.738 sec - in 
org.apache.hadoop.hdfs.TestLocalDFS
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.332 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.161 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.353 sec - in 
org.apache.hadoop.hdfs.TestQuota
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.594 sec - in 
org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.967 sec - in 
org.apache.hadoop.hdfs.TestFileCreationEmpty
Running org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.17 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.975 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Tests run: 20

Hadoop-Hdfs-trunk - Build # 2014 - Still Failing

2015-01-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2014/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5633 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  01:01 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  2.399 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:01 h
[INFO] Finished at: 2015-01-23T12:35:27+00:00
[INFO] Final Memory: 71M/1100M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8716015324946851387.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire6845744502488481649tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_0101351074586690658tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-11493
Updating HDFS-7575
Updating HADOOP-11500
Updating HDFS-3519
Updating YARN-3082
Updating HDFS-7660
Updating HADOOP-11008
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

Hadoop-Hdfs-trunk-Java8 - Build # 79 - Still Failing

2015-01-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/79/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7021 lines...]

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  1.719 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-01-23T14:17:17+00:00
[INFO] Final Memory: 51M/237M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-11493
Updating HDFS-7575
Updating HADOOP-11500
Updating HDFS-3519
Updating YARN-3082
Updating HDFS-7660
Updating HADOOP-11008
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
20 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName

Error Message:
java.util.zip.ZipException: invalid code lengths set

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid code lengths set
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at 
org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown 
Source)
at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown 
Source)
at 
org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2408)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2396)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2467)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2420)
at 
org.apache.hadoop.conf.Configuration.getProps(Conf

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #79

2015-01-23 Thread Apache Jenkins Server
See 

Changes:

[ozawa] HADOOP-11500. InputStream is left unclosed in ApplicationClassLoader. 
Contributed by Ted Yu.

[arp] HDFS-7575. Upgrade should generate a unique storage ID for each volume. 
(Contributed by Arpit Agarwal)

[aw] HADOOP-11008. Remove duplicated description about proxy-user in site 
documents (Masatake Iwasaki via aw)

[arp] HDFS-7575. Fix CHANGES.txt

[cnauroth] HDFS-3519. Checkpoint upload may interfere with a concurrent 
saveNamespace. Contributed by Ming Ma.

[aajisaka] HADOOP-11493. Fix some typos in kms-acls.xml description. 
(Contributed by Charles Lamb)

[yliu] HDFS-7660. BlockReceiver#close() might be called multiple times, which 
causes the fsvolume reference being released incorrectly. (Lei Xu via yliu)

[ozawa] YARN-3082. Non thread safe access to systemCredentials in 
NodeHeartbeatResponse processing. Contributed by Anubhav Dhoot.

--
[...truncated 6828 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.216 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.069 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.171 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.6 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.493 sec - in 
org.apache.hadoop.hdfs.TestDFSStartupVersions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.895 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.699 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.034 sec - in 
org.apache.hadoop.hdfs.TestRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.673 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.095 sec - in 
org.apache.hadoop.hdfs.TestDFSRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.321 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.029 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 181.646 sec - 
in org.apache.hadoop.hdfs.TestLeaseRecovery2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 7, Fa

[jira] [Created] (HDFS-7666) Datanode blockId layout upgrade threads should be daemon thread

2015-01-23 Thread Rakesh R (JIRA)
Rakesh R created HDFS-7666:
--

 Summary: Datanode blockId layout upgrade threads should be daemon 
thread
 Key: HDFS-7666
 URL: https://issues.apache.org/jira/browse/HDFS-7666
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Rakesh R
Assignee: Rakesh R


This jira is to mark the layout upgrade thread as daemon thread.

{code}
 int numLinkWorkers = datanode.getConf().getInt(
 DFSConfigKeys.DFS_DATANODE_BLOCK_ID_LAYOUT_UPGRADE_THREADS_KEY,
 DFSConfigKeys.DFS_DATANODE_BLOCK_ID_LAYOUT_UPGRADE_THREADS);
ExecutorService linkWorkers = Executors.newFixedThreadPool(numLinkWorkers);
{code}
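A minimal sketch of the proposed change (the config lookup above is quoted from HDFS; the ThreadFactory below is only illustrative of the daemon-flag fix, not the actual patch, which would likely reuse an existing Hadoop/Guava thread-factory utility):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class DaemonPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        int numLinkWorkers = 4;  // stands in for the DFSConfigKeys lookup above

        // Mark each worker as a daemon so the pool cannot keep the JVM alive
        ThreadFactory daemonFactory = r -> {
            Thread t = new Thread(r, "blockid-layout-upgrade-worker");
            t.setDaemon(true);
            return t;
        };
        ExecutorService linkWorkers =
            Executors.newFixedThreadPool(numLinkWorkers, daemonFactory);

        CountDownLatch ran = new CountDownLatch(1);
        linkWorkers.submit(() -> {
            System.out.println("daemon? " + Thread.currentThread().isDaemon());
            ran.countDown();
        });
        ran.await();
        linkWorkers.shutdown();
    }
}
```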





[jira] [Resolved] (HDFS-7421) Move processing of postponed over-replicated blocks to a background task

2015-01-23 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-7421.
--
Resolution: Duplicate

> Move processing of postponed over-replicated blocks to a background task
> 
>
> Key: HDFS-7421
> URL: https://issues.apache.org/jira/browse/HDFS-7421
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Affects Versions: 2.6.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>
> In an HA environment, we postpone sending block invalidates to DNs until all 
> DNs holding a given block have done at least one block report to the NN after 
> it became active. When that first block report after becoming active does 
> occur, we attempt to reprocess all postponed misreplicated blocks inline with 
> the block report RPC. In the case where there are many postponed 
> misreplicated blocks, this can cause block report RPCs to take an 
> inordinately long time to complete, sometimes on the order of minutes, which 
> has the potential to tie up RPC handlers, block incoming RPCs, etc. There's 
> no need to hurriedly process all postponed misreplicated blocks so that we 
> can quickly send invalidate commands back to DNs, so let's move this 
> processing outside of the RPC handler context and into a background thread.





[jira] [Created] (HDFS-7667) Various typos and improvements to HDFS Federation doc

2015-01-23 Thread Charles Lamb (JIRA)
Charles Lamb created HDFS-7667:
--

 Summary: Various typos and improvements to HDFS Federation doc
 Key: HDFS-7667
 URL: https://issues.apache.org/jira/browse/HDFS-7667
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor


Fix several incorrect commands, typos, grammatical errors, etc. in the HDFS 
Federation doc.





[jira] [Created] (HDFS-7668) Convert site documentation from apt to markdown

2015-01-23 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7668:
--

 Summary: Convert site documentation from apt to markdown
 Key: HDFS-7668
 URL: https://issues.apache.org/jira/browse/HDFS-7668
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


HDFS analog to HADOOP-11495





[jira] [Created] (HDFS-7669) HDFS Design Doc references commands that no longer exist.

2015-01-23 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7669:
--

 Summary: HDFS Design Doc references commands that no longer exist.
 Key: HDFS-7669
 URL: https://issues.apache.org/jira/browse/HDFS-7669
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer


hadoop dfs should be hadoop fs
hadoop dfsadmin should be hdfs dfsadmin
hadoop dfs -rmr should be hadoop fs -rm -R





[jira] [Created] (HDFS-7670) HDFS Quota guide has typos, incomplete command lines

2015-01-23 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7670:
--

 Summary: HDFS Quota guide has typos, incomplete command lines
 Key: HDFS-7670
 URL: https://issues.apache.org/jira/browse/HDFS-7670
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Allen Wittenauer


HDFS quota guide uses "fs -count", etc as a valid command instead of "hadoop 
fs", etc.

There is also a typo in 'director'.





[jira] [Created] (HDFS-7671) hdfs user guide should point to the common rack awareness doc

2015-01-23 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7671:
--

 Summary: hdfs user guide should point to the common rack awareness 
doc
 Key: HDFS-7671
 URL: https://issues.apache.org/jira/browse/HDFS-7671
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer


HDFS user guide has a section on rack awareness that should really just be a 
pointer to the common doc.





[jira] [Created] (HDFS-7672) Handle write failure for EC blocks

2015-01-23 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-7672:
-

 Summary: Handle write failure for EC blocks
 Key: HDFS-7672
 URL: https://issues.apache.org/jira/browse/HDFS-7672
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


For (6, 3)-Reed-Solomon, a client writes to 6 data blocks and 3 parity blocks 
concurrently.  We need to handle datanode or network failures when writing an 
EC BlockGroup.
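The fault-tolerance budget follows from the Reed-Solomon parameters: any 6 of the 9 blocks suffice to reconstruct the stripe, so at most 3 concurrent streamer failures are survivable. A toy sketch of that check (the names below are illustrative, not from the HDFS client):

```java
public class EcWriteFailureSketch {
    // RS(k, m): any k of the k+m blocks reconstruct the stripe, so a write
    // can continue as long as at most m streamers have failed
    static boolean groupStillWritable(int dataBlocks, int parityBlocks,
                                      int failedStreamers) {
        return failedStreamers <= parityBlocks;
    }

    public static void main(String[] args) {
        int data = 6, parity = 3;  // (6, 3)-Reed-Solomon as in the summary
        // 3 failures: 6 healthy blocks remain, the group is still decodable
        System.out.println(groupStillWritable(data, parity, 3));
        // 4 failures: fewer than 6 blocks left, the write must fail
        System.out.println(groupStillWritable(data, parity, 4));
    }
}
```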





[jira] [Created] (HDFS-7673) synthetic load generator docs give incorrect/incomplete commands

2015-01-23 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7673:
--

 Summary: synthetic load generator docs give incorrect/incomplete 
commands
 Key: HDFS-7673
 URL: https://issues.apache.org/jira/browse/HDFS-7673
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Allen Wittenauer


The synthetic load generator guide gives this helpful command to start it:

{code}
java LoadGenerator [options]
{code}

This, of course, won't work.  What's the class path?  What jar is it in?  Is 
this really the command?  Isn't there a shell script wrapping this?

This atrocity against normal users is committed three more times after this one 
with equally incomplete commands for other parts of the system.


