[jira] [Created] (HDFS-7683) Combine usages and percent stats in NameNode UI

2015-01-27 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-7683:
---

 Summary: Combine usages and percent stats in NameNode UI
 Key: HDFS-7683
 URL: https://issues.apache.org/jira/browse/HDFS-7683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor


In the NameNode UI, there are separate rows displaying cluster usage: one in 
bytes, the other in percent.
We can combine these two rows into one that shows the usage with the percent 
usage in brackets.
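
Purely as an illustration of the proposed format, a hypothetical helper (the 
real change would land in the dfshealth template, not in Java code; the class 
and method names below are invented):

{code:java}
import org.apache.hadoop.util.StringUtils;

public class UsageFormat {
  // Hypothetical helper: renders one combined row such as "21.53 GB (3.21%)"
  // instead of separate bytes and percent rows.
  static String combined(long usedBytes, float percentUsed) {
    return StringUtils.byteDesc(usedBytes)
        + " (" + String.format("%.2f", percentUsed) + "%)";
  }

  public static void main(String[] args) {
    System.out.println(combined(23119708160L, 3.21f)); // 21.53 GB (3.21%)
  }
}
{code}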



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7684) The Host:Port Settings of dfs.namenode.secondary.http-address should be trimmed before use

2015-01-27 Thread Tianyin Xu (JIRA)
Tianyin Xu created HDFS-7684:


 Summary: The Host:Port Settings of 
dfs.namenode.secondary.http-address should be trimmed before use
 Key: HDFS-7684
 URL: https://issues.apache.org/jira/browse/HDFS-7684
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.1, 2.4.1
Reporter: Tianyin Xu


With the following setting,

<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>myhostname:50090 </value>
</property>

The secondary NameNode could not be started:

$ hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-secondarynamenode-xxx.out
/home/hadoop/hadoop-2.4.1/bin/hdfs
Exception in thread "main" java.lang.IllegalArgumentException: Does not contain a valid host:port authority: myhostname:50090
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:196)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.getHttpAddress(SecondaryNameNode.java:203)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:214)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:651)


We were really confused and misled by the log message: we suspected DNS 
problems (switched to the IP address, with no success) and network problems 
(tested the connections, also with no success...)

It turned out that the setting is not trimmed, and the extra space character 
at the end of the value caused the problem... OMG!!!...
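
A minimal sketch of the kind of fix being requested, assuming the patch would 
simply read the value with {{Configuration#getTrimmed()}} so whitespace never 
reaches {{NetUtils.createSocketAddr()}} (this is not the committed change; the 
class name below is invented):

{code:java}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetUtils;

public class TrimmedHttpAddress {
  // Sketch only: SecondaryNameNode#getHttpAddress would do the equivalent.
  static InetSocketAddress getHttpAddress(Configuration conf) {
    String addr = conf.getTrimmed(
        "dfs.namenode.secondary.http-address", "0.0.0.0:50090");
    // Leading/trailing whitespace is already stripped, so a trailing space
    // in hdfs-site.xml can no longer trigger the host:port authority error.
    return NetUtils.createSocketAddr(addr);
  }
}
{code}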

Searching on the Internet, we found we are really not alone. Many users have 
encountered similar trim problems! The following lists a few:
http://solaimurugan.blogspot.com/2013/10/hadoop-multi-node-cluster-configuration.html
http://stackoverflow.com/questions/11263664/error-while-starting-the-hadoop-using-strat-all-sh
https://issues.apache.org/jira/browse/HDFS-2799
https://issues.apache.org/jira/browse/HBASE-6973




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6903) Crc32 checksum errors in Big-Endian Architecture

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-6903.
--
Resolution: Duplicate

> Crc32 checksum errors in Big-Endian Architecture
> 
>
> Key: HDFS-6903
> URL: https://issues.apache.org/jira/browse/HDFS-6903
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.4.1, 2.6.0
> Environment: PowerPC RHEL 7 & 6.5 ( ppc64 - Big-Endian )
>Reporter: Ayappan
>Priority: Blocker
>
> Native Crc32 checksum calculation is not handled on big-endian 
> architectures. In this case, the platform is ppc64. Because of this, several 
> test cases in the HDFS module fail.
> Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
> Tests run: 3, Failures: 0, Errors: 2, Skipped: 1, Time elapsed: 13.274 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestAppendDifferentChecksum
> testAlgoSwitchRandomized(org.apache.hadoop.hdfs.TestAppendDifferentChecksum)  Time elapsed: 7.141 sec  <<< ERROR!
> java.io.IOException: p=/testAlgoSwitchRandomized, length=28691, i=12288
> at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
> at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
> at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
> at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:202)
> at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:137)
> at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:682)
> at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:738)
> at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:795)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:836)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
> at java.io.FilterInputStream.read(FilterInputStream.java:83)
> at org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:129)
> at org.apache.hadoop.hdfs.TestAppendDifferentChecksum.testAlgoSwitchRandomized(TestAppendDifferentChecksum.java:130)
> testSwitchAlgorithms(org.apache.hadoop.hdfs.TestAppendDifferentChecksum)  Time elapsed: 1.394 sec  <<< ERROR!
> java.io.IOException: p=/testSwitchAlgorithms, length=3000, i=0
> at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
> at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
> at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
> at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:202)
> at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:137)
> at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:682)
> at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:738)
> at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:795)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:836)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
> at java.io.FilterInputStream.read(FilterInputStream.java:83)
> at org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:129)
> at org.apache.hadoop.hdfs.TestAppendDifferentChecksum.testSwitchAlgorithms(TestAppendDifferentChecksum.java:94)
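
The quoted report boils down to byte order: the same four checksum bytes 
decode to different integers depending on endianness, so a correct CRC can 
"fail" verification on ppc64. A small self-contained illustration, not Hadoop 
code (the byte values are made up):

{code:java}
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
  public static void main(String[] args) {
    byte[] storedCrc = {0x12, 0x34, 0x56, 0x78}; // four bytes of a stored CRC
    int big = ByteBuffer.wrap(storedCrc).order(ByteOrder.BIG_ENDIAN).getInt();
    int little = ByteBuffer.wrap(storedCrc).order(ByteOrder.LITTLE_ENDIAN).getInt();
    // Same bytes, different values: 0x12345678 vs 0x78563412. Native code
    // that assumes host byte order gets the wrong value on one of the two.
    System.out.printf("big-endian=%08x little-endian=%08x%n", big, little);
  }
}
{code}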



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7685) Document function of dfs.namenode.heartbeat.recheck-interval on hdfs-site.xml

2015-01-27 Thread Frank Lanitz (JIRA)
Frank Lanitz created HDFS-7685:
--

 Summary: Document function of 
dfs.namenode.heartbeat.recheck-interval on hdfs-site.xml
 Key: HDFS-7685
 URL: https://issues.apache.org/jira/browse/HDFS-7685
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Frank Lanitz
Priority: Trivial


Please document the function of {{dfs.namenode.heartbeat.recheck-interval}} 
inside hdfs-site.xml, as it can be used to configure the time until a node is 
considered dead.
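
For context, a sketch of how this property feeds into the dead-node timeout, 
based on DatanodeManager's computation as we understand it (defaults shown; 
treat the exact formula as an assumption to verify against the source):

{code:java}
public class DeadNodeInterval {
  public static void main(String[] args) {
    long heartbeatIntervalMs = 3L * 1000;    // dfs.heartbeat.interval default: 3 s
    long recheckIntervalMs = 5L * 60 * 1000; // dfs.namenode.heartbeat.recheck-interval default: 300000 ms
    // Assumed formula: 2 * recheck-interval + 10 * heartbeat interval.
    long deadNodeIntervalMs = 2 * recheckIntervalMs + 10 * heartbeatIntervalMs;
    System.out.println(deadNodeIntervalMs / 1000 + " s"); // 630 s = 10.5 min
  }
}
{code}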



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #83

2015-01-27 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/83/
Changes:

[kihwal]  HDFS-7224. Allow reuse of NN connections via webhdfs. Contributed by 
Eric Payne

[jlowe] YARN-3088. LinuxContainerExecutor.deleteAsUser can throw NPE if native 
executor returns an error. Contributed by Eric Payne

[jlowe] MAPREDUCE-6141. History server leveldb recovery store. Contributed by 
Jason Lowe

[jlowe] HADOOP-11499. Check of executorThreadsStarted in 
ValueQueue#submitRefillTask() evades lock acquisition. Contributed by Ted Yu

[cmccabe] HADOOP-11466: move to 2.6.1

[stevel] HADOOP-6221 RPC Client operations cannot be interrupted (stevel)

[xgong] HADOOP-11509. Change parsing sequence in GenericOptionsParser to parse

[jianhe] YARN-3092. Created a common ResourceUsage class to track labeled 
resource usages in Capacity Scheduler. Contributed by Wangda Tan

[stevel] HDFS-49. MiniDFSCluster.stopDataNode will always shut down a node in 
the cluster if a matching name is not found. (stevel)

--
[...truncated 6681 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.21 sec - in 
org.apache.hadoop.hdfs.TestLeaseRenewer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.93 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.225 sec - in 
org.apache.hadoop.hdfs.TestFileAppend4
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.146 sec - in 
org.apache.hadoop.hdfs.TestParallelRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClose
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.138 sec - in 
org.apache.hadoop.hdfs.TestClose
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.481 sec - in 
org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.615 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.273 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.752 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.454 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.181 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.238 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.674 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.061 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Java HotSpot(TM) 64-Bit 

Hadoop-Hdfs-trunk-Java8 - Build # 83 - Failure

2015-01-27 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/83/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6874 lines...]
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:46 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  1.651 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:46 h
[INFO] Finished at: 2015-01-27T14:20:49+00:00
[INFO] Final Memory: 53M/259M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7224
Updating YARN-3088
Updating HADOOP-11466
Updating MAPREDUCE-6141
Updating HADOOP-11499
Updating YARN-3092
Updating HDFS-49
Updating HADOOP-6221
Updating HADOOP-11509
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
5 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:628)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1271)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:359)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1215)
 at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1679)
 at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:819)
 at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1758)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1809)
 at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1789)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:494)
 at org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:427)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAcce

Jenkins build is back to normal : Hadoop-Hdfs-trunk #2018

2015-01-27 Thread Apache Jenkins Server
See 



[jira] [Created] (HDFS-7686) "Corrupt block reporting to namenode soon" feature is overwritten by HDFS-7430

2015-01-27 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-7686:


 Summary: "Corrupt block reporting to namenode soon" feature is 
overwritten  by HDFS-7430
 Key: HDFS-7686
 URL: https://issues.apache.org/jira/browse/HDFS-7686
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Rushabh S Shah


The feature implemented in HDFS-7548 is removed by HDFS-7430.






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7687) Change fsck to support EC files

2015-01-27 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-7687:
-

 Summary: Change fsck to support EC files
 Key: HDFS-7687
 URL: https://issues.apache.org/jira/browse/HDFS-7687
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


We need to change fsck so that it can detect "under replicated" and corrupted 
EC files.
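
For reference, a typical invocation whose report this change would make 
EC-aware (these flags already exist in {{hdfs fsck}}; the path is 
illustrative, and the EC-specific output is the new part):

{code}
$ hdfs fsck /ecdir -files -blocks -locations
{code}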



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7577) Add additional headers that includes need by Windows

2015-01-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-7577.

   Resolution: Fixed
Fix Version/s: HDFS-6994

+1.  Committed, thanks!

> Add additional headers that includes need by Windows
> 
>
> Key: HDFS-7577
> URL: https://issues.apache.org/jira/browse/HDFS-7577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Thanh Do
>Assignee: Thanh Do
> Fix For: HDFS-6994
>
> Attachments: HDFS-7577-branch-HDFS-6994-0.patch, 
> HDFS-7577-branch-HDFS-6994-1.patch, HDFS-7577-branch-HDFS-6994-2.patch
>
>
> This jira involves adding a list of (mostly dummy) headers that are 
> available in POSIX systems but not in Windows. It is one step towards making 
> libhdfs3 build on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7688) Client side api/config changes to support online encoding

2015-01-27 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-7688:
---

 Summary: Client side api/config changes to support online encoding
 Key: HDFS-7688
 URL: https://issues.apache.org/jira/browse/HDFS-7688
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: dfsclient
Reporter: Vinayakumar B
Assignee: Vinayakumar B


This Jira targets the client-side API and configuration changes needed to 
support erasure encoding with striped blocks directly from the client side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7689) Add periodic checker to find the corrupted EC blocks/files

2015-01-27 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-7689:
---

 Summary: Add periodic checker to find the corrupted EC blocks/files
 Key: HDFS-7689
 URL: https://issues.apache.org/jira/browse/HDFS-7689
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B


Add a periodic checker, similar to *ReplicationMonitor*, that monitors EC 
files/blocks for corruption or missing blocks and schedules them for 
recovery/correction.
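
Since this is only a placeholder, here is a purely hypothetical sketch of the 
shape such a daemon could take, mirroring how *ReplicationMonitor* loops on a 
fixed interval (all names below are invented, not from any patch):

{code:java}
public class ECBlockMonitor implements Runnable {
  private final long recheckIntervalMs = 3000; // analogous to ReplicationMonitor's interval
  private volatile boolean running = true;

  @Override
  public void run() {
    while (running) {
      try {
        scanAndQueueDamagedBlockGroups();
        Thread.sleep(recheckIntervalMs);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        return;
      }
    }
  }

  void stopMonitor() { running = false; }

  private void scanAndQueueDamagedBlockGroups() {
    // Walk EC block groups; queue corrupt/missing ones for recovery work.
  }
}
{code}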



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7690) Avoid Block movement in Balancer and Mover for the erasure encoded blocks

2015-01-27 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-7690:
---

 Summary: Avoid Block movement in Balancer and Mover for the 
erasure encoded blocks
 Key: HDFS-7690
 URL: https://issues.apache.org/jira/browse/HDFS-7690
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B


As the striped design doc says, it would be more fault tolerant if the striped 
blocks reside on different nodes in different racks. But the Balancer and 
Mover may break this by moving the encoded blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7691) Handle hflush and hsync in the best optimal way possible during online Erasure encoding

2015-01-27 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-7691:
---

 Summary: Handle hflush and hsync in the best optimal way possible 
during online Erasure encoding
 Key: HDFS-7691
 URL: https://issues.apache.org/jira/browse/HDFS-7691
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B


As mentioned in the design doc, hsync and hflush tend to make online erasure 
encoding complex.
But these are critical features for ensuring fault tolerance for some users.
These operations should be supported in the best way possible during online 
erasure encoding to preserve that fault tolerance.

This Jira is a placeholder for the task. How to solve this will be discussed 
later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)