[jira] [Created] (HDFS-8586) Dead Datanode is allocated for write when client is from deadnode

2015-06-12 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-8586:
--

 Summary: Dead Datanode is allocated for write when the client is from the 
dead node
 Key: HDFS-8586
 URL: https://issues.apache.org/jira/browse/HDFS-8586
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula



 *{color:blue}DataNode marked as Dead{color}* 
2015-06-11 19:39:00,862 | INFO  | 
org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@28ec166e 
| BLOCK*  *removeDeadDatanode: lost heartbeat from XX.XX.39.33:25009*  | 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.removeDeadDatanode(DatanodeManager.java:584)
2015-06-11 19:39:00,863 | INFO  | 
org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@28ec166e 
| Removing a node: /default/rack3/XX.XX.39.33:25009 | 
org.apache.hadoop.net.NetworkTopology.remove(NetworkTopology.java:488)

  *{color:blue}Deadnode got Allocated{color}* 
2015-06-11 19:39:45,148 | WARN  | IPC Server handler 26 on 25000 | The cluster 
does not contain node: /default/rack3/XX.XX.39.33:25009 | 
org.apache.hadoop.net.NetworkTopology.getDistance(NetworkTopology.java:616)
2015-06-11 19:39:45,149 | WARN  | IPC Server handler 26 on 25000 | The cluster 
does not contain node: /default/rack3/XX.XX.39.33:25009 | 
org.apache.hadoop.net.NetworkTopology.getDistance(NetworkTopology.java:616)
2015-06-11 19:39:45,149 | WARN  | IPC Server handler 26 on 25000 | The cluster 
does not contain node: /default/rack3/XX.XX.39.33:25009 | 
org.apache.hadoop.net.NetworkTopology.getDistance(NetworkTopology.java:616)
2015-06-11 19:39:45,149 | WARN  | IPC Server handler 26 on 25000 | The cluster 
does not contain node: /default/rack3/XX.XX.39.33:25009 | 
org.apache.hadoop.net.NetworkTopology.getDistance(NetworkTopology.java:616)
2015-06-11 19:39:45,149 | INFO  | IPC Server handler 26 on 25000 | BLOCK* 
allocate blk_1073754030_13252{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-e8d29773-dfc2-4224-b1d6-9b0588bca55e: 
*NORMAL:XX.XX.39.33:25009|RBW*], 
ReplicaUC[[DISK]DS-f7d2ab3c-88f7-470c-9097-84387c0bec83:NORMAL:XX.XX.38.32:25009|RBW],
 
ReplicaUC[[DISK]DS-8c2a464a-ac81-4651-890a-dbfd07ddd95f:NORMAL:XX.XX.38.33:25009|RBW]]}
 for /t1._COPYING_ | 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3657)
2015-06-11 19:39:45,191 | INFO  | IPC Server handler 35 on 25000 | BLOCK* 
allocate blk_1073754031_13253{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-ed8ad579-50c0-4e3e-8780-9776531763b6:NORMAL:XX.XX.39.31:25009|RBW],
 
ReplicaUC[[DISK]DS-19ddd6da-4a3e-481a-8445-dde5c90aaff3:NORMAL:XX.XX.37.32:25009|RBW],
 ReplicaUC[[DISK]DS-4ce4ce39-4973-42ce-8c7d-cb41f899db85:NORMAL:XX.XX.37.33:25009|RBW]]} 
for /t1._COPYING_ | 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3657)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8587) Erasure Coding: fix the copy constructor of BlockInfoStriped and BlockInfoContiguous

2015-06-12 Thread Yi Liu (JIRA)
Yi Liu created HDFS-8587:


 Summary: Erasure Coding: fix the copy constructor of 
BlockInfoStriped and BlockInfoContiguous
 Key: HDFS-8587
 URL: https://issues.apache.org/jira/browse/HDFS-8587
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Yi Liu


{code}
BlockInfoStriped(BlockInfoStriped b) {
  this(b, b.getSchema());
  this.setBlockCollection(b.getBlockCollection());
}
{code}

{code}
protected BlockInfoContiguous(BlockInfoContiguous from) {
  this(from, from.getBlockCollection().getPreferredBlockReplication());
  this.triplets = new Object[from.triplets.length];
  this.setBlockCollection(from.getBlockCollection());
}
{code}

We should define a copy constructor in {{BlockInfo}} and call it from 
these two subclasses. I also noticed a NullPointerException failure of 
{{TestBlockInfo.testCopyConstructor}} on the latest branch, which is related to this.
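
A rough sketch of the direction this suggests (simplified stand-in fields and 
types, not the committed patch):

{code}
// Sketch only: hoist the shared copy logic into the abstract parent so both
// subclasses just delegate to it.
abstract class BlockInfo {
  protected Object[] triplets;      // storage/prev/next triplets (simplified)
  private Object blockCollection;   // may legitimately be null

  protected BlockInfo(BlockInfo from) {
    this.triplets = new Object[from.triplets.length];
    // Copy the reference instead of dereferencing it; calling
    // from.getBlockCollection().getPreferredBlockReplication() is what
    // currently throws NullPointerException when the collection is null.
    this.blockCollection = from.blockCollection;
  }
}

class BlockInfoContiguous extends BlockInfo {
  protected BlockInfoContiguous(BlockInfoContiguous from) {
    super(from);
  }
}
{code}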





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2154 - Failure

2015-06-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2154/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6829 lines...]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [ 45.948 s]
[INFO] Apache Hadoop HDFS  FAILURE [  02:40 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.054 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:41 h
[INFO] Finished at: 2015-06-12T14:16:05+00:00
[INFO] Final Memory: 55M/694M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2153
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362707 bytes
Compression is 0.0%
Took 6.4 sec
Recording test results
Updating YARN-3794
Updating HDFS-8566
Updating HDFS-8573
Updating HDFS-8583
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateFailure

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good 
datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:49533,DS-b6278782-d006-4ad9-9b27-e5358984f3f5,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:35413,DS-663663a0-79b0-4303-a08e-03be179daab0,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:49533,DS-b6278782-d006-4ad9-9b27-e5358984f3f5,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:35413,DS-663663a0-79b0-4303-a08e-03be179daab0,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:49533,DS-b6278782-d006-4ad9-9b27-e5358984f3f5,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:35413,DS-663663a0-79b0-4303-a08e-03be179daab0,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:49533,DS-b6278782-d006-4ad9-9b27-e5358984f3f5,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:35413,DS-663663a0-79b0-4303-a08e-03be179daab0,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.
at 
org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1145)
at 
org.apache.ha
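
For reference, the policy named in that error message is a client-side setting; 
a minimal way to relax it from a client's own Configuration is sketched below 
(illustrative only; NEVER disables datanode replacement and is generally only 
appropriate for very small or single-rack test clusters):

  Configuration conf = new HdfsConfiguration();
  // Keep the feature enabled but never try to replace a failed datanode.
  conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
  conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");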

Build failed in Jenkins: Hadoop-Hdfs-trunk #2154

2015-06-12 Thread Apache Jenkins Server
See 

Changes:

[wang] HDFS-8573. Move creation of restartMeta file logic from BlockReceiver to 
ReplicaInPipeline. Contributed by Eddy Xu.

[cmccabe] HDFS-8566. HDFS documentation about debug commands wrongly identifies 
them as "hdfs dfs" commands (Surendra Singh Lilhore via Colin P. McCabe)

[arp] HDFS-8583. Document that NFS gateway does not work with rpcbind on SLES 
11. (Arpit Agarwal)

[devaraj] YARN-3794. TestRMEmbeddedElector fails because of ambiguous LOG 
reference.

--
[...truncated 6636 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.297 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.103 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.678 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.345 sec - 
in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.851 sec - in 
org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.739 sec - in 
org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.007 sec - in 
org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.721 sec - in 
org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.77 sec - in 
org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.408 sec - 
in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.622 sec - 
in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.215 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.558 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.321 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.683 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.992 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.353 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.216 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.073 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.682 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.285 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.04 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.363 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDe

Jenkins build is back to normal : Hadoop-Hdfs-trunk-Java8 #215

2015-06-12 Thread Apache Jenkins Server
See 



[jira] [Resolved] (HDFS-8560) Document that DN max locked memory must be configured to use RAM disk

2015-06-12 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-8560.
-
  Resolution: Duplicate
Target Version/s:   (was: 2.8.0)

Included in the patch for HDFS-7164.

> Document that DN max locked memory must be configured to use RAM disk
> -
>
> Key: HDFS-8560
> URL: https://issues.apache.org/jira/browse/HDFS-8560
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> HDFS-6919 introduced the requirement that max locked memory must be 
> configured to use RAM disk storage via the LAZY_PERSIST storage policy.
> We need to document it.
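
For context, the two settings the documentation needs to tie together are the 
DN-side memory limit and the per-path storage policy; a minimal client-side 
illustration (sketch only, hypothetical path, not taken from the patch):

{code}
// Assumes dfs.datanode.max.locked.memory has been raised in the DN's
// hdfs-site.xml; without it, LAZY_PERSIST writes cannot be kept on RAM disk.
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
dfs.setStoragePolicy(new Path("/scratch"), "LAZY_PERSIST");
{code}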



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8588) DN should not support SPNEGO authenticator

2015-06-12 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-8588:


 Summary: DN should not support SPNEGO authenticator
 Key: HDFS-8588
 URL: https://issues.apache.org/jira/browse/HDFS-8588
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai


Currently {{HttpServer2}} initializes the SPNEGO authentication filter for all 
HttpServer instances. However, DNs are not supposed to initialize any SPNEGO 
authentication handler.

The class needs to be refactored to support this use case.
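
One shape such a refactor could take is a build-time switch that DN callers 
leave off; the flag below is purely illustrative and not the actual 
{{HttpServer2}} API change:

{code}
// Hypothetical sketch only: build the DN's HTTP server without wiring in the
// SPNEGO authentication filter.
HttpServer2 dnHttpServer = new HttpServer2.Builder()
    .setName("datanode")
    .addEndpoint(URI.create("http://0.0.0.0:0"))
    .setSecurityEnabled(false)   // illustrative flag: skip SPNEGO filter setup
    .build();
{code}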



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8487) Merge BlockInfo-related code changes from HDFS-7285 into trunk

2015-06-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8487.
-
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed

> Merge BlockInfo-related code changes from HDFS-7285 into trunk
> --
>
> Key: HDFS-8487
> URL: https://issues.apache.org/jira/browse/HDFS-8487
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
>
> Per offline discussion with [~andrew.wang], for easier and cleaner reviewing, 
> we should probably shrink the size of the consolidated HDFS-7285 patch by 
> merging some mechanical changes that are unrelated to EC-specific logic to 
> trunk first. Those include renaming, subclassing, interfaces, and so forth. 
> This umbrella JIRA specifically aims to merge code changes around 
> {{BlockInfo}} and {{BlockInfoContiguous}} back into trunk.
> The structure of the {{BlockInfo}}-related classes is shown below:
> {code}
>           BlockInfo (abstract)
>              /            \
>   BlockInfoStriped    BlockInfoContiguous
>           |                  |
>           |   BlockInfoUC    |
>           |   (interface)    |
>           |   /        \     |
>   BlockInfoStripedUC   BlockInfoContiguousUC
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8589) Fix unused imports in BPServiceActor and BlockReportLeaseManager

2015-06-12 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-8589:
--

 Summary: Fix unused imports in BPServiceActor and 
BlockReportLeaseManager
 Key: HDFS-8589
 URL: https://issues.apache.org/jira/browse/HDFS-8589
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Trivial


Fix unused imports in BPServiceActor and BlockReportLeaseManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8590) Suppress bad_cert SSLException and provide more information in the DN log

2015-06-12 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-8590:


 Summary: Suppress bad_cert SSLException and provide more 
information in the DN log
 Key: HDFS-8590
 URL: https://issues.apache.org/jira/browse/HDFS-8590
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai


The Netty server in the DN throws a long list of exceptions when the client does 
not trust the certificate of the server.

This jira proposes to suppress the exception and to print out the origin of the 
request to ease debugging.
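
A rough sketch of the proposed behaviour (an illustrative Netty handler, not 
the actual patch):

{code}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import javax.net.ssl.SSLException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: collapse the handshake failure into a single WARN line
// that names the remote peer, instead of logging the full SSLException stack.
class SslErrorLoggingHandler extends ChannelInboundHandlerAdapter {
  private static final Logger LOG =
      LoggerFactory.getLogger(SslErrorLoggingHandler.class);

  @Override
  public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    if (cause instanceof SSLException) {
      LOG.warn("SSL handshake failed with {}: {}",
          ctx.channel().remoteAddress(), cause.getMessage());
    } else {
      LOG.error("Unexpected exception from " + ctx.channel().remoteAddress(),
          cause);
    }
    ctx.close();
  }
}
{code}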



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8591) Remove support for deprecated configuration key dfs.namenode.decommission.nodes.per.interval

2015-06-12 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-8591:
-

 Summary: Remove support for deprecated configuration key 
dfs.namenode.decommission.nodes.per.interval
 Key: HDFS-8591
 URL: https://issues.apache.org/jira/browse/HDFS-8591
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-8591.001.patch

dfs.namenode.decommission.nodes.per.interval is deprecated in branch-2 and can 
be removed in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8592) SafeModeException never gets unwrapped

2015-06-12 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-8592:


 Summary: SafeModeException never gets unwrapped
 Key: HDFS-8592
 URL: https://issues.apache.org/jira/browse/HDFS-8592
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


{{RemoteException#unwrapRemoteException}} fails to instantiate 
{{SafeModeException}} because {{SafeModeException}} does not have the 
single-{{String}}-argument constructor that the unwrap logic looks up reflectively.
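
A minimal illustration of the kind of constructor the unwrap path needs 
(sketch, not the committed patch):

{code}
import java.io.IOException;

// Illustration only: the single-String constructor is what
// RemoteException#unwrapRemoteException looks up via reflection.
public class SafeModeException extends IOException {
  public SafeModeException(String msg) {
    super(msg);
  }
}
{code}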



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8593) Calculation of effective layout version mishandles comparison to current layout version in storage.

2015-06-12 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-8593:
---

 Summary: Calculation of effective layout version mishandles 
comparison to current layout version in storage.
 Key: HDFS-8593
 URL: https://issues.apache.org/jira/browse/HDFS-8593
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Chris Nauroth
Assignee: Chris Nauroth


HDFS-8432 introduced the concept of a minimum compatible layout version so that 
downgrade is applicable in a wider set of circumstances.  This includes logic 
for determining if the current layout version in storage is within the bounds 
of the minimum compatible layout version.  There is an inverted comparison in 
this logic, which can result in an incorrect calculation.
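
Since NameNode layout versions are negative integers that decrease as features 
are added, the direction of such a comparison is easy to flip; a small 
illustration of the pitfall (not the actual NameNode code):

{code}
// Illustration only: "at least as new as" means numerically smaller or equal
// for negative layout versions, so the intuitive '>=' reads backwards.
static boolean isAtLeastAsNewAs(int layoutVersion, int reference) {
  return layoutVersion <= reference;    // correct: -64 is newer than -60
  // return layoutVersion >= reference; // inverted: would reject newer versions
}
{code}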



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)