Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/

[Aug 14, 2016 10:01:21 PM] (varunsaxena) YARN-5491. Fix random failure of




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.yarn.logaggregation.TestAggregatedLogFormat 
   hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestYarnClient 

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/diff-compile-javac-root.txt
  [172K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/134/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Created] (HDFS-10761) libhdfs++: Fix broken logic in HA retry policy

2016-08-15 Thread James Clampffer (JIRA)
James Clampffer created HDFS-10761:
--

 Summary: libhdfs++: Fix broken logic in HA retry policy
 Key: HDFS-10761
 URL: https://issues.apache.org/jira/browse/HDFS-10761
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


Two issues in the HA policy:

1) There's logic to guard against HDFS-8161, but it wasn't checking how many 
failovers had already happened.  It'd get stuck alternating between nodes 
forever.

2) Switched a ">" for a "<" so when max failovers was reached it'd keep trying 
anyway.
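Both issues reduce to one bounded-failover check. A minimal sketch (written in Java for illustration only; the real libhdfs++ retry policy is C++ and the names here are hypothetical):

```java
// Hypothetical sketch of the corrected failover decision; not the actual
// libhdfs++ classes or method names.
class HaRetryPolicy {
    private final int maxFailovers;

    HaRetryPolicy(int maxFailovers) {
        this.maxFailovers = maxFailovers;
    }

    /** Decide whether another failover attempt is allowed. */
    boolean shouldFailover(int failoversDone) {
        // Issue 1: the old logic never consulted failoversDone, so a client
        // could alternate between namenodes forever.
        // Issue 2: with the comparison inverted, reaching the limit did not
        // stop retries. A single bounded check addresses both.
        return failoversDone < maxFailovers;
    }
}
```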



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HDFS-10762) Pass IIP for file status related methods

2016-08-15 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-10762:
--

 Summary: Pass IIP for file status related methods
 Key: HDFS-10762
 URL: https://issues.apache.org/jira/browse/HDFS-10762
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: Daryn Sharp
Assignee: Daryn Sharp


The frequently called file status methods will not require path re-resolves if 
the IIP is passed down the call stack.  The code can be simplified further if 
the IIP tracks if the original path was a reserved raw path.
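A rough sketch of the idea, with hypothetical names (the real FSDirectory/INodesInPath API differs): resolve the string path once, then hand the resolved object to each status-related method so none of them re-resolves.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only; not the real INodesInPath or FSDirectory code.
class IipSketch {
    /** Stand-in for an IIP-style resolved path (hypothetical). */
    static class ResolvedPath {
        final long length;
        final boolean isReservedRaw;
        ResolvedPath(long length, boolean isReservedRaw) {
            this.length = length;
            this.isReservedRaw = isReservedRaw;
        }
    }

    private final Map<String, Long> namespace = new HashMap<>();
    int resolveCalls = 0;  // counts expensive path resolutions

    /** The single, expensive string-path resolution. */
    ResolvedPath resolve(String path) {
        resolveCalls++;
        return new ResolvedPath(namespace.get(path),
                                path.startsWith("/.reserved/raw"));
    }

    // Status-related methods take the resolved object, not the raw path,
    // so they never trigger another resolution.
    long getLen(ResolvedPath rp)   { return rp.length; }
    boolean isRaw(ResolvedPath rp) { return rp.isReservedRaw; }

    void put(String path, long len) { namespace.put(path, len); }
}
```

Tracking "was the original path reserved raw" on the resolved object is what lets later code drop its own path-string checks.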






[jira] [Created] (HDFS-10763) Open files can leak permanently due to inconsistent lease update

2016-08-15 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-10763:
-

 Summary: Open files can leak permanently due to inconsistent lease 
update
 Key: HDFS-10763
 URL: https://issues.apache.org/jira/browse/HDFS-10763
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.4, 2.7.3
Reporter: Kihwal Lee
Priority: Critical


This can happen during {{commitBlockSynchronization()}} or when a client gives 
up on closing a file after retries.
From {{finalizeINodeFileUnderConstruction()}}, the lease is removed first and 
then the inode is turned into the closed state. But if any block is not in the 
COMPLETE state, {{INodeFile#assertAllBlocksComplete()}} will throw an 
exception. This causes the lease to be removed from the lease manager, but not 
from the inode. Since the lease manager does not have a lease for the file, no 
lease recovery will happen for this file. Moreover, this broken state is 
persisted and reconstructed through saving and loading of the fsimage. Since 
no replication is scheduled for the file's blocks, this can cause data loss 
and can also block the decommissioning of datanodes.

The lease cannot be manually recovered either. It fails with
{noformat}
...AlreadyBeingCreatedException): Failed to RECOVER_LEASE /xyz/xyz for user1 on
 0.0.0.1 because the file is under construction but no leases found.
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2950)
...
{noformat}

When a client retries {{close()}}, the same inconsistent state is created, but 
it can succeed the next time since {{checkLease()}} only looks at the inode, 
not the lease manager in this case. The close behavior is different if 
HDFS-8999 is activated by setting 
{{dfs.namenode.file.close.num-committed-allowed}} to 1 (unlikely) or 2 (never). 

In principle, the under-construction feature of an inode and the lease in the 
lease manager should never go out of sync. The fix involves two parts.
1) Prevent inconsistent lease updates. We can achieve this by calling 
{{removeLease()}} after checking the block state. 
2) Avoid reconstructing inconsistent lease states from an fsimage. 1) alone 
does not correct existing inconsistencies that survive through fsimages. This 
can be done at fsimage loading time by making sure a corresponding lease 
exists for each inode that has the under-construction feature. 
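Fix part 1) is an ordering change: validate block state before touching the lease. A minimal sketch under hypothetical names (not the real FSNamesystem code):

```java
import java.util.List;
import java.util.Set;

// Illustrative sketch of the ordering fix; names are hypothetical.
class LeaseCloseSketch {
    enum BlockState { COMPLETE, COMMITTED }

    static void finalizeFile(List<BlockState> blocks,
                             Set<String> leases, String path) {
        // Check completeness first ...
        for (BlockState b : blocks) {
            if (b != BlockState.COMPLETE) {
                throw new IllegalStateException("block not COMPLETE for " + path);
            }
        }
        // ... and only then remove the lease. Before the fix, the removal ran
        // first, so the exception above left an inode under construction with
        // no lease in the lease manager to drive recovery.
        leases.remove(path);
    }
}
```

With this ordering, a failed close leaves both the lease and the inode's under-construction state intact, so they never diverge.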






Re: [Release thread] 2.6.5 release activities

2016-08-15 Thread Allen Wittenauer

> On Aug 12, 2016, at 8:19 AM, Junping Du  wrote:
> 
>  In this community, we are so aggressive about dropping Java 7 support in the 
> 3.0.x release. So why are we so conservative about continuing to release new 
> bits that support Java 6?

I don't view a group of people putting bug fixes into a micro release 
as particularly conservative.  If a group within the community wasn't 
interested in doing it, 2.6.5 wouldn't be happening.

But let's put the releases into context, because I think it tells a 
more interesting story.

* hadoop 2.6.x = EOLed JREs (6,7) 
* hadoop 2.7 -> hadoop 2.x = transitional (7,8)
* hadoop 3.x = JRE 8
* hadoop 4.x = JRE 9 

There are groups of people still using JDK6 and they want bug fixes in 
a maintenance release.  Boom, there's 2.6.x.

Hadoop 3.x has been pushed off for years for "reasons".  So we still 
have releases coming off of branch-2.  If 2.7 had been released as 3.x, this 
chart would look less weird. But it wasn't, thus 2.x has this weird wart in the 
middle that supports JDK7 and JDK8.  Given the public policy and roadmaps of 
at least one major vendor at the time of this writing, we should expect to see 
JDK7 support for at least the next two years after 3.x appears. Bang, there's 
2.x, where x is some large number.

Then there is the future.  People using JRE 8 want to use newer 
dependencies.  A reasonable request. Some of these dependency updates won't 
work with JRE 7.   We can't do that in hadoop 2.x in any sort of compatible way 
without breaking the universe. (Tons of JIRAs on this point.) This means we can 
only do it in 3.x (re: Hadoop Compatibility Guidelines).  Kapow, there's 3.x.

The log4j community has stated that v1 won't work with JDK9. In turn, 
this means we'll need to upgrade to v2 at some point.  Upgrading to v2 will 
break the log4j properties file (and maybe other things?). Another incompatible 
change and it likely won't appear until Apache Hadoop v4 unless someone takes 
the initiative to fix it before v3 hits store shelves.  This makes JDK9 the 
likely target for Apache Hadoop v4.  

Having major release cadences tied to JRE updates isn't necessarily a 
bad thing: it a) actually forces the community to stop beating around the bush 
on majors and b) actually makes it relatively easy to determine what the 
schedule looks like to some degree.








Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-15 Thread Vinod Kumar Vavilapalli
Thanks Marco. It was a Thursday late-night slip-up.

Fixed the dates and replaced the bits, so the voting can continue.

FYI, they aren’t binding though - as it all depends on how the release voting 
goes. One should usually only trust the release-date published on the website.

Thanks
+Vinod

> On Aug 13, 2016, at 1:35 PM, Marco Zühlke  > wrote:
> 
> Hi Vinod,
> 
> I'm not sure if this is relevant, but you changed the release date in the 
> CHANGES.txt files to 2016-09-19.
> I guess you have meant 2016-08-19.
> 
> See: 
> https://github.com/apache/hadoop/commit/5474c9e736d4c44a603a3f6749130b67cd4da52f
> 
> Thanks,
> Marco
> 
> 
> 
> 2016-08-12 18:45 GMT+02:00 Vinod Kumar Vavilapalli  >:
> Hi all,
> 
> I've created a release candidate RC1 for Apache Hadoop 2.7.3.
> 
> As discussed before, this is the next maintenance release to follow up 2.7.2.
> 
> The RC is available for validation at: 
> http://home.apache.org/~vinodkv/hadoop-2.7.3-RC1/ 
> 
> 
> The RC tag in git is: release-2.7.3-RC1
> 
> The maven artifacts are available via repository.apache.org at 
> https://repository.apache.org/content/repositories/orgapachehadoop-1045/ 
> 
> 
> The release-notes are inside the tar-balls at location 
> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
> this at home.apache.org/~vinodkv/hadoop-2.7.3-RC1/releasenotes.html for 
> your quick perusal.
> 
> As you may have noted,
>  - few issues with RC0 forced a RC1 [1]
>  - a very long fix-cycle for the License & Notice issues (HADOOP-12893) 
> caused 2.7.3 (along with every other Hadoop release) to slip by quite a bit. 
> This release's related discussion thread is linked below: [2].
> 
> Please try the release and vote; the vote will run for the usual 5 days.
> 
> Thanks,
> Vinod
> 
> [1] [VOTE] Release Apache Hadoop 2.7.3 RC0: 
> https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/index.html#26106 
> [2]: 2.7.3 release plan: 
> https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html 
> 



[jira] [Reopened] (HDFS-7933) fsck should also report decommissioning replicas.

2016-08-15 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang reopened HDFS-7933:
-

> fsck should also report decommissioning replicas. 
> --
>
> Key: HDFS-7933
> URL: https://issues.apache.org/jira/browse/HDFS-7933
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-7933-branch-2.7.00.patch, HDFS-7933.00.patch, 
> HDFS-7933.01.patch, HDFS-7933.02.patch, HDFS-7933.03.patch
>
>
> Fsck doesn't count replicas that are on decommissioning nodes. If a block has 
> all replicas on the decommissioning nodes, it will be marked as missing, 
> which is alarming for the admins, although the system will replicate them 
> before nodes are decommissioned.
> Fsck output should also show decommissioning replicas along with the live 
> replicas.






Re: [VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-15 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Built from source with native support
- Deployed a pseudo-distributed cluster
- Ran some sample jobs
Jason

  From: Vinod Kumar Vavilapalli 
 To: "common-...@hadoop.apache.org" ; 
hdfs-dev@hadoop.apache.org; yarn-...@hadoop.apache.org; 
"mapreduce-...@hadoop.apache.org"  
Cc: Vinod Kumar Vavilapalli 
 Sent: Friday, August 12, 2016 11:45 AM
 Subject: [VOTE] Release Apache Hadoop 2.7.3 RC1
   
Hi all,

I've created a release candidate RC1 for Apache Hadoop 2.7.3.

As discussed before, this is the next maintenance release to follow up 2.7.2.

The RC is available for validation at: 
http://home.apache.org/~vinodkv/hadoop-2.7.3-RC1/ 


The RC tag in git is: release-2.7.3-RC1

The maven artifacts are available via repository.apache.org at 
https://repository.apache.org/content/repositories/orgapachehadoop-1045/ 


The release-notes are inside the tar-balls at location 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at home.apache.org/~vinodkv/hadoop-2.7.3-RC1/releasenotes.html for your 
quick perusal.

As you may have noted,
 - few issues with RC0 forced a RC1 [1]
 - a very long fix-cycle for the License & Notice issues (HADOOP-12893) caused 
2.7.3 (along with every other Hadoop release) to slip by quite a bit. This 
release's related discussion thread is linked below: [2].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1] [VOTE] Release Apache Hadoop 2.7.3 RC0: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/index.html#26106 

[2]: 2.7.3 release plan: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html 


   

[jira] [Created] (HDFS-10764) Fix INodeFile#getBlocks to not return null

2016-08-15 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-10764:


 Summary: Fix INodeFile#getBlocks to not return null
 Key: HDFS-10764
 URL: https://issues.apache.org/jira/browse/HDFS-10764
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Not all callers of INodeFile#getBlocks check for null. e.g.

{code}
  public final QuotaCounts storagespaceConsumedContiguous(
  BlockStoragePolicy bsp) {
...
  // Collect all distinct blocks
  Set<BlockInfo> allBlocks = new HashSet<>(Arrays.asList(getBlocks()));
{code}

We can either fix each caller or alternatively fix getBlocks to never return 
null.
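The "never return null" option can be sketched as follows (illustrative class, not the real INodeFile): returning a shared empty array lets every caller, including the {{HashSet}} construction quoted above, iterate unconditionally.

```java
// Hedged sketch of the "getBlocks never returns null" fix; not the actual
// INodeFile implementation.
class BlocksSketch {
    private static final long[] EMPTY = new long[0];
    private long[] blocks;  // may legitimately be unset

    long[] getBlocks() {
        // A shared immutable-by-convention empty array spares callers a
        // null check and allocates nothing per call.
        return blocks == null ? EMPTY : blocks;
    }

    void setBlocks(long[] b) { blocks = b; }
}
```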






[jira] [Resolved] (HDFS-10736) Format disk balance command's output info

2016-08-15 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu resolved HDFS-10736.
---
Resolution: Duplicate

> Format disk balance command's output info
> -
>
> Key: HDFS-10736
> URL: https://issues.apache.org/jira/browse/HDFS-10736
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: diskbalancer, hdfs
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>
> When users run the disk balancer command as below
> {quote}
> hdfs diskbalancer
> {quote}
> it doesn't print detailed information about the options.
> Also when users run the disk balancer command in a wrong way, the output info 
> is not consistent with other commands.






[jira] [Resolved] (HDFS-10677) Über-jira: Enhancements to NNThroughputBenchmark tool

2016-08-15 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HDFS-10677.
--
   Resolution: Fixed
Fix Version/s: 2.8.0

> Über-jira: Enhancements to NNThroughputBenchmark tool
> -
>
> Key: HDFS-10677
> URL: https://issues.apache.org/jira/browse/HDFS-10677
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks, tools
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
>







[jira] [Created] (HDFS-10765) Log a message if finalize metadata hasn't been done, but customer kicked off HDFS Balancer

2016-08-15 Thread David Wang (JIRA)
David Wang created HDFS-10765:
-

 Summary: Log a message if finalize metadata hasn't been done, but 
customer kicked off HDFS Balancer
 Key: HDFS-10765
 URL: https://issues.apache.org/jira/browse/HDFS-10765
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover
Reporter: David Wang


-- The metadata finalization hasn't been done yet, but there is no obvious 
message to inform the user when the Balancer is started. Currently it just 
responds with -7. Below are the logs.
--
2016-06-21 13:57:26,130 INFO [main] balancer.Balancer 
(Balancer.java:logUtilizationCollection(362)) - 1 over-utilized: 
[10.10.120.32:50010:DISK] 
2016-06-21 13:57:26,131 INFO [main] balancer.Balancer 
(Balancer.java:logUtilizationCollection(362)) - 5 underutilized: 
[10.10.120.93:50010:DISK, 10.10.120.97:50010:DISK, 10.10.120.95:50010:DISK, 
10.10.120.96:50010:DISK, 10.10.120.94:50010:DISK] 
2016-06-21 13:57:26,133 INFO [main] balancer.Balancer 
(Balancer.java:runOneIteration(526)) - Need to move 1.09 TB to make the cluster 
balanced. 
Jun 21, 2016 1:57:26 PM 0 0 B 1.09 TB -1 B 
Jun 21, 2016 1:57:26 PM Balancing took 1.519 seconds 

-- Copied the comments here.
// Should not run the balancer during an unfinalized upgrade, since moved
// blocks are not deleted on the source datanode.

http://opengrok.sjc.cloudera.com:8080/source/xref/CDH-5.5.1/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java#530
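The requested behavior can be sketched roughly as below (hypothetical names, not the real Balancer code; the meaning of the -7 return code is assumed from the log excerpt above):

```java
// Illustrative sketch only: surface an explicit log message instead of a
// bare -7 result when the upgrade is not finalized.
class BalancerCheckSketch {
    static final int UNFINALIZED_EXIT = -7;  // assumed meaning of "-7" above

    static int run(boolean upgradeFinalized) {
        if (!upgradeFinalized) {
            // Should not run the balancer during an unfinalized upgrade,
            // since moved blocks are not deleted on the source datanode.
            System.err.println("Balancer refused: cluster has an unfinalized "
                + "upgrade; moved blocks would not be deleted on the source "
                + "datanode. Finalize the upgrade first.");
            return UNFINALIZED_EXIT;
        }
        return 0;  // proceed with balancing
    }
}
```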






[jira] [Resolved] (HDFS-10765) Log a message if finalize metadata hasn't been done, but customer kicked off HDFS Balancer

2016-08-15 Thread David Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Wang resolved HDFS-10765.
---
Resolution: Fixed

> Log a message if finalize metadata hasn't been done, but customer kicked off 
> HDFS Balancer
> ---
>
> Key: HDFS-10765
> URL: https://issues.apache.org/jira/browse/HDFS-10765
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: David Wang
>
> -- The metadata finalization hasn't been done yet, but there is no obvious 
> message to inform the user when the Balancer is started. Currently it just 
> responds with -7. Below are the logs.
> --
> 2016-06-21 13:57:26,130 INFO [main] balancer.Balancer 
> (Balancer.java:logUtilizationCollection(362)) - 1 over-utilized: 
> [10.10.120.32:50010:DISK] 
> 2016-06-21 13:57:26,131 INFO [main] balancer.Balancer 
> (Balancer.java:logUtilizationCollection(362)) - 5 underutilized: 
> [10.10.120.93:50010:DISK, 10.10.120.97:50010:DISK, 10.10.120.95:50010:DISK, 
> 10.10.120.96:50010:DISK, 10.10.120.94:50010:DISK] 
> 2016-06-21 13:57:26,133 INFO [main] balancer.Balancer 
> (Balancer.java:runOneIteration(526)) - Need to move 1.09 TB to make the 
> cluster balanced. 
> Jun 21, 2016 1:57:26 PM 0 0 B 1.09 TB -1 B 
> Jun 21, 2016 1:57:26 PM Balancing took 1.519 seconds 
> -- Copied the comments here.
> // Should not run the balancer during an unfinalized upgrade, since moved
> // blocks are not deleted on the source datanode.
> http://opengrok.sjc.cloudera.com:8080/source/xref/CDH-5.5.1/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java#530


