Re: Updated 2.8.0-SNAPSHOT artifact

2016-10-25 Thread Akira Ajisaka

It's almost a year since branch-2.8 was cut.
I think we need to release 2.8.0 ASAP.

According to the following list, there are 5 blocker and 6 critical issues.
https://issues.apache.org/jira/issues/?filter=12334985

Regards,
Akira

On 10/18/16 10:47, Brahma Reddy Battula wrote:

Hi Vinod,

Any plan for the first RC for branch-2.8? It has been a long time.




--Brahma Reddy Battula

-----Original Message-----
From: Vinod Kumar Vavilapalli [mailto:vino...@apache.org]
Sent: 20 August 2016 00:56
To: Jonathan Eagles
Cc: common-...@hadoop.apache.org
Subject: Re: Updated 2.8.0-SNAPSHOT artifact

Jon,

That is around the time when I branched 2.8, so I guess you were getting 
SNAPSHOT artifacts till then from the branch-2 nightly builds.

If you need it, we can set up SNAPSHOT builds. Or just wait for the first RC, 
which is around the corner.

+Vinod


On Jul 28, 2016, at 4:27 PM, Jonathan Eagles  wrote:

The latest snapshot was uploaded in Nov 2015, but check-ins are still coming
in quite frequently.
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-yarn-api/

Are there any plans to start producing updated SNAPSHOT artifacts for
current hadoop development lines?






[jira] [Created] (HDFS-11049) The description of dfs.block.replicator.classname is not clear

2016-10-25 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-11049:


 Summary: The description of dfs.block.replicator.classname is not clear
 Key: HDFS-11049
 URL: https://issues.apache.org/jira/browse/HDFS-11049
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0-alpha1
Reporter: Yiqun Lin
Assignee: Yiqun Lin
Priority: Minor


The current description of the property {{dfs.block.replicator.classname}} is
not clear:
{code}
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault</value>
  <description>
    Class representing block placement policy for non-striped files.
  </description>
</property>
{code}
It is too brief: users cannot learn what other block placement policies are
available or how to use them. For example, HDFS-7541 introduced a new policy,
{{BlockPlacementPolicyWithUpgradeDomain}}, which is useful in rolling upgrade
operations.

This JIRA focuses on improving the description in {{hdfs-default.xml}}.
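As a sketch only, an expanded entry could look like the following (the wording
is an illustration, not the committed text):
{code}
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault</value>
  <description>
    Class representing block placement policy for non-striped files.
    Other policies are available, e.g.
    org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithUpgradeDomain
    (HDFS-7541), which places replicas across upgrade domains and is useful
    during rolling upgrades.
  </description>
</property>
{code}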






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-10-25 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/205/

[Oct 24, 2016 1:28:44 PM] (kihwal) HDFS-11042. Add missing cleanupSSLConfig() 
call for tests that use
[Oct 24, 2016 7:46:54 PM] (cdouglas) HADOOP-13626. Remove distcp dependency on 
FileStatus serialization
[Oct 24, 2016 10:14:51 PM] (kihwal) HDFS-10997. Reduce number of path resolving 
methods. Contributed by
[Oct 25, 2016 1:07:43 AM] (aajisaka) HDFS-11046. Duplicate '-' in the daemon 
log name.
[Oct 25, 2016 1:59:51 AM] (subru) YARN-5711. Propogate exceptions back to 
client when using hedging RM
[Oct 25, 2016 4:22:34 AM] (cnauroth) HADOOP-13727. S3A: Reduce high number of 
connections to EC2 Instance
[Oct 25, 2016 4:54:06 AM] (cnauroth) HADOOP-12774. s3a should use 
UGI.getCurrentUser.getShortname() for
[Oct 25, 2016 5:19:23 AM] (kasha) YARN-5754. Null check missing for earliest in 
FifoPolicy. (Yufei Gu via
[Oct 25, 2016 6:55:55 AM] (aajisaka) HADOOP-13514. Upgrade maven surefire 
plugin to 2.19.1. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-kms 
   Exception is caught when Exception is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At KMS.java:is not 
thrown in org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At 
KMS.java:[line 169] 
   Exception is caught when Exception is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, 
String, int) At KMS.java:is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, 
String, int) At KMS.java:[line 501] 

Failed junit tests :

   hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.fs.azure.TestNativeAzureFileSystemMocked 
   hadoop.fs.azure.TestFileSystemOperationExceptionMessage 
   hadoop.fs.azure.TestBlobMetadata 
   hadoop.fs.azure.TestNativeAzureFileSystemBlockLocations 
   hadoop.fs.azure.TestOutOfBandAzureBlobOperations 
   hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck 
   hadoop.fs.azure.TestNativeAzureFileSystemLive 
   hadoop.fs.azure.TestFileSystemOperationExceptionHandling 
   hadoop.fs.azure.TestNativeAzureFSPageBlobLive 
   hadoop.fs.azure.TestWasbUriAndConfiguration 
   hadoop.fs.azure.TestFileSystemOperationsExceptionHandlingMultiThreaded 
   hadoop.fs.azure.TestWasbFsck 
   hadoop.fs.azure.TestNativeAzureFileSystemContractPageBlobLive 
   hadoop.fs.azure.TestNativeAzureFileSystemContractLive 
   hadoop.fs.azure.TestNativeAzureFileSystemConcurrency 
   hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem 
   hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked 
   hadoop.fs.azure.TestAzureConcurrentOutOfBandIo 
   hadoop.fs.azure.metrics.TestAzureFileSystemInstrumentation 
   hadoop.fs.azure.TestNativeAzureFileSystemAppend 
   hadoop.fs.azure.TestOutOfBandAzureBlobOperationsLive 
   hadoop.fs.azure.TestNativeAzureFileSystemAtomicRenameDirList 
   hadoop.fs.azure.TestFileSystemOperationsWithThreads 
   hadoop.fs.azure.TestBlobDataValidation 
   hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite 
   hadoop.fs.azure.TestBlobTypeSpeedDifference 
   hadoop.fs.azure.TestContainerChecks 
   hadoop.fs.azure.TestAzureFileSystemErrorConditions 
   hadoop.fs.azure.TestNativeAzureFileSystemClientLogging 
   hadoop.fs.azure.TestNativeAzureFileSystemContractMocked 
   hadoop.fs.azure.TestNativeAzureFileSystemContractEmulator 

Timed out junit tests :

   org.apache.hadoop.hdfs.TestModTime 
   org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead 
   org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager 
   org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicy 
   org.apache.hadoop.hdfs.TestRead 
   
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.TestDataStream 
   org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/205/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/h

[jira] [Created] (HDFS-11050) Change log level to 'warn' when ssl initialization fails and defaults to DEFAULT_TIMEOUT_CONN_CONFIGURATOR

2016-10-25 Thread Kuhu Shukla (JIRA)
Kuhu Shukla created HDFS-11050:
--

 Summary: Change log level to 'warn' when ssl initialization fails 
and defaults to DEFAULT_TIMEOUT_CONN_CONFIGURATOR
 Key: HDFS-11050
 URL: https://issues.apache.org/jira/browse/HDFS-11050
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kuhu Shukla
Assignee: Kuhu Shukla


In URLConnectionFactory, an SSL initialization failure is caught, logged at
debug level, and the factory silently falls back to the default connection
configurator. This JIRA changes the log level to warn so that such failures
are visible.
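For illustration, a hedged sketch of the intended change; the shape follows
{{URLConnectionFactory#newDefaultURLConnectionFactory}}, but the exact code
here is an assumption, not the patch:
{code}
// Sketch only: raise the fallback log from debug to warn so the silent
// switch to DEFAULT_TIMEOUT_CONN_CONFIGURATOR is visible to operators.
ConnectionConfigurator conn;
try {
  conn = newSslConnConfigurator(DEFAULT_SOCKET_TIMEOUT, conf);
} catch (Exception e) {
  LOG.warn("Cannot load customized ssl related configuration. "
      + "Falling back to system-generic settings.", e);
  conn = DEFAULT_TIMEOUT_CONN_CONFIGURATOR;
}
return new URLConnectionFactory(conn);
{code}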






[jira] [Created] (HDFS-11051) Test Balancer behavior when some block moves are slow

2016-10-25 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-11051:


 Summary: Test Balancer behavior when some block moves are slow
 Key: HDFS-11051
 URL: https://issues.apache.org/jira/browse/HDFS-11051
 Project: Hadoop HDFS
  Issue Type: Test
  Components: balancer & mover
Reporter: Zhe Zhang









[jira] [Created] (HDFS-11052) TestPersistBlocks#TestRestartDfsWithFlush flaky failure

2016-10-25 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-11052:


 Summary: TestPersistBlocks#TestRestartDfsWithFlush flaky failure
 Key: HDFS-11052
 URL: https://issues.apache.org/jira/browse/HDFS-11052
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaobing Zhou


In this [Jenkins
run|https://builds.apache.org/job/PreCommit-HDFS-Build/17253/testReport/org.apache.hadoop.hdfs/TestPersistBlocks/TestRestartDfsWithFlush],
the test fails with what appears to be a DNS issue while trying to read the
response from the connection.
{noformat}
java.io.EOFException: End of File Exception between local host is: "48dcc03d04a1/172.17.0.3"; destination host is: "localhost":39481; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:815)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:779)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1485)
    at org.apache.hadoop.ipc.Client.call(Client.java:1427)
    at org.apache.hadoop.ipc.Client.call(Client.java:1337)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy19.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:453)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
    at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:985)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1769)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1579)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:685)
Caused by: java.io.EOFException: null
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1784)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1155)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1052)
{noformat}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-10-25 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/135/

[Oct 24, 2016 1:28:44 PM] (kihwal) HDFS-11042. Add missing cleanupSSLConfig() 
call for tests that use
[Oct 24, 2016 7:46:54 PM] (cdouglas) HADOOP-13626. Remove distcp dependency on 
FileStatus serialization
[Oct 24, 2016 10:14:51 PM] (kihwal) HDFS-10997. Reduce number of path resolving 
methods. Contributed by
[Oct 25, 2016 1:07:43 AM] (aajisaka) HDFS-11046. Duplicate '-' in the daemon 
log name.
[Oct 25, 2016 1:59:51 AM] (subru) YARN-5711. Propogate exceptions back to 
client when using hedging RM
[Oct 25, 2016 4:22:34 AM] (cnauroth) HADOOP-13727. S3A: Reduce high number of 
connections to EC2 Instance
[Oct 25, 2016 4:54:06 AM] (cnauroth) HADOOP-12774. s3a should use 
UGI.getCurrentUser.getShortname() for
[Oct 25, 2016 5:19:23 AM] (kasha) YARN-5754. Null check missing for earliest in 
FifoPolicy. (Yufei Gu via
[Oct 25, 2016 6:55:55 AM] (aajisaka) HADOOP-13514. Upgrade maven surefire 
plugin to 2.19.1. Contributed by




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeLifeline 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys 
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters 
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.fs.azure.TestNativeAzureFileSystemAtomicRenameDirList 
   hadoop.fs.azure.TestFileSystemOperationsWithThreads 
   hadoop.fs.azure.TestFileSystemOperationExceptionHandling 
   hadoop.fs.azure.TestNativeAzureFileSystemContractEmulator 
   hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite 
   hadoop.fs.azure.TestNativeAzureFileSystemAppend 
   hadoop.fs.azure.TestNativeAzureFileSystemBlockLocations 
   hadoop.fs.azure.TestOutOfBandAzureBlobOperations 
   hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem 
   hadoop.fs.azure.TestBlobMetadata 
   hadoop.fs.azure.TestNativeAzureFileSystemMocked 
   hadoop.fs.azure.TestNativeAzureFileSystemContractLive 
   hadoop.fs.azure.TestFileSystemOperationExceptionMessage 
   hadoop.fs.azure.TestNativeAzureFileSystemContractPageBlobLive 
   hadoop.fs.azure.TestContainerChecks 
   hadoop.fs.azure.TestNativeAzureFileSystemClientLogging 
   hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck 
   hadoop.fs.azure.TestWasbFsck 
   hadoop.fs.azure.TestNativeAzureFSPageBlobLive 
   hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked 
   hadoop.fs.azure.TestNativeAzureFileSystemConcurrency 
   hadoop.fs.azure.TestBlobTypeSpeedDifference 
   hadoop.fs.azure.TestNativeAzureFileSystemContractMocked 
   hadoop.fs.azure.TestNativeAzureFileSystemLive 
   hadoop.fs.azure.TestBlobDataValidation 
   hadoop.fs.azure.TestAzureFileSystemErrorConditions 
   ha

[jira] [Resolved] (HDFS-10638) Modifications to remove the assumption that StorageLocation is associated with java.io.File in Datanode.

2016-10-25 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HDFS-10638.
--
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2

+1. Thanks for the good work!

I re-ran the failed test and it passed on my laptop, so I committed the patch
to trunk.

> Modifications to remove the assumption that StorageLocation is associated 
> with java.io.File in Datanode.
> 
>
> Key: HDFS-10638
> URL: https://issues.apache.org/jira/browse/HDFS-10638
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10638.001.patch, HDFS-10638.002.patch, 
> HDFS-10638.003.patch, HDFS-10638.004.patch, HDFS-10638.005.patch
>
>
> Changes to ensure that {{StorageLocation}} need not be associated with a 
> {{java.io.File}}. 






[jira] [Created] (HDFS-11053) Unnecessary superuser check in versionRequest()

2016-10-25 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-11053:
-

 Summary: Unnecessary superuser check in versionRequest()
 Key: HDFS-11053
 URL: https://issues.apache.org/jira/browse/HDFS-11053
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


The {{versionRequest()}} call does not return any sensitive information; it is
mainly used for sanity checks. The presence of {{checkSuperuserPrivilege()}}
forces users to run the datanode as an HDFS superuser.

In a secure setup, a keytab obtained from a compromised datanode could allow an
intruder to gain HDFS superuser privileges. We should allow datanodes to run as
a non-HDFS-superuser by removing {{checkSuperuserPrivilege()}} from
{{versionRequest()}}.
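A minimal sketch of the proposed change in {{NameNodeRpcServer}} (the
surrounding method body is abbreviated and partly assumed, not quoted from the
patch):
{code}
@Override // DatanodeProtocol
public NamespaceInfo versionRequest() throws IOException {
  checkNNStartup();
  // checkSuperuserPrivilege() removed: the call returns no sensitive
  // information, so datanodes need not run as an HDFS superuser.
  return namesystem.getNamespaceInfo();
}
{code}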






[jira] [Created] (HDFS-11054) Suppress verbose log message in BlockPlacementPolicyDefault

2016-10-25 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-11054:


 Summary: Suppress verbose log message in 
BlockPlacementPolicyDefault
 Key: HDFS-11054
 URL: https://issues.apache.org/jira/browse/HDFS-11054
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arpit Agarwal
Assignee: Chen Liang


We saw an instance where this log message was generated millions of times in a 
short time interval, slowing the NameNode to a crawl. It can probably be 
changed to log at DEBUG level instead.

{code}
  if (cur == null) {
    LOG.warn("No excess replica can be found. excessTypes: " + excessTypes +
        ". moreThanOne: " + moreThanOne + ". exactlyOne: " + exactlyOne + ".");
    break;
  }
{code}
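One possible fix, as a minimal sketch (not the committed patch): demote the
message to DEBUG and guard it so the string concatenation is skipped when
debug logging is off:
{code}
  if (cur == null) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("No excess replica can be found. excessTypes: " + excessTypes
          + ". moreThanOne: " + moreThanOne + ". exactlyOne: " + exactlyOne + ".");
    }
    break;
  }
{code}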






Re: Updated 2.8.0-SNAPSHOT artifact

2016-10-25 Thread Karthik Kambatla
Is there value in releasing current branch-2.8? Aren't we better off
re-cutting the branch off of branch-2?

On Tue, Oct 25, 2016 at 12:20 AM, Akira Ajisaka wrote:

> It's almost a year since branch-2.8 was cut.
> I think we need to release 2.8.0 ASAP.
>
> According to the following list, there are 5 blocker and 6 critical issues.
> https://issues.apache.org/jira/issues/?filter=12334985
>
> Regards,
> Akira
>


[jira] [Created] (HDFS-11055) Update log4j.properties for httpfs to improve test logging

2016-10-25 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-11055:
--

 Summary: Update log4j.properties for httpfs to improve test logging
 Key: HDFS-11055
 URL: https://issues.apache.org/jira/browse/HDFS-11055
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Affects Versions: 0.23.1
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


I am debugging an httpfs issue, but the existing log4j.properties does not
show execution logs in IDEs such as IntelliJ or in Jenkins, which makes
debugging impossible.

Filing this JIRA to improve that.
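As a rough sketch, a minimal console setup that would make test logs visible
(the appender name and pattern here are assumptions, not the eventual patch):
{code}
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
{code}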






[jira] [Created] (HDFS-11056) Concurrent append and read operations lead to checksum error

2016-10-25 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-11056:
--

 Summary: Concurrent append and read operations lead to checksum 
error
 Key: HDFS-11056
 URL: https://issues.apache.org/jira/browse/HDFS-11056
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, httpfs
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


If one client open-append-closes a file continuously while another
open-read-closes the same file continuously, the reader eventually gets a
checksum error in the data read.

On my local Mac, it takes a few minutes to produce the error. This happens to
httpfs clients, but there's no reason to believe it doesn't happen to other
append clients.

I have a unit test that demonstrates the checksum error. Will attach later.
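Until then, a hedged sketch of the reproduction pattern described above (file
name, loop counts, and error handling are illustrative assumptions, not the
attached test):
{code}
// Inside a test method that declares "throws Exception"; fs is a
// DistributedFileSystem from a MiniDFSCluster.
final Path file = new Path("/tmp/bar.txt");
Thread appender = new Thread(() -> {
  try {
    for (int i = 0; i < 1000; i++) {
      try (FSDataOutputStream out = fs.append(file)) {  // open-append-close
        out.writeBytes("data");
      }
    }
  } catch (IOException e) {
    throw new RuntimeException(e);
  }
});
Thread reader = new Thread(() -> {
  byte[] buf = new byte[4096];
  try {
    for (int i = 0; i < 1000; i++) {
      try (FSDataInputStream in = fs.open(file)) {      // open-read-close
        while (in.read(buf) > 0) { }  // checksum error surfaces here
      }
    }
  } catch (IOException e) {
    throw new RuntimeException(e);
  }
});
appender.start(); reader.start();
appender.join(); reader.join();
{code}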

Relevant log:
{quote}
2016-10-25 15:34:45,153 INFO  audit - allowed=true  ugi=weichiu
(auth:SIMPLE)   ip=/127.0.0.1   cmd=open   src=/tmp/bar.txt
dst=null   perm=null   proto=rpc
2016-10-25 15:34:45,155 INFO  DataNode - Receiving 
BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 src: 
/127.0.0.1:51130 dest: /127.0.0.1:50131
2016-10-25 15:34:45,155 INFO  FsDatasetImpl - Appending to FinalizedReplica, 
blk_1073741825_1182, FINALIZED
  getNumBytes() = 182
  getBytesOnDisk()  = 182
  getVisibleLength()= 182
  getVolume()   = 
/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
  getBlockURI() = 
file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-837130339-172.16.1.88-1477434851452/current/finalized/subdir0/subdir0/blk_1073741825
2016-10-25 15:34:45,167 INFO  DataNode - opReadBlock 
BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 received exception 
java.io.IOException: No data exists for block 
BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
2016-10-25 15:34:45,167 WARN  DataNode - DatanodeRegistration(127.0.0.1:50131, 
datanodeUuid=41c96335-5e4b-4950-ac22-3d21b353abb8, infoPort=50133, 
infoSecurePort=0, ipcPort=50134, 
storageInfo=lv=-57;cid=testClusterID;nsid=1472068852;c=1477434851452):Got 
exception while serving 
BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 to /127.0.0.1:51121
java.io.IOException: No data exists for block 
BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
at java.lang.Thread.run(Thread.java:745)
2016-10-25 15:34:45,168 INFO  FSNamesystem - 
updatePipeline(blk_1073741825_1182, newGS=1183, newLength=182, 
newNodes=[127.0.0.1:50131], client=DFSClient_NONMAPREDUCE_-1743096965_197)
2016-10-25 15:34:45,168 ERROR DataNode - 127.0.0.1:50131:DataXceiver error 
processing READ_BLOCK operation  src: /127.0.0.1:51121 dst: /127.0.0.1:50131
java.io.IOException: No data exists for block 
BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockInputStream(FsDatasetImpl.java:773)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:400)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
at java.lang.Thread.run(Thread.java:745)
2016-10-25 15:34:45,168 INFO  FSNamesystem - updatePipeline(blk_1073741825_1182 
=> blk_1073741825_1183) success
2016-10-25 15:34:45,170 WARN  DFSClient - Found Checksum error for 
BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 from 
DatanodeInfoWithStorage[127.0.0.1:50131,DS-a1878418-4f7f-4fc9-b3f7-d7ed780b5373,DISK]
 at 0
2016-10-25 15:34:45,170 WARN  DFSClient - No live nodes contain block 
BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 after checking nodes 
= 
[DatanodeInfoWithStorage[127.0.0.1:50131,DS-a1878418-4f7f-4fc9-b3f7-d7ed780b5373,DISK]],
 ignoredNodes = null
2016-10-25 15:34:45,170 INFO  DFSClient - Could not obtain 
BP-837130339-172.16.1.88-1477434851452:blk_1073741825_1182 from any node:  No 
live nodes contain current block Block locations:

Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-10-25 Thread Wei-Chiu Chuang
It seems many tests are failing after HADOOP-13514. The failed tests are 
reproducible locally.


> On Oct 25, 2016, at 11:53 AM, Apache Jenkins Server 
>  wrote:
> 
> For more details, see 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/135/

[jira] [Created] (HDFS-11057) Implement 'df' for View

2016-10-25 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11057:
-

 Summary: Implement 'df' for View
 Key: HDFS-11057
 URL: https://issues.apache.org/jira/browse/HDFS-11057
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Manoj Govindassamy









[jira] [Created] (HDFS-11058) Implement 'hadoop fs -df' command for ViewFileSystem

2016-10-25 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11058:
-

 Summary: Implement 'hadoop fs -df' command for ViewFileSystem   
 Key: HDFS-11058
 URL: https://issues.apache.org/jira/browse/HDFS-11058
 Project: Hadoop HDFS
  Issue Type: Task
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


The df command doesn't seem to work well with ViewFileSystem; it always
reports used data as 0. Here is the client mount table configuration I am
using against a federated cluster of 2 NameNodes and 2 DataNodes.

{code}
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://ClusterX/</value>
  </property>
  ...
  <property>
    <name>fs.default.name</name>
    <value>viewfs://ClusterX/</value>
  </property>
  ...
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./nn0</name>
    <value>hdfs://127.0.0.1:50001/</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./nn1</name>
    <value>hdfs://127.0.0.1:51001/</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./nn2</name>
    <value>hdfs://127.0.0.1:52001/nn2</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./nn3</name>
    <value>hdfs://127.0.0.1:52001/nn3</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterY.linkMergeSlash</name>
    <value>hdfs://127.0.0.1:50001/</value>
  </property>
</configuration>
{code}


The {{df}} command always reports Size/Available as 8.0 E and usage as 0 for
any federated cluster.

{noformat}
# hadoop fs -fs viewfs://ClusterX/ -df  /
Filesystem                         Size  Used            Available  Use%
viewfs://ClusterX/  9223372036854775807     0  9223372036854775807    0%

# hadoop fs -fs viewfs://ClusterX/ -df  -h /
Filesystem           Size  Used  Available  Use%
viewfs://ClusterX/  8.0 E     0      8.0 E    0%

# hadoop fs -fs viewfs://ClusterY/ -df  -h /
Filesystem           Size  Used  Available  Use%
viewfs://ClusterY/  8.0 E     0      8.0 E    0%
{noformat}

The {{du}} command, by contrast, works as expected with ViewFileSystem.

{noformat}
# hadoop fs -fs viewfs://ClusterY/ -du -h /
10.6 K  31.8 K  /build.log.16y
0   0   /user

# hadoop fs -fs viewfs://ClusterX/ -du -h /
10.6 K  31.8 K  /nn0
0   0   /nn1
20.2 K  35.8 K  /nn3
40.6 K  34.3 K  /nn4

{noformat}
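A hedged sketch of one way {{ViewFileSystem}} could aggregate {{FsStatus}}
across its mount points; this is an illustration under assumed internals, not
the actual patch:
{code}
@Override
public FsStatus getStatus(Path p) throws IOException {
  long capacity = 0, used = 0, remaining = 0;
  // Sum the status of every child file system behind the mount table.
  for (FileSystem target : getChildFileSystems()) {
    FsStatus status = target.getStatus();
    capacity += status.getCapacity();
    used += status.getUsed();
    remaining += status.getRemaining();
  }
  return new FsStatus(capacity, used, remaining);
}
{code}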









[jira] [Resolved] (HDFS-11057) Implement 'df' for View

2016-10-25 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy resolved HDFS-11057.
---
Resolution: Duplicate
  Assignee: Manoj Govindassamy

> Implement 'df' for View
> ---
>
> Key: HDFS-11057
> URL: https://issues.apache.org/jira/browse/HDFS-11057
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>







[jira] [Reopened] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer

2016-10-25 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou reopened HDFS-11038:
--

Reopening this ticket. HDFS-9462 will be deprecated because some of its work
overlaps with the Query command; however, the work to support running multiple
commands sequentially under one setup is quite useful and will be tracked here.

> DiskBalancer: support running multiple commands under one setup of disk 
> balancer
> 
>
> Key: HDFS-11038
> URL: https://issues.apache.org/jira/browse/HDFS-11038
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> The disk balancer follows/reuses a rule designed for the HDFS balancer: only
> one instance is allowed to run at a time. This is correct in a production
> system, to avoid inconsistencies, but it is not ideal for writing and running
> unit tests. For example, it should be possible to run the plan, execute, and
> scan commands under one setup of the disk balancer. The one-instance rule
> throws an exception complaining 'Another instance is running'. In such a
> case, there is no way to do full life-cycle tests that involve a sequence of
> commands.






[jira] [Created] (HDFS-11059) TestWebHdfsTimeouts fails due to null SocketTimeoutException

2016-10-25 Thread John Zhuge (JIRA)
John Zhuge created HDFS-11059:
-

 Summary: TestWebHdfsTimeouts fails due to null 
SocketTimeoutException
 Key: HDFS-11059
 URL: https://issues.apache.org/jira/browse/HDFS-11059
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


TestWebHdfsTimeouts expects a SocketTimeoutException with a "connect timed
out" or "Read timed out" message, but sometimes fails when the message is
"null". This occurred in 4 out of 100 runs.

{{SocksSocketImpl#remainingMillis}} may throw a SocketTimeoutException with no message:
{code}
private static int remainingMillis(long deadlineMillis) throws IOException {
    if (deadlineMillis == 0L)
        return 0;

    final long remaining = deadlineMillis - System.currentTimeMillis();
    if (remaining > 0)
        return (int) remaining;

    throw new SocketTimeoutException();   // <==
}
{code}
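One way the test assertion could tolerate the null message, as a sketch (the
call and helper names here are assumptions about the test, not the actual
fix):
{code}
try {
  fs.listFiles(new Path("/"), false);
  fail("expected SocketTimeoutException");
} catch (SocketTimeoutException e) {
  // Accept "connect timed out", "Read timed out", or the null message
  // produced by SocksSocketImpl#remainingMillis.
  String message = e.getMessage();
  assertTrue(message == null || message.contains("timed out"));
}
{code}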






[jira] [Created] (HDFS-11060) make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable

2016-10-25 Thread Lantao Jin (JIRA)
Lantao Jin created HDFS-11060:
-

 Summary: make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable
 Key: HDFS-11060
 URL: https://issues.apache.org/jira/browse/HDFS-11060
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Lantao Jin
Priority: Minor


Currently, the easiest way to determine which blocks are missing is the NN web
UI or JMX. Unfortunately, because DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED=100
is hard-coded in FSNamesystem, only 100 missing blocks can be returned by the
UI and JMX. Even the result of the URL
"https://nn:50070/fsck?listcorruptfileblocks=1&path=%2F" is limited by this
hard-coded value.

I know fsck can return more than 100 results, but for security reasons (with
Kerberos) it is very hard to integrate into customer programs and scripts.

So I think we should add a configurable variable, "maxCorruptFileBlocksReturned",
to address this. If the community thinks it's worth doing, I will submit a
patch; if not, please feel free to tell me why.
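For illustration, the new property might look like this in hdfs-default.xml
(the key name is a hypothetical suggestion, not a committed name):
{code}
<property>
  <name>dfs.namenode.max-corrupt-file-blocks-returned</name>
  <value>100</value>
  <description>
    Maximum number of corrupt/missing file blocks returned by the NameNode
    web UI, JMX, and the listCorruptFileBlocks call.
  </description>
</property>
{code}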


