[jira] [Resolved] (HDFS-6277) WebHdfsFileSystem#toUrl does not perform character escaping for rename

2016-09-30 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-6277.
-
Resolution: Won't Fix

This bug is present in the 1.x line, but not 2.x or 3.x.  I'm resolving this as 
Won't Fix, because 1.x is no longer under active maintenance.

> WebHdfsFileSystem#toUrl does not perform character escaping for rename 
> ---
>
> Key: HDFS-6277
> URL: https://issues.apache.org/jira/browse/HDFS-6277
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Ramya Sunil
>Assignee: Chris Nauroth
>
> Found this issue while testing HDFS-6141. WebHdfsFileSystem#toUrl does not 
> perform character escaping for rename, which causes the operation to fail. 
> This bug does not exist in 2.x.
> For example: 
> $ hadoop dfs -rmr 'webhdfs://:/tmp/test dirname with spaces'
> Problem with Trash.Unexpected HTTP response: code=400 != 200, op=RENAME, 
> message=Bad Request. Consider using -skipTrash option
> rmr: Failed to move to trash: webhdfs://:/tmp/test dirname 
> with spaces
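
A minimal standalone sketch of the missing escaping (hypothetical code, not
the actual WebHdfsFileSystem#toUrl implementation): the destination path must
be percent-encoded before it is placed in the query string.

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class RenameQueryEscaping {
  public static void main(String[] args) throws UnsupportedEncodingException {
    String dst = "/tmp/test dirname with spaces";
    // Unescaped: the raw spaces produce a malformed query -> HTTP 400.
    System.out.println("op=RENAME&destination=" + dst);
    // Escaped: URLEncoder percent-encodes the path (spaces become '+',
    // which is valid in a query string).
    System.out.println("op=RENAME&destination="
        + URLEncoder.encode(dst, "UTF-8"));
  }
}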



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-09-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/

[Sep 29, 2016 10:35:00 AM] (stevel) HADOOP-13663 Index out of range in 
SysInfoWindows. Contributed by Inigo
[Sep 29, 2016 2:00:31 PM] (jianhe) YARN-4205. Add a service for monitoring 
application life time out.
[Sep 29, 2016 3:27:17 PM] (jlowe) MAPREDUCE-6771. RMContainerAllocator sends 
container diagnostics event
[Sep 29, 2016 4:01:00 PM] (stevel) HADOOP-13164 Optimize 
S3AFileSystem::deleteUnnecessaryFakeDirectories.
[Sep 29, 2016 6:27:30 PM] (kihwal) HADOOP-13537. Support external calls in the 
RPC call queue. Contributed
[Sep 29, 2016 8:59:09 PM] (cnauroth) Revert "HADOOP-13081. add the ability to 
create multiple UGIs/subjects
[Sep 29, 2016 10:11:41 PM] (Arun Suresh) YARN-5486. Update 
OpportunisticContainerAllocatorAMService::allocate




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ha.TestZKFailoverController 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/180/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Created] (HDFS-10934) TestDFSShell.testStat fails intermittently

2016-09-30 Thread Eric Badger (JIRA)
Eric Badger created HDFS-10934:
--

 Summary: TestDFSShell.testStat fails intermittently
 Key: HDFS-10934
 URL: https://issues.apache.org/jira/browse/HDFS-10934
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


Saw this failure in an internal build. Reran the test 30 times and it failed 
once with the same type of failure. 

{noformat}
org.junit.ComparisonFailure: Unexpected -stat output: 2016-09-30 03:48:56
2016-09-30 03:48:57
 expected:<...6
2016-09-30 03:48:5[7]
> but was:<...6
2016-09-30 03:48:5[6]
>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.apache.hadoop.hdfs.TestDFSShell.testStat(TestDFSShell.java:2082)
{noformat}
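
A standalone sketch of this class of flakiness (hypothetical code, not the
actual TestDFSShell logic): timestamps captured milliseconds apart can still
format to different seconds when they straddle a second boundary.

{code}
import java.text.SimpleDateFormat;
import java.util.Date;

public class SecondBoundaryRace {
  public static void main(String[] args) {
    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    long t = System.currentTimeMillis();
    String first = fmt.format(new Date(t));        // e.g. ...03:48:56
    String second = fmt.format(new Date(t + 50));  // may be ...03:48:57
    // An equality assertion between two such strings fails intermittently.
    System.out.println(first + " vs " + second);
  }
}
{code}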



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10935) TestFileChecksum tests are failing after HDFS-10460 (Mac only?)

2016-09-30 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-10935:
--

 Summary: TestFileChecksum tests are failing after HDFS-10460 (Mac 
only?)
 Key: HDFS-10935
 URL: https://issues.apache.org/jira/browse/HDFS-10935
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Wei-Chiu Chuang


On my Mac, TestFileChecksum has been failing since HDFS-10460. However, the 
Jenkins jobs have not reported the failures. Maybe it's an issue with my Mac 
or JDK.

9 out of 21 tests failed. 

{noformat}
java.lang.AssertionError: Checksum mismatches!

at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery(TestFileChecksum.java:227)
at 
org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery10(TestFileChecksum.java:336)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10936) allow snapshot command should not print exception trace when folder not found

2016-09-30 Thread Jagadesh Kiran N (JIRA)
Jagadesh Kiran N created HDFS-10936:
---

 Summary: allow snapshot command should not print exception trace 
when folder not found
 Key: HDFS-10936
 URL: https://issues.apache.org/jira/browse/HDFS-10936
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jagadesh Kiran N
Assignee: Jagadesh Kiran N


Allowing a snapshot for a folder that is not present:
{code}
./hdfs dfsadmin -allowsnapshot /kiran1
{code}

produces the following exception trace:

{code}
allowsnapshot: Directory does not exist: /kiran1
at 
org.apache.hadoop.hdfs.server.namenode.INodeDirectory.valueOf(INodeDirectory.java:61)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.setSnapshottable(SnapshotManager.java:113)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.allowSnapshot(FSDirSnapshotOp.java:60)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allowSnapshot(FSNamesystem.java:7406)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.allowSnapshot(NameNodeRpcServer.java:1766)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.allowSnapshot(ClientNamenodeProtocolServerSideTranslatorPB.java:1131)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:973)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2143)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2139)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2137)
{code}
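
A minimal sketch of the proposed fix (hypothetical handler; not the actual
DFSAdmin code): catch the exception and print only its message, not the trace.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.ipc.RemoteException;

int allowSnapshot(DistributedFileSystem dfs, String dir) {
  try {
    dfs.allowSnapshot(new Path(dir));
  } catch (RemoteException e) {
    // Print only the message, e.g.
    // "allowsnapshot: Directory does not exist: /kiran1"
    System.err.println("allowsnapshot: " + e.getLocalizedMessage());
    return -1;
  } catch (IOException e) {
    System.err.println("allowsnapshot: " + e.getMessage());
    return -1;
  }
  return 0;
}
{code}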





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Is anyone seeing this during trunk build?

2016-09-30 Thread Kihwal Lee
> or maybe replace with bcel-findbugs ?
We are not invoking this directly, but maven-project-info-reports-plugin 
does. It depends on maven-shared-jar 1.1, which uses BCEL.
I tried the latest maven-project-info-reports-plugin v2.9 and it solves the 
problem; the maven-shared-jar 1.2 it depends on contains the workaround.
I will file a JIRA to update maven-project-info-reports-plugin.
Kihwal


  From: Arun Suresh 
 To: Kihwal Lee  
Cc: Ted Yu ; Hdfs-dev ; Hadoop 
Common 
 Sent: Thursday, September 29, 2016 6:58 PM
 Subject: Re: Is anyone seeing this during trunk build?
   
It looks like *org.apache.hadoop.hdfs.StripeReader* is using a Java 8
lambda expression, which Commons BCEL is still not comfortable with.
As per https://issues.apache.org/jira/browse/BCEL-173, it should be fixed in
release 6.0 of Commons BCEL.
Or maybe replace it with bcel-findbugs, as suggested by:
https://github.com/RichardWarburton/lambda-behave/issues/31#issuecomment-86052095
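
(For context: constant-pool tag 18 is CONSTANT_InvokeDynamic, the entry javac
emits for lambda expressions; BCEL 5.x predates Java 8 and rejects it. A
minimal standalone reproduction sketch, not taken from StripeReader:)

import java.util.function.Supplier;

public class LambdaTagExample {
  public static void main(String[] args) {
    // The lambda below compiles to an invokedynamic call site, which adds a
    // tag-18 constant-pool entry that BCEL 5.x reports as
    // "Invalid byte tag in constant pool: 18".
    Supplier<String> s = () -> "hello";
    System.out.println(s.get());
  }
}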

On Thu, Sep 29, 2016 at 2:01 PM, Kihwal Lee 
wrote:

> This also shows up in the precommit builds. This is not failing the build,
> so it might scroll over quickly before you realize.
> Search for ClassFormatException
> https://builds.apache.org/job/PreCommit-HDFS-Build/16928/
> artifact/patchprocess/branch-mvninstall-root.txt
>
>      From: Ted Yu 
>  To: Kihwal Lee 
> Cc: Hdfs-dev ; Hadoop Common <
> common-...@hadoop.apache.org>
>  Sent: Wednesday, September 28, 2016 7:16 PM
>  Subject: Re: Is anyone seeing this during trunk build?
>
> I used the same command but didn't see the error you saw.
>
> Here is my environment:
>
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
> MaxPermSize=512M; support was removed in 8.0
> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
> 2015-11-10T08:41:47-08:00)
> Maven home: /Users/tyu/apache-maven-3.3.9
> Java version: 1.8.0_91, vendor: Oracle Corporation
> Java home:
> /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "mac os x", version: "10.11.3", arch: "x86_64", family: "mac"
>
> FYI
>
> On Wed, Sep 28, 2016 at 3:54 PM, Kihwal Lee 
> wrote:
>
> > I just noticed this during a trunk build. I was doing "mvn clean install
> > -DskipTests".  The build succeeds.
> > Is anyone seeing this?  I am using openjdk8u102.
> >
> >
> >
> > ===
> > [WARNING] Unable to process class org/apache/hadoop/hdfs/
> StripeReader.class
> > in JarAnalyzer File /home1/kihwal/devel/apache/
> hadoop/hadoop-hdfs-project/
> > hadoop-hdfs-client/target/hadoop-hdfs-client-3.0.0-alpha2-SNAPSHOT.jar
> > org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in
> > constant pool: 18
> >    at org.apache.bcel.classfile.Constant.readConstant(Constant.java:146)
> >    at org.apache.bcel.classfile.ConstantPool.<init>(
> ConstantPool.java:67)
> >    at org.apache.bcel.classfile.ClassParser.readConstantPool(
> > ClassParser.java:222)
> >    at org.apache.bcel.classfile.ClassParser.parse(ClassParser.java:136)
> >    at org.apache.maven.shared.jar.classes.JarClassesAnalysis.
> > analyze(JarClassesAnalysis.java:92)
> >    at org.apache.maven.report.projectinfo.dependencies.Dependencies.
> > getJarDependencyDetails(Dependencies.java:255)
> >    at org.apache.maven.report.projectinfo.dependencies.
> > renderer.DependenciesRenderer.hasSealed(DependenciesRenderer.java:1454)
> >    at org.apache.maven.report.projectinfo.dependencies.
> > renderer.DependenciesRenderer.renderSectionDependencyFileDet
> > ails(DependenciesRenderer.java:536)
> >    at org.apache.maven.report.projectinfo.dependencies.
> > renderer.DependenciesRenderer.renderBody(DependenciesRenderer.java:263)
> >    at org.apache.maven.reporting.AbstractMavenReportRenderer.render(
> > AbstractMavenReportRenderer.java:79)
> >    at org.apache.maven.report.projectinfo.DependenciesReport.
> > executeReport(DependenciesReport.java:186)
> >    at org.apache.maven.reporting.AbstractMavenReport.generate(
> > AbstractMavenReport.java:190)
> >    at org.apache.maven.report.projectinfo.AbstractProjectInfoReport.
> > execute(AbstractProjectInfoReport.java:202)
> >    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(
> > DefaultBuildPluginManager.java:101)
> >    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> > MojoExecutor.java:209)
> >    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> > MojoExecutor.java:153)
> >    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> > MojoExecutor.java:145)
> >    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.
> > buildProject(LifecycleModuleBuilder.java:84)
> >    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.
> > buildProject(LifecycleModuleBuilder.java:59)
> >    at org.apache.maven.lifecycle.internal.LifecycleStarter.
> > singleThreadedBuild(LifecycleStarter.java:183)
> >    at org.apache.maven.lifecycle.internal.LifecycleStarter.
> > execute(LifecycleStarter.java:161)
> >    at org.apac

[jira] [Created] (HDFS-10937) libhdfs++: hdfsRead return -1 at eof

2016-09-30 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-10937:
-

 Summary: libhdfs++: hdfsRead return -1 at eof
 Key: HDFS-10937
 URL: https://issues.apache.org/jira/browse/HDFS-10937
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen


The libhdfs++ implementation of hdfsRead appears to be out of spec.  The header 
says it will return 0 at EOF, but the current implementation returns -1 with an 
errno of 261 (invalid offset).

The basic POSIX-style read loop of
while ((bytesRead = hdfsRead(...)) != 0) {...}
won't work with libhdfs++'s hdfsRead method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10938) Ozone: SCM: Add datanode protocol

2016-09-30 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10938:
---

 Summary: Ozone: SCM: Add datanode protocol
 Key: HDFS-10938
 URL: https://issues.apache.org/jira/browse/HDFS-10938
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-7240


Adds the datanode .proto file. This file is not used yet; adding it as a 
separate file makes it easy to review and discuss the protocol in isolation. 
The protocol will be used in future patches.

This file defines how a datanode communicates with SCM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-09-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/110/

[Sep 29, 2016 2:00:31 PM] (jianhe) YARN-4205. Add a service for monitoring 
application life time out.
[Sep 29, 2016 3:27:17 PM] (jlowe) MAPREDUCE-6771. RMContainerAllocator sends 
container diagnostics event
[Sep 29, 2016 4:01:00 PM] (stevel) HADOOP-13164 Optimize 
S3AFileSystem::deleteUnnecessaryFakeDirectories.
[Sep 29, 2016 6:27:30 PM] (kihwal) HADOOP-13537. Support external calls in the 
RPC call queue. Contributed
[Sep 29, 2016 8:59:09 PM] (cnauroth) Revert "HADOOP-13081. add the ability to 
create multiple UGIs/subjects
[Sep 29, 2016 10:11:41 PM] (Arun Suresh) YARN-5486. Update 
OpportunisticContainerAllocatorAMService::allocate
[Sep 30, 2016 9:17:30 AM] (aajisaka) HADOOP-13640. Fix findbugs warning in 
VersionInfoMojo.java. Contributed




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestEncryptedTransfer 
   hadoop.hdfs.TestEncryptionZonesWithKMS 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys 
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters 
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.TestFileChecksum 
   org.apache.hadoop.hdfs.TestWriteReadStripedFile 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.mapred.TestMRIntermediateDataEncryption 
   org.apache.hadoop.mapred.TestMROpportunisticMaps 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/110/artifact/out/patch-compile-root.txt
  [308K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/110/artifact/out/patch-compile-root.txt
  [308K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/110/artifact/out/patch-compile-root.txt
  [308K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/110/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [288K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/110/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/110/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-se

[jira] [Created] (HDFS-10939) Reduce performance penalty of encryption zones

2016-09-30 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-10939:
--

 Summary: Reduce performance penalty of encryption zones
 Key: HDFS-10939
 URL: https://issues.apache.org/jira/browse/HDFS-10939
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp


The encryption zone APIs should be optimized to extensively use IIPs 
(INodesInPath) to eliminate redundant path resolutions.  The performance 
penalties incurred by common operations like creation of file statuses may be 
reduced by more extensive short-circuiting of EZ lookups when no EZs exist.  
File creates should not all be subjected to the multi-stage locking penalty 
that is required only for EDEK generation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10940) Reduce performance penalty of block caching when not used

2016-09-30 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-10940:
--

 Summary: Reduce performance penalty of block caching when not used
 Key: HDFS-10940
 URL: https://issues.apache.org/jira/browse/HDFS-10940
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp


For every block location generated, the CacheManager will create a junk object 
for a hash lookup of cached locations.  If there are no cached blocks, none of 
this is required.
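
A minimal sketch of the proposed short-circuit (hypothetical names and types; 
not the actual CacheManager code):

{code}
import java.util.HashSet;
import java.util.Set;

class CacheManagerSketch {
  static final class CachedBlock {
    final long blockId;
    CachedBlock(long blockId) { this.blockId = blockId; }
    @Override public int hashCode() { return Long.hashCode(blockId); }
    @Override public boolean equals(Object o) {
      return o instanceof CachedBlock && ((CachedBlock) o).blockId == blockId;
    }
  }

  private final Set<CachedBlock> cachedBlocks = new HashSet<>();

  void setCachedLocations(long blockId) {
    if (cachedBlocks.isEmpty()) {
      return;  // nothing is cached anywhere: skip allocating the lookup key
    }
    // Only now pay for the temporary ("junk") lookup object.
    CachedBlock key = new CachedBlock(blockId);
    if (cachedBlocks.contains(key)) {
      // attach the cached datanode locations to the generated block location
    }
  }
}
{code}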



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.6.5 (RC0)

2016-09-30 Thread Ming Ma
+1

Successfully compiled a standalone HDFS app using the 2.6.5 jars extracted
from the release tar.gz.

On Thu, Sep 29, 2016 at 10:33 AM, Chris Trezzo  wrote:

> +1
>
> Thanks Sangjin!
>
> 1. Verified md5 checksums and signature on src, and release tar.gz.
> 2. Built from source.
> 3. Started up a pseudo distributed cluster.
> 4. Successfully ran a PI job.
> 5. Ran the balancer.
> 6. Inspected UI for RM, NN, JobHistory.
>
> On Tue, Sep 27, 2016 at 4:11 PM, Lei Xu  wrote:
>
> > +1
> >
> > The steps I've done:
> >
> > * Downloaded release tar and source tar, verified MD5.
> > * Run a HDFS cluster, and copy files between local filesystem and HDFS.
> >
> >
> > On Tue, Sep 27, 2016 at 1:28 PM, Sangjin Lee  wrote:
> > > Hi folks,
> > >
> > > I have created a release candidate RC0 for the Apache Hadoop 2.6.5
> > release
> > > (the next maintenance release in the 2.6.x release line). Below are the
> > > details of this release candidate:
> > >
> > > The RC is available for validation at:
> > > http://home.apache.org/~sjlee/hadoop-2.6.5-RC0/.
> > >
> > > The RC tag in git is release-2.6.5-RC0 and its git commit is
> > > 6939fc935fba5651fdb33386d88aeb8e875cf27a.
> > >
> > > The maven artifacts are staged via repository.apache.org at:
> > > https://repository.apache.org/content/repositories/
> orgapachehadoop-1048/
> > .
> > >
> > > You can find my public key at
> > > http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.
> > >
> > > Please try the release and vote. The vote will run for the usual 5
> days.
> > > Huge thanks to Chris Trezzo for spearheading the release management and
> > > doing all the work!
> > >
> > > Thanks,
> > > Sangjin
> >
> >
> >
> > --
> > Lei (Eddy) Xu
> > Software Engineer, Cloudera
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>


[jira] [Created] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log

2016-09-30 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-10941:
-

 Summary: Improve BlockManager#processMisReplicatesAsync log
 Key: HDFS-10941
 URL: https://issues.apache.org/jira/browse/HDFS-10941
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


BlockManager#processMisReplicatesAsync is the daemon thread running inside the 
namenode to handle mis-replicated blocks. As shown below, it has a trace log for 
each block in the cluster as it is processed (1 blocks per iteration, with a 
10s sleep in between). 
{code}
  MisReplicationResult res = processMisReplicatedBlock(block);
  if (LOG.isTraceEnabled()) {
LOG.trace("block " + block + ": " + res);
  }
{code}
However, this is not very useful: dumping every block in the cluster overwhelms 
the namenode log with little useful information, given that the majority of 
blocks are usually not over/under-replicated. This ticket is opened to improve 
the log for easier troubleshooting of block-replication issues by:
 
1) adding a debug log for blocks that get an under/over-replicated result during 
{{processMisReplicatedBlock()}}, or

2) changing to a trace log only for blocks that get a non-OK result during 
{{processMisReplicatedBlock()}} (see the sketch below) 
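
A minimal sketch of option 2 (assuming the result enum exposes an {{OK}} 
value):

{code}
  MisReplicationResult res = processMisReplicatedBlock(block);
  // Trace only the interesting (non-OK) results instead of every block.
  if (res != MisReplicationResult.OK && LOG.isTraceEnabled()) {
    LOG.trace("block " + block + ": " + res);
  }
{code}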




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.6.5 (RC0)

2016-09-30 Thread Sangjin Lee
Thanks Ming!

I know it's getting late on Friday, but I'd like to ask you folks to take a
little time to check out the release candidate and vote on it. That would
be fantastic. Thanks!

Sangjin

On Fri, Sep 30, 2016 at 3:53 PM Ming Ma  wrote:

> +1
>
> Successfully compiled a standalone HDFS app using the 2.6.5 jars extracted
> from the release tar.gz.
>
> On Thu, Sep 29, 2016 at 10:33 AM, Chris Trezzo  wrote:
>
> > +1
> >
> > Thanks Sangjin!
> >
> > 1. Verified md5 checksums and signature on src, and release tar.gz.
> > 2. Built from source.
> > 3. Started up a pseudo distributed cluster.
> > 4. Successfully ran a PI job.
> > 5. Ran the balancer.
> > 6. Inspected UI for RM, NN, JobHistory.
> >
> > On Tue, Sep 27, 2016 at 4:11 PM, Lei Xu  wrote:
> >
> > > +1
> > >
> > > The steps I've done:
> > >
> > > * Downloaded release tar and source tar, verified MD5.
> > > * Run a HDFS cluster, and copy files between local filesystem and HDFS.
> > >
> > >
> > > On Tue, Sep 27, 2016 at 1:28 PM, Sangjin Lee  wrote:
> > > > Hi folks,
> > > >
> > > > I have created a release candidate RC0 for the Apache Hadoop 2.6.5
> > > release
> > > > (the next maintenance release in the 2.6.x release line). Below are
> the
> > > > details of this release candidate:
> > > >
> > > > The RC is available for validation at:
> > > > http://home.apache.org/~sjlee/hadoop-2.6.5-RC0/.
> > > >
> > > > The RC tag in git is release-2.6.5-RC0 and its git commit is
> > > > 6939fc935fba5651fdb33386d88aeb8e875cf27a.
> > > >
> > > > The maven artifacts are staged via repository.apache.org at:
> > > > https://repository.apache.org/content/repositories/
> > orgapachehadoop-1048/
> > > .
> > > >
> > > > You can find my public key at
> > > > http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.
> > > >
> > > > Please try the release and vote. The vote will run for the usual 5
> > days.
> > > > Huge thanks to Chris Trezzo for spearheading the release management
> and
> > > > doing all the work!
> > > >
> > > > Thanks,
> > > > Sangjin
> > >
> > >
> > >
> > > --
> > > Lei (Eddy) Xu
> > > Software Engineer, Cloudera
> > >
> > > -
> > > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> > >
> > >
> >
>


[jira] [Created] (HDFS-10942) Incorrect handling of flushing edit logs to JN

2016-09-30 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-10942:


 Summary: Incorrect handling of flushing edit logs to JN
 Key: HDFS-10942
 URL: https://issues.apache.org/jira/browse/HDFS-10942
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: s, hdfs
Reporter: Yongjun Zhang


We use EditsDoubleBuffer to handle edit logs:
{code}
/**
 * A double-buffer for edits. New edits are written into the first buffer
 * while the second is available to be flushed. Each time the double-buffer
 * is flushed, the two internal buffers are swapped. This allows edits
 * to progress concurrently to flushes without allocating new buffers each
 * time.
 */
{code}

The following code, which flushes the ready buffer, first copies the ready 
buffer into a local buffer and then flushes that copy.

{code}

QuorumOutputStream (buf in the code below is an instance of EditsDoubleBuffer):

  @Override
  protected void flushAndSync(boolean durable) throws IOException {
int numReadyBytes = buf.countReadyBytes();
if (numReadyBytes > 0) {
  int numReadyTxns = buf.countReadyTxns();
  long firstTxToFlush = buf.getFirstReadyTxId();

  assert numReadyTxns > 0;

  // Copy from our double-buffer into a new byte array. This is for
  // two reasons:
  // 1) The IPC code has no way of specifying to send only a slice of
  //a larger array.
  // 2) because the calls to the underlying nodes are asynchronous, we
  //need a defensive copy to avoid accidentally mutating the buffer
  //before it is sent.
  DataOutputBuffer bufToSend = new DataOutputBuffer(numReadyBytes);
  buf.flushTo(bufToSend);
  assert bufToSend.getLength() == numReadyBytes;
  byte[] data = bufToSend.getData();
  assert data.length == bufToSend.getLength();
{code}

The above call doesn't seem to prevent the original copy of the buffer inside 
buf from being swapped by the following method:

{code}
EditsDoubleBuffer:

 public void setReadyToFlush() {
assert isFlushed() : "previous data not flushed yet";
TxnBuffer tmp = bufReady;
bufReady = bufCurrent;
bufCurrent = tmp;
  }

{code}

Though we have some runtime assertions in the code, assertions are not enabled 
in production, so the condition an assert expects may actually be false at 
runtime, which could possibly cause a mess. When a condition is not as the 
assertion expects, it seems a real exception should be thrown instead.

So two issues in summary:
- How we synchronize between the flush and the swap of the two buffers
- Whether we should throw a real exception instead of relying on asserts, 
which are normally disabled at runtime.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.6.5 (RC0)

2016-09-30 Thread Chris Douglas
+1 Verified checksum and signature. Unpacked the jar, started
single-node HDFS cluster, did some cursory checks.

Read through the commit log from 2.6.4; particularly happy to see
HADOOP-12893. -C

On Tue, Sep 27, 2016 at 1:28 PM, Sangjin Lee  wrote:
> Hi folks,
>
> I have created a release candidate RC0 for the Apache Hadoop 2.6.5 release
> (the next maintenance release in the 2.6.x release line). Below are the
> details of this release candidate:
>
> The RC is available for validation at:
> http://home.apache.org/~sjlee/hadoop-2.6.5-RC0/.
>
> The RC tag in git is release-2.6.5-RC0 and its git commit is
> 6939fc935fba5651fdb33386d88aeb8e875cf27a.
>
> The maven artifacts are staged via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1048/.
>
> You can find my public key at
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.
>
> Please try the release and vote. The vote will run for the usual 5 days.
> Huge thanks to Chris Trezzo for spearheading the release management and
> doing all the work!
>
> Thanks,
> Sangjin

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.6.5 (RC0)

2016-09-30 Thread John Zhuge
Hi Sangjin:

Thanks for preparing the release. I noticed these lines
in CHANGES-YARN-2.6.5-RC0.txt (they look like leftover merge-conflict markers):

<<< HEAD
> YARN-4434. NodeManager Disk Checker parameter documentation is not
> correct.
> (Weiwei Yang via aajisaka)
>
> Release 2.6.2 - 2015-10-28
> ===
> YARN-3893. Both RM in active state when Admin#transitionToActive
> failure
> from refeshAll() (Bibin A Chundatt via rohithsharmaks)
>
> Release 2.7.1 - 2015-07-06
> >>> 7d6687f... YARN-3893. Both RM in active state when
> Admin#transitionToActive failure from refeshAll() (Bibin A Chundatt via
> rohithsharmaks)


Thanks,

John Zhuge
Software Engineer, Cloudera

On Tue, Sep 27, 2016 at 1:28 PM, Sangjin Lee  wrote:

> Hi folks,
>
> I have created a release candidate RC0 for the Apache Hadoop 2.6.5 release
> (the next maintenance release in the 2.6.x release line). Below are the
> details of this release candidate:
>
> The RC is available for validation at:
> http://home.apache.org/~sjlee/hadoop-2.6.5-RC0/.
>
> The RC tag in git is release-2.6.5-RC0 and its git commit is
> 6939fc935fba5651fdb33386d88aeb8e875cf27a.
>
> The maven artifacts are staged via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1048/.
>
> You can find my public key at
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.
>
> Please try the release and vote. The vote will run for the usual 5 days.
> Huge thanks to Chris Trezzo for spearheading the release management and
> doing all the work!
>
> Thanks,
> Sangjin
>


Re: [VOTE] Release Apache Hadoop 2.6.5 (RC0)

2016-09-30 Thread John Zhuge
+1 (non-binding)

- Verified checksums and signatures of the tarballs
- Built source with Java 1.8.0_101 on Centos 7.2 with native
- Built source with Java 1.7.0_79 on Mac
- Verified licenses and notices using verify-license-notice.sh
- Deployed a pseudo cluster

- Ran basic dfs, distcp, ACL, webhdfs commands
- Ran the MapReduce wordcount example
- Ran balancer


Thanks,
John

John Zhuge
Software Engineer, Cloudera

On Fri, Sep 30, 2016 at 6:15 PM, Chris Douglas 
wrote:

> +1 Verified checksum and signature. Unpacked the jar, started
> single-node HDFS cluster, did some cursory checks.
>
> Read through the commit log from 2.6.4; particularly happy to see
> HADOOP-12893. -C
>
> On Tue, Sep 27, 2016 at 1:28 PM, Sangjin Lee  wrote:
> > Hi folks,
> >
> > I have created a release candidate RC0 for the Apache Hadoop 2.6.5
> release
> > (the next maintenance release in the 2.6.x release line). Below are the
> > details of this release candidate:
> >
> > The RC is available for validation at:
> > http://home.apache.org/~sjlee/hadoop-2.6.5-RC0/.
> >
> > The RC tag in git is release-2.6.5-RC0 and its git commit is
> > 6939fc935fba5651fdb33386d88aeb8e875cf27a.
> >
> > The maven artifacts are staged via repository.apache.org at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1048/
> .
> >
> > You can find my public key at
> > http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.
> >
> > Please try the release and vote. The vote will run for the usual 5 days.
> > Huge thanks to Chris Trezzo for spearheading the release management and
> > doing all the work!
> >
> > Thanks,
> > Sangjin
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HDFS-10943) rollEditLog expects empty EditsDoubleBuffer.bufCurrent which is not guaranteed

2016-09-30 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-10943:


 Summary: rollEditLog expects empty EditsDoubleBuffer.bufCurrent 
which is not guaranteed
 Key: HDFS-10943
 URL: https://issues.apache.org/jira/browse/HDFS-10943
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang


Per the following stack trace:
{code}
FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: finalize log 
segment 10562075963, 10562174157 failed for required journal 
(JournalAndStream(mgr=QJM to [0.0.0.1:8485, 0.0.0.2:8485, 0.0.0.3:8485, 
0.0.0.4:8485, 0.0.0.5:8485], stream=QuorumOutputStream starting at txid 
10562075963))
java.io.IOException: FSEditStream has 49708 bytes still to be flushed and 
cannot be closed.
at 
org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer.close(EditsDoubleBuffer.java:66)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.close(QuorumOutputStream.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.closeStream(JournalSet.java:115)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet$4.apply(JournalSet.java:235)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.finalizeLogSegment(JournalSet.java:231)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1243)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1172)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1243)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:6437)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1002)
at 
org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:142)
at 
org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12025)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
2016-09-23 21:40:59,618 WARN 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Aborting 
QuorumOutputStream starting at txid 10562075963
{code}

The exception comes from EditsDoubleBuffer:
{code}
 public void close() throws IOException {
Preconditions.checkNotNull(bufCurrent);
Preconditions.checkNotNull(bufReady);

int bufSize = bufCurrent.size();
if (bufSize != 0) {
  throw new IOException("FSEditStream has " + bufSize
  + " bytes still to be flushed and cannot be closed.");
}

IOUtils.cleanup(null, bufCurrent, bufReady);
bufCurrent = bufReady = null;
  }
{code}

We can see that FSNamesystem.rollEditLog expects  EditsDoubleBuffer.bufCurrent 
to be empty.

Edits are recorded via FSEditLog$logSync, which does:
{code}
   * The data is double-buffered within each edit log implementation so that
   * in-memory writing can occur in parallel with the on-disk writing.
   *
   * Each sync occurs in three steps:
   *   1. synchronized, it swaps the double buffer and sets the isSyncRunning
   *  flag.
   *   2. unsynchronized, it flushes the data to storage
   *   3. synchronized, it resets the flag and notifies anyone waiting on the
   *  sync.
   *
   * The lack of synchronization on step 2 allows other threads to continue
   * to write into the memory buffer while the sync is in progress.
   * Because this step is unsynchronized, actions that need to avoid
   * concurrency with sync() should be synchronized and also call
   * waitForSyncToFinish() before assuming they are running alone.
   */
{code}

We can see that step 2 is intentionally not synchronized, to let other threads 
write into the memory buffer, presumably EditsDoubleBuffer.bufCurrent. This 
means that EditsDoubleBuffer.bufCurrent can be non-empty when logSync is 
done.

Now if rollEditLog happens at that moment, the above exception is thrown.
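
To illustrate, a toy model of the double buffer (a greatly simplified sketch, 
not the actual EditsDoubleBuffer code):

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;

class ToyDoubleBuffer {
  private ByteArrayOutputStream bufCurrent = new ByteArrayOutputStream();
  private ByteArrayOutputStream bufReady = new ByteArrayOutputStream();

  synchronized void write(byte[] edit) throws IOException {
    bufCurrent.write(edit);               // new edits always land in bufCurrent
  }

  synchronized void setReadyToFlush() {
    ByteArrayOutputStream tmp = bufReady; // swap, as in EditsDoubleBuffer
    bufReady = bufCurrent;
    bufCurrent = tmp;
  }

  void close() throws IOException {
    if (bufCurrent.size() != 0) {         // the check that fires in the trace
      throw new IOException("FSEditStream has " + bufCurrent.size()
          + " bytes still to be flushed and cannot be closed.");
    }
  }
}
// A handler thread that calls write() after setReadyToFlush() but before
// rollEditLog reaches close() leaves bufCurrent non-empty, producing the
// FATAL error shown above.
{code}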

Another interesting observation: the EditsDoubleBuffer can be as large as 
{{private int outputBufferCapacity = 512 * 1024;}}, which means a lot of 
edits could get buffered before they are flushed to the JNs. 

How can rollEdit operation expect the EditsDoubleBuffer.buf

Re: [VOTE] Release Apache Hadoop 2.6.5 (RC0)

2016-09-30 Thread Brahma Reddy Battula
Thanks Sangjin!


+1 ( non- binding)


--Downloaded the source and compiled

--Installed HA cluster

--Verified basic fsshell commands

--Did regression testing on the issues handled by me

--Ran pi, terasort, and SLive jobs; all work fine.


Happy to see HDFS-9530.


Read through the commit log; the following commits seem to be missing from 
CHANGES.txt:


HDFS-10653, HADOOP-13290, HDFS-10544, HADOOP-13255, HADOOP-13189



--Brahma Reddy Battula



From: sjl...@gmail.com  on behalf of Sangjin Lee 

Sent: Wednesday, September 28, 2016 1:58 AM
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 2.6.5 (RC0)

Hi folks,

I have created a release candidate RC0 for the Apache Hadoop 2.6.5 release
(the next maintenance release in the 2.6.x release line). Below are the
details of this release candidate:

The RC is available for validation at:
http://home.apache.org/~sjlee/hadoop-2.6.5-RC0/.

The RC tag in git is release-2.6.5-RC0 and its git commit is
6939fc935fba5651fdb33386d88aeb8e875cf27a.

The maven artifacts are staged via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1048/.

You can find my public key at
http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.

Please try the release and vote. The vote will run for the usual 5 days.
Huge thanks to Chris Trezzo for spearheading the release management and
doing all the work!

Thanks,
Sangjin


[jira] [Created] (HDFS-10944) Change javadoc "Allow Snapshot " to " DisAllow Snapshot" for disallowSnapshot function

2016-09-30 Thread Jagadesh Kiran N (JIRA)
Jagadesh Kiran N created HDFS-10944:
---

 Summary: Change javadoc "Allow Snapshot " to " DisAllow Snapshot" 
for disallowSnapshot function 
 Key: HDFS-10944
 URL: https://issues.apache.org/jira/browse/HDFS-10944
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jagadesh Kiran N
Assignee: Jagadesh Kiran N
Priority: Minor


In DFSAdmin.java, the javadoc for the disallowSnapshot function is incorrect:

{code}
 /**
   * Allow snapshot on a directory.
   * Usage: hdfs dfsadmin -disallowSnapshot snapshotDir
   * @param argv List of of command line parameters.
   * @exception IOException
   */
{code}

Change "Allow snapshot" to "Disallow snapshot" (a sketch of the corrected 
javadoc follows).
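
A sketch of the corrected javadoc (also dropping the duplicated "of"):

{code}
  /**
   * Disallow snapshot on a directory.
   * Usage: hdfs dfsadmin -disallowSnapshot snapshotDir
   * @param argv List of command line parameters.
   * @exception IOException
   */
{code}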



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-10923) Make InstrumentedLock require ReentrantLock

2016-09-30 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-10923:
--

> Make InstrumentedLock require ReentrantLock
> ---
>
> Key: HDFS-10923
> URL: https://issues.apache.org/jira/browse/HDFS-10923
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10923.01.patch, HDFS-10923.02.patch, 
> HDFS-10923.03.patch
>
>
> Make InstrumentedLock use ReentrantLock instead of Lock, so nested 
> acquire/release calls can be instrumented correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org