[jira] [Created] (HDFS-15472) Erasure Coding: Support fallback read when zero copy is not supported

2020-07-15 Thread dzcxzl (Jira)
dzcxzl created HDFS-15472:
-

 Summary: Erasure Coding: Support fallback read when zero copy is 
not supported
 Key: HDFS-15472
 URL: https://issues.apache.org/jira/browse/HDFS-15472
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: dzcxzl


[HDFS-8203|https://issues.apache.org/jira/browse/HDFS-8203] 

EC does not support zero-copy read, but it also does not currently support 
falling back to a regular read; instead it throws an exception.
{code:java}
Caused by: java.lang.UnsupportedOperationException: Not support enhanced byte 
buffer access.
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.read(DFSStripedInputStream.java:524)
at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:188)
at 
org.apache.hadoop.hive.shims.ZeroCopyShims$ZeroCopyAdapter.readBuffer(ZeroCopyShims.java:79)
{code}
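
For illustration only (this is not the proposed fix): below is the kind of 
fallback a caller such as Hive's ZeroCopyAdapter has to implement by hand 
today, and which this Jira asks the striped input stream to handle 
internally. Attempt the enhanced (zero-copy) read and, if it is not 
supported, fall back to a plain copying read.
{code:java}
// Hedged sketch only, not the actual patch: a client-side fallback when the
// underlying stream (e.g. DFSStripedInputStream for EC files) does not
// support the enhanced/zero-copy byte buffer read.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.EnumSet;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.ReadOption;
import org.apache.hadoop.io.ByteBufferPool;

public class ZeroCopyFallbackExample {
  public static ByteBuffer readWithFallback(FSDataInputStream in,
      ByteBufferPool pool, int maxLength, EnumSet<ReadOption> opts)
      throws IOException {
    try {
      // Zero-copy path; DFSStripedInputStream currently throws
      // UnsupportedOperationException here for erasure-coded files.
      return in.read(pool, maxLength, opts);
    } catch (UnsupportedOperationException e) {
      // Fallback path: a plain copying read into a heap buffer.
      ByteBuffer buf = ByteBuffer.allocate(maxLength);
      int n = in.read(buf);
      if (n < 0) {
        return null; // end of stream
      }
      buf.flip();
      return buf;
    }
  }
}
{code}
Note that a buffer returned by the zero-copy path still has to be released 
via releaseBuffer(); the heap buffer allocated in the fallback path does not.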
 






Re: [RESULT][VOTE] Release Apache Hadoop-3.3.0

2020-07-15 Thread Stephen O'Donnell
Hi All,

Sorry for being a bit late to this, but I wonder if we have a potential
blocker to this release.

At Cloudera we have recently encountered a serious data loss issue in HDFS
surrounding snapshots. To hit the data loss issue, you must have HDFS-13101
and HDFS-15012 on the build (which branch-3.3.0 does). To prevent it, you
must also have HDFS-15313, and unfortunately this was only committed to
trunk, so we need to cherry-pick it down to the active branches.

With data loss being a serious issue, should we pull this Jira into
branch-3.3.0 and cut a new release candidate?

Thanks,

Stephen.

On Tue, Jul 14, 2020 at 1:22 PM Brahma Reddy Battula 
wrote:

> Hi All,
>
> With 8 binding and 11 non-binding +1s and no -1s the vote for Apache
> hadoop-3.3.0 Release
> passes.
>
> Thank you everybody for contributing to the release, testing, and voting.
>
> Special thanks whoever verified the ARM Binary as this is the first release
> to support the ARM in hadoop.
>
>
> Binding +1s
>
> =
> Akira Ajisaka
> Vinayakumar B
> Inigo Goiri
> Surendra Singh Lilhore
> Masatake Iwasaki
> Rakesh Radhakrishnan
> Eric Badger
> Brahma Reddy Battula
>
> Non-binding +1s
>
> =
> Zhenyu Zheng
> Sheng Liu
> Yikun Jiang
> Tianhua huang
> Ayush Saxena
> Hemanth Boyina
> Bilwa S T
> Takanobu Asanuma
> Xiaoqiao He
> CR Hota
> Gergely Pollak
>
> I'm going to work on staging the release.
>
>
> The voting thread is:
>
>  https://s.apache.org/hadoop-3.3.0-Release-vote-thread
>
>
>
> --Brahma Reddy Battula
>


Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-07-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/748/

[Jul 14, 2020 6:33:56 PM] (ebadger) YARN-10348. Allow RM to always cancel 
tokens after app completes.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 664] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 741] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 359] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 
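
For context on the repeated "keySet iterator instead of entrySet iterator" 
warnings above, a small generic illustration of the pattern findbugs flags 
and the usual fix (illustrative only, unrelated to the Hadoop sources being 
reported on):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class KeySetVsEntrySet {
  public static void main(String[] args) {
    Map<String, Integer> counts = new HashMap<>();
    counts.put("a", 1);
    counts.put("b", 2);

    // Inefficient: iterating keySet() and calling get() does a second map
    // lookup per key. This is what findbugs flags.
    for (String key : counts.keySet()) {
      System.out.println(key + "=" + counts.get(key));
    }

    // Preferred: iterate entrySet() so key and value come from one traversal.
    for (Map.Entry<String, Integer> e : counts.entrySet()) {
      System.out.println(e.getKey() + "=" + e.getValue());
    }
  }
}
{code}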

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 664] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 741] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 359] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
Col

Re: [RESULT][VOTE] Release Apache Hadoop-3.3.0

2020-07-15 Thread Brahma Reddy Battula
Hi Stephen,

Thanks for bringing this to my attention.

It looks like it's too late: I have already pushed the release tag (which
can't be reverted) and updated the release date in the Jira.

Can we plan the next release in the near future to address this?


On Wed, Jul 15, 2020 at 5:25 PM Stephen O'Donnell
 wrote:

> Hi All,
>
> Sorry for being a bit late to this, but I wonder if we have a potential
> blocker to this release.
>
> In Cloudera we have recently encountered a serious dataloss issue in HDFS
> surrounding snapshots. To hit the dataloss issue, you must have HDFS-13101
> and HDFS-15012 on the build (which branch-3.3.0 does). To prevent it, you
> must also have HDFS-15313 and unfortunately, this was only committed to
> trunk, so we need to cherry-pick it down the active branches.
>
> With data loss being a serious issue, should we pull this Jira into
> branch-3.3.0 and cut a new release candidate?
>
> Thanks,
>
> Stephen.
>
> On Tue, Jul 14, 2020 at 1:22 PM Brahma Reddy Battula 
> wrote:
>
> > Hi All,
> >
> > With 8 binding and 11 non-binding +1s and no -1s the vote for Apache
> > hadoop-3.3.0 Release
> > passes.
> >
> > Thank you everybody for contributing to the release, testing, and voting.
> >
> > Special thanks whoever verified the ARM Binary as this is the first
> release
> > to support the ARM in hadoop.
> >
> >
> > Binding +1s
> >
> > =
> > Akira Ajisaka
> > Vinayakumar B
> > Inigo Goiri
> > Surendra Singh Lilhore
> > Masatake Iwasaki
> > Rakesh Radhakrishnan
> > Eric Badger
> > Brahma Reddy Battula
> >
> > Non-binding +1s
> >
> > =
> > Zhenyu Zheng
> > Sheng Liu
> > Yikun Jiang
> > Tianhua huang
> > Ayush Saxena
> > Hemanth Boyina
> > Bilwa S T
> > Takanobu Asanuma
> > Xiaoqiao He
> > CR Hota
> > Gergely Pollak
> >
> > I'm going to work on staging the release.
> >
> >
> > The voting thread is:
> >
> >  https://s.apache.org/hadoop-3.3.0-Release-vote-thread
> >
> >
> >
> > --Brahma Reddy Battula
> >
>


-- 
--Brahma Reddy Battula


[jira] [Reopened] (HDFS-15313) Ensure inodes in active filesystem are not deleted during snapshot delete

2020-07-15 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell reopened HDFS-15313:
--

Reopening to see if we can trigger jenkins on the 3.1 and 2.10 patches.

> Ensure inodes in active filesystem are not deleted during snapshot delete
> 
>
> Key: HDFS-15313
> URL: https://issues.apache.org/jira/browse/HDFS-15313
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HDFS-15313.000.patch, HDFS-15313.001.patch, 
> HDFS-15313.branch-2.8.patch, HDFS-15313.branch-3.1.patch
>
>
> After HDFS-13101, it was observed in one of our customer deployments that 
> deleting a snapshot can end up cleaning up inodes from the active fs that are 
> referred to from only one snapshot, because the isLastReference() check for 
> the parent dir introduced in HDFS-13101 may return true in certain cases. The 
> aim of this Jira is to add a check to ensure that inodes still referred to 
> from the active fs do not get deleted when a snapshot is deleted.
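
As a rough, self-contained sketch of the invariant this Jira asks for (the 
type and method names below are hypothetical and illustrative only, not HDFS 
internals or the actual patch): snapshot deletion should only reclaim inodes 
that are not still reachable from the active filesystem.
{code:java}
import java.util.HashSet;
import java.util.Set;

public class SnapshotDeleteGuardSketch {
  // Inode ids currently reachable from the active filesystem (illustrative).
  private final Set<Long> activeFsInodes = new HashSet<>();

  /**
   * Given the inodes referenced by a snapshot being deleted, return only the
   * ones that are safe to reclaim, i.e. not reachable from the active fs.
   */
  public Set<Long> reclaimableOnSnapshotDelete(Set<Long> snapshotInodes) {
    Set<Long> reclaimable = new HashSet<>();
    for (long id : snapshotInodes) {
      // The bug described above: deciding purely on "is this the last
      // reference?" can wrongly reclaim an inode the active fs still uses.
      if (!activeFsInodes.contains(id)) {
        reclaimable.add(id);
      }
    }
    return reclaimable;
  }
}
{code}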






[jira] [Created] (HDFS-15473) Add listSnapshots command to list all snapshots

2020-07-15 Thread Hemanth Boyina (Jira)
Hemanth Boyina created HDFS-15473:
-

 Summary: Add listSnapshots command to list all snapshots
 Key: HDFS-15473
 URL: https://issues.apache.org/jira/browse/HDFS-15473
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Hemanth Boyina
Assignee: Hemanth Boyina


In a cluster where snapshots are heavily used, it would be beneficial to have a 
command to list all the snapshots under the root.
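
For context, a rough sketch of what a user can do today through the Java 
client API, assuming the default filesystem is HDFS: list the snapshottable 
directories and then walk each directory's .snapshot path. A single 
listSnapshots command would avoid this per-directory walk. Illustrative only, 
not the proposed command.
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;

public class ListSnapshottableDirs {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    // Lists the directories on which snapshots are allowed; the snapshots
    // themselves still have to be discovered under <dir>/.snapshot, which is
    // the gap a listSnapshots command would close.
    SnapshottableDirectoryStatus[] dirs = dfs.getSnapshottableDirListing();
    if (dirs != null) {
      for (SnapshottableDirectoryStatus s : dirs) {
        System.out.println(s.getFullPath());
      }
    }
  }
}
{code}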






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-15 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/204/

[Jul 14, 2020 1:07:27 PM] (noreply) HADOOP-16998. WASB : 
NativeAzureFsOutputStream#close() throwing IllegalArgumentException (#2073)
[Jul 14, 2020 1:42:12 PM] (noreply) HDFS-15371. Nonstandard characters exist in 
NameNode.java (#2032)
[Jul 14, 2020 2:27:35 PM] (Steve Loughran) HADOOP-17022. Tune 
S3AFileSystem.listFiles() API.
[Jul 14, 2020 6:22:25 PM] (Erik Krogen) HADOOP-17127. Use RpcMetrics.TIMEUNIT 
to initialize rpc queueTime and processingTime. Contributed by Jim Brennan.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 
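
To clarify the two warning types listed above ("Uncallable method defined in 
anonymous class" and "Dead store"), a small generic example, unrelated to the 
Hadoop sources being reported on:
{code:java}
public class FindbugsPatternsExample {
  public static void main(String[] args) {
    Runnable r = new Runnable() {
      @Override
      public void run() {
        System.out.println("running");
      }

      // "Uncallable method defined in anonymous class": this method is not
      // part of Runnable and the anonymous type has no name, so no outside
      // caller can ever reach it.
      Object getInstance() {
        return this;
      }
    };
    r.run();

    // "Dead store": the value assigned here is never read before being
    // overwritten, so the first call's result is silently discarded.
    int entities = computeEntities();
    entities = 0;
    System.out.println(entities);
  }

  private static int computeEntities() {
    return 42;
  }
}
{code}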

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-yarn-project 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputSt