[jira] [Created] (HDFS-10867) Block Bit Field Allocation of Provided Storage

2016-09-16 Thread Ewan Higgs (JIRA)
Ewan Higgs created HDFS-10867:
-

 Summary: Block Bit Field Allocation of Provided Storage
 Key: HDFS-10867
 URL: https://issues.apache.org/jira/browse/HDFS-10867
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: Ewan Higgs


We wish to design and implement the following related features for provided 
storage:
# Dynamic mounting of provided storage within a Namenode (mount, unmount).
# Mounting multiple provided storage systems on a single Namenode.
# Supporting updates to the provided storage system without having to 
regenerate an fsimage.

A mount in the namespace addresses a corresponding set of block data. When 
unmounted, any block data associated with the mount becomes invalid and 
(eventually) unaddressable in HDFS. As with erasure-coded blocks, efficient 
unmounting requires that all blocks with that attribute be identifiable by the 
block management layer.

In this subtask, we focus on changes and conventions to the block management 
layer. Namespace operations are covered in a separate subtask.
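As an illustration only (not the layout this subtask will necessarily adopt): 
one way to make provided blocks identifiable by the block management layer is 
to reserve a flag bit and a small mount-ID field inside the 64-bit block ID, 
much as striped (erasure-coded) blocks are already distinguished. The field 
widths, positions and names below are assumptions.

{code:java}
// Hypothetical block-ID bit-field layout, for illustration only; widths,
// positions and names are assumptions, not the design proposed by this JIRA.
public final class ProvidedBlockIdLayoutSketch {
  // HDFS already uses the sign bit (bit 63) to mark striped blocks; here we
  // assume bit 62 marks blocks backed by provided storage.
  private static final long PROVIDED_FLAG = 1L << 62;
  // Assume an 8-bit mount identifier directly below the flag bit, so block
  // management can find (and invalidate) all blocks of an unmounted store.
  private static final int MOUNT_ID_BITS = 8;
  private static final int MOUNT_ID_SHIFT = 62 - MOUNT_ID_BITS; // = 54
  private static final long MOUNT_ID_MASK =
      ((1L << MOUNT_ID_BITS) - 1) << MOUNT_ID_SHIFT;

  static boolean isProvided(long blockId) {
    return (blockId & PROVIDED_FLAG) != 0;
  }

  static int mountId(long blockId) {
    return (int) ((blockId & MOUNT_ID_MASK) >>> MOUNT_ID_SHIFT);
  }

  static long newProvidedBlockId(int mountId, long sequence) {
    // The low bits carry an ordinary allocation sequence within the mount.
    return PROVIDED_FLAG | ((long) mountId << MOUNT_ID_SHIFT) | sequence;
  }
}
{code}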



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-09-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/

[Sep 15, 2016 2:15:11 PM] (Arun Suresh) YARN-5620. Core changes in NodeManager 
to support re-initialization of
[Sep 15, 2016 2:42:34 PM] (aajisaka) HADOOP-13616. Broken code snippet area in 
Hadoop Benchmarking.
[Sep 15, 2016 5:18:56 PM] (sjlee) Revert "HADOOP-13410. RunJar adds the content 
of the jar twice to the
[Sep 15, 2016 10:25:47 PM] (arp) HDFS-9895. Remove unnecessary conf cache from 
DataNode. Contributed by




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.yarn.server.nodemanager.TestDefaultContainerExecutor 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/166/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Reopened] (HDFS-9895) Remove unnecessary conf cache from DataNode

2016-09-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-9895:
-

> Remove unnecessary conf cache from DataNode
> ---
>
> Key: HDFS-9895
> URL: https://issues.apache.org/jira/browse/HDFS-9895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9895-HDFS-9000-branch-2.003.patch, 
> HDFS-9895-HDFS-9000.002.patch, HDFS-9895-HDFS-9000.003.patch, 
> HDFS-9895.000.patch, HDFS-9895.001.patch
>
>
> Since DataNode inherits from ReconfigurableBase, whose Configured base class 
> already maintains the configuration, the redundant DataNode#conf field should 
> be removed for brevity.
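
For context, a minimal sketch of the pattern this cleanup points at, assuming 
only the standard {{Configured#getConf()}} accessor that ReconfigurableBase 
inherits; the class and config lookup below are illustrative, not the actual 
patch.

{code:java}
import org.apache.hadoop.conf.Configured;

// Illustrative only -- not the HDFS-9895 patch. ReconfigurableBase extends
// Configured, so the inherited getConf()/setConf() already hold the live
// Configuration; a separately cached DataNode#conf field is redundant.
public class ConfAccessSketch extends Configured {
  // Before: private Configuration conf;  // duplicate cache that can go stale

  int heartbeatIntervalSeconds() {
    // After: read through the inherited accessor instead of a cached field.
    // The key "dfs.heartbeat.interval" and default of 3 are for illustration.
    return getConf().getInt("dfs.heartbeat.interval", 3);
  }
}
{code}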



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-09-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/96/

[Sep 15, 2016 10:25:47 PM] (arp) HDFS-9895. Remove unnecessary conf cache from 
DataNode. Contributed by
[Sep 16, 2016 7:08:47 AM] (aajisaka) HDFS-10862. Typos in 4 log messages. 
Contributed by Mehran Hassani.




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ipc.TestRPCWaitForProxy 
   hadoop.hdfs.TestDFSRemove 
   hadoop.hdfs.server.datanode.TestDataNodeLifeline 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestDefaultContainerExecutor 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys 
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters 
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.mapred.TestMRIntermediateDataEncryption 
   org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers 
   org.apache.hadoop.mapred.TestMROpportunisticMaps 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/96/artifact/out/patch-compile-root.txt
  [308K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/96/artifact/out/patch-compile-root.txt
  [308K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/96/artifact/out/patch-compile-root.txt
  [308K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/96/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/96/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [196K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/96/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/96/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/96/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/96/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]

[jira] [Created] (HDFS-10868) Remove stray references to DFS_HDFS_BLOCKS_METADATA_ENABLED

2016-09-16 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-10868:
--

 Summary: Remove stray references to 
DFS_HDFS_BLOCKS_METADATA_ENABLED
 Key: HDFS-10868
 URL: https://issues.apache.org/jira/browse/HDFS-10868
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial


We missed a few stray references to this config key when removing this API; 
let's clean them up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10777) DataNode should report&remove volume failures if DU cannot access files

2016-09-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-10777.

Resolution: Invalid

Closing this jira as invalid. I'll file an improvement jira to add logging or 
a metric when DataNode disks become flaky.

> DataNode should report&remove volume failures if DU cannot access files
> ---
>
> Key: HDFS-10777
> URL: https://issues.apache.org/jira/browse/HDFS-10777
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10777.01.patch
>
>
> HADOOP-12973 refactored DU and made it pluggable. The refactoring has a 
> side effect: if DU encounters an exception, the exception is caught, logged 
> and ignored, which essentially fixes HDFS-9908 (where runaway exceptions 
> prevented DataNodes from handshaking with NameNodes).
> However, this "fix" is not good enough: if the disk is bad, the DataNode 
> takes no immediate action other than logging the exception. The existing 
> {{FsDatasetSpi#checkDataDir}} has been reduced to blindly checking only a 
> small number of directories. When a disk goes bad, often only a few files 
> are bad initially, so checking only a small number of directories makes it 
> easy to overlook the degraded disk.
> I propose that, in addition to logging the exception, the DataNode should 
> proactively verify that the files are in fact inaccessible, remove the 
> volume, and make the failure visible in JMX, so that administrators can 
> spot it via their monitoring systems.
> A different fix, based on HDFS-9908, is needed before Hadoop 2.8.0.
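
A rough, self-contained sketch of the proposed behaviour; the class and helper 
names are invented for illustration and are not existing DataNode or 
FsDatasetSpi APIs.

{code:java}
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of the proposal above; none of these methods are the
// real DataNode/FsDatasetSpi APIs.
public class VolumeFailureSketch {
  /** Invoked when the pluggable DU implementation throws, instead of only logging. */
  void onDuFailure(File volumeRoot, IOException duError) {
    System.err.println("du failed on " + volumeRoot + ": " + duError);
    if (!isAccessible(volumeRoot)) {
      removeVolume(volumeRoot);          // proposed: take the bad volume out of service
      publishFailureToJmx(volumeRoot);   // proposed: surface the failure for monitoring
    }
  }

  private boolean isAccessible(File root) {
    // Cheap spot check: the volume root must still be readable and listable.
    return root.canRead() && root.listFiles() != null;
  }

  private void removeVolume(File root) { /* placeholder: volume removal */ }

  private void publishFailureToJmx(File root) { /* placeholder: JMX metric/bean update */ }
}
{code}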



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [Release thread] 2.6.5 release activities

2016-09-16 Thread Chris Trezzo
We have now cut branch-2.6.5.

On Wed, Sep 14, 2016 at 4:38 PM, Sangjin Lee  wrote:

> We ported 16 issues to branch-2.6. We will go ahead and start the release
> process, including cutting the release branch. If you have any critical
> change that should be made part of 2.6.5, please reach out to us and commit
> the changes. Thanks!
>
> Sangjin
>
> On Mon, Sep 12, 2016 at 3:24 PM, Sangjin Lee  wrote:
>
>> Thanks Chris!
>>
>> I'll help Chris to get those JIRAs marked in his spreadsheet committed.
>> We'll cut the release branch shortly after that. If you have any critical
>> change that should be made part of 2.6.5 (CVE patches included), please
>> reach out to us and commit the changes. If all things go well, we'd like to
>> cut the branch in a few days.
>>
>> Thanks,
>> Sangjin
>>
>> On Fri, Sep 9, 2016 at 1:24 PM, Chris Trezzo  wrote:
>>
>>> Hi all,
>>>
>>> I wanted to give an update on the Hadoop 2.6.5 release efforts.
>>>
>>> Here is what has been done so far:
>>>
>>> 1. I have gone through all of the potential backports and recorded the
>>> commit hashes for each of them from the branch that seems the most
>>> appropriate (i.e. if there was a backport to 2.7.x then I used the hash
>>> from the backport).
>>>
>>> 2. I verified if the cherry pick for each commit is clean. This was best
>>> effort as some of the patches are in parts of the code that I am less
>>> familiar with. This is recorded in the public spread sheet here:
>>> https://docs.google.com/spreadsheets/d/1lfG2CYQ7W4q3olWpOCo6EBAey1WYC8hTRUemHvYPPzY/edit?usp=sharing
>>>
>>> I am going to need help from committers to get these backports committed.
>>> If there are any committers that have some spare cycles, especially if you
>>> were involved with the initial commit for one of these issues, please look
>>> at the spreadsheet and volunteer to backport one of the issues.
>>>
>>> As always, please let me know if you have any questions or feel that I have
>>> missed something.
>>>
>>> Thank you!
>>> Chris Trezzo
>>>
>>> On Mon, Aug 15, 2016 at 10:55 AM, Allen Wittenauer <
>>> a...@effectivemachines.com
>>> > wrote:
>>>
>>> >
>>> > > On Aug 12, 2016, at 8:19 AM, Junping Du  wrote:
>>> > >
>>> > > In this community, we are so aggressive about dropping Java 7 support
>>> > > in the 3.0.x release. Why, then, are we so conservative about releasing
>>> > > new bits that still support Java 6?
>>> >
>>> > I don't view a group of people putting bug fixes into a micro
>>> > release as particularly conservative.  If a group within the community
>>> > wasn't interested in doing it, 2.6.5 wouldn't be happening.
>>> >
>>> > But let's put the releases into context, because I think it tells
>>> > a more interesting story.
>>> >
>>> > * hadoop 2.6.x = EOLed JREs (6,7)
>>> > * hadoop 2.7 -> hadoop 2.x = transitional (7,8)
>>> > * hadoop 3.x = JRE 8
>>> > * hadoop 4.x = JRE 9
>>> >
>>> > There are groups of people still using JDK6 and they want bug
>>> > fixes in a maintenance release.  Boom, there's 2.6.x.
>>> >
>>> > Hadoop 3.x has been pushed off for years for "reasons".  So we
>>> > still have releases coming off of branch-2.  If 2.7 had been released as
>>> > 3.x, this chart would look less weird. But it wasn't, so 2.x has this
>>> > weird wart in the middle that supports JDK7 and JDK8.  Given the public
>>> > policy and roadmaps of at least one major vendor at the time of this
>>> > writing, we should expect to see JDK7 support for at least the next two
>>> > years after 3.x appears. Bang, there's 2.x, where x is some large number.
>>> >
>>> > Then there is the future.  People using JRE 8 want to use newer
>>> > dependencies.  A reasonable request. Some of these dependency updates
>>> > won't work with JRE 7.  We can't do that in hadoop 2.x in any sort of
>>> > compatible way without breaking the universe. (Tons of JIRAs on this
>>> > point.) This means we can only do it in 3.x (re: Hadoop Compatibility
>>> > Guidelines).  Kapow, there's 3.x
>>> >
>>> > The log4j community has stated that v1 won't work with JDK9. In
>>> > turn, this means we'll need to upgrade to v2 at some point.  Upgrading to
>>> > v2 will break the log4j properties file (and maybe other things?).
>>> > Another incompatible change and it likely won't appear until Apache
>>> > Hadoop v4 unless someone takes the initiative to fix it before v3 hits
>>> > store shelves.  This makes JDK9 the likely target for Apache Hadoop v4.
>>> >
>>> > Having major release cadences tied to JRE updates isn't
>>> > necessarily a bad thing: it a) forces the community to actually stop
>>> > beating around the bush on majors and b) makes it relatively easy to
>>> > determine what the schedule looks like, to some degree.
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > -
>>> > To