[jira] [Created] (HDFS-11633) FSImage load fallback may disable all erasure coding policies

2017-04-07 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-11633:
--

 Summary: FSImage load fallback may disable all erasure coding 
policies 
 Key: HDFS-11633
 URL: https://issues.apache.org/jira/browse/HDFS-11633
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding, namenode
Affects Versions: 3.0.0-alpha3
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Critical


If the NameNode fails to load the fsimage from the first metadata directory, 
it accidentally clears all enabled erasure coding policies in 
ErasureCodingPolicyManager.

Even if the NameNode is configured with multiple fsimage metadata directories 
and successfully loads the image from a subsequent one, the enabled erasure 
coding policies are not restored.

In the current implementation there is no ErasureCodingPolicyManager section 
in the fsimage, so reloading the fsimage does not reload the ECPM state.

The easiest fix, until an ECPM section is implemented, is to not clear the 
ECPM when FSNamesystem is cleared.
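
A minimal sketch of the proposed workaround is shown below (a hedged 
illustration only; the method and field names are assumptions, not the exact 
FSNamesystem internals). The point is simply that the ECPM is left intact 
while the rest of the per-load namesystem state is reset before the fallback 
load:

{code:java}
// Hedged sketch (illustrative names): reset FSNamesystem state before retrying
// the fsimage load from the next metadata directory, but leave the
// ErasureCodingPolicyManager alone so the enabled policies survive the fallback.
void clear() {
  dir.reset();                           // drop the partially loaded namespace
  leaseManager.removeAllLeases();        // drop lease state tied to it
  // ... other per-load state is reset here ...
  // erasureCodingPolicyManager.clear(); // intentionally skipped: there is no
  // ECPM section in the fsimage yet, so a successful retry cannot restore it.
}
{code}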






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-04-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/

[Apr 6, 2017 9:24:36 PM] (jlowe) YARN-6288. Exceptions during aggregated log 
writes are mishandled.
[Apr 6, 2017 9:33:16 PM] (xyao) HDFS-11362. StorageDirectory should initialize 
a non-null default
[Apr 6, 2017 11:11:55 PM] (xyao) HDFS-11608. HDFS write crashed with block size 
greater than 2 GB.
[Apr 6, 2017 11:54:43 PM] (mingma) YARN-5797. Add metrics to the node manager 
for cleaning the PUBLIC and
[Apr 6, 2017 11:59:21 PM] (zhz) HADOOP-14276. Add a nanosecond API to 
Time/Timer/FakeTimer. Contributed
[Apr 7, 2017 12:44:47 AM] (rkanter) MAPREDUCE-6201. TestNetworkedJob fails on 
trunk (pbacsko via rkanter)
[Apr 7, 2017 6:12:50 AM] (sunilg) YARN-6258. localBaseAddress for CORS proxy 
configuration is not working




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.tools.TestHadoopArchiveLogsRunner 
   hadoop.metrics2.impl.TestKafkaMetrics 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-compile-root.txt
  [136K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-compile-root.txt
  [136K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-compile-root.txt
  [136K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [140K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [512K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/281/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-04-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/

[Apr 6, 2017 9:24:36 PM] (jlowe) YARN-6288. Exceptions during aggregated log 
writes are mishandled.
[Apr 6, 2017 9:33:16 PM] (xyao) HDFS-11362. StorageDirectory should initialize 
a non-null default
[Apr 6, 2017 11:11:55 PM] (xyao) HDFS-11608. HDFS write crashed with block size 
greater than 2 GB.
[Apr 6, 2017 11:54:43 PM] (mingma) YARN-5797. Add metrics to the node manager 
for cleaning the PUBLIC and
[Apr 6, 2017 11:59:21 PM] (zhz) HADOOP-14276. Add a nanosecond API to 
Time/Timer/FakeTimer. Contributed
[Apr 7, 2017 12:44:47 AM] (rkanter) MAPREDUCE-6201. TestNetworkedJob fails on 
trunk (pbacsko via rkanter)
[Apr 7, 2017 6:12:50 AM] (sunilg) YARN-6258. localBaseAddress for CORS proxy 
configuration is not working




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ipc.TestRPCWaitForProxy 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.hdfs.TestNNBench 
   hadoop.tools.TestDistCpSystem 
   hadoop.tools.TestHadoopArchiveLogsRunner 
   hadoop.metrics2.impl.TestKafkaMetrics 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/diff-compile-javac-root.txt
  [184K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/whitespace-tabs.txt
  [1.2M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [392K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [64K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/369/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [88K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-ja

[jira] [Created] (HDFS-11634) Optimize BlockIterator when iterating starts in the middle.

2017-04-07 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-11634:
--

 Summary: Optimize BlockIterator when iterating starts in the 
middle.
 Key: HDFS-11634
 URL: https://issues.apache.org/jira/browse/HDFS-11634
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.5
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko


{{BlockManager.getBlocksWithLocations()}} needs to iterate blocks starting from 
a randomly selected {{startBlock}} index. It creates an iterator that points to 
the first block and then skips all blocks until {{startBlock}}. This is 
inefficient when the DataNode has multiple storages. Instead of skipping blocks 
one by one, we can skip entire storages, which should be more efficient on 
average.
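
A rough sketch of the skip-by-storage idea follows (a hedged illustration; 
{{DatanodeStorageInfo#numBlocks()}} and the surrounding iterator shape are 
assumptions for this sketch, not the final patch):

{code:java}
// Hedged sketch: rather than advancing the block iterator one block at a time
// until startBlock, jump over whole storages whose block counts fall entirely
// before the starting index, then begin iterating inside the right storage.
int remaining = startBlock;
int storageIndex = 0;
for (DatanodeStorageInfo storage : node.getStorageInfos()) {
  int blocksInStorage = storage.numBlocks();
  if (remaining >= blocksInStorage) {
    remaining -= blocksInStorage;   // whole storage precedes startBlock: skip it
    storageIndex++;
  } else {
    break;                          // startBlock falls inside this storage
  }
}
// Iteration then starts at (storageIndex, remaining) in O(#storages) steps,
// instead of O(startBlock) single-block skips from the first block.
{code}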


