[jira] [Resolved] (HDFS-11236) Erasure Coding can't support appendToFile

2016-12-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu resolved HDFS-11236.
---
Resolution: Duplicate

> Erasure Coding can't support appendToFile
> --
>
> Key: HDFS-11236
> URL: https://issues.apache.org/jira/browse/HDFS-11236
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: gehaijiang
>
> hadoop 3.0.0-alpha1
> $  hdfs erasurecode -getPolicy /ectest/workers
> ErasureCodingPolicy=[Name=RS-DEFAULT-6-3-64k, 
> Schema=[ECSchema=[Codec=rs-default, numDataUnits=6, numParityUnits=3]], 
> CellSize=65536 ]
> $  hadoop fs  -appendToFile  hadoop/etc/hadoop/httpfs-env.sh  /ectest/workers
> appendToFile: Cannot append to files with striped block /ectest/workers






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-12-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/

[Dec 12, 2016 1:58:27 PM] (stevel) HADOOP-13852 hadoop build to allow hadoop 
version property to be
[Dec 12, 2016 10:55:34 PM] (liuml07) HADOOP-13871.
[Dec 13, 2016 2:11:15 AM] (aajisaka) HDFS-11233. Fix javac warnings related to 
the deprecated APIs after
[Dec 13, 2016 2:21:15 AM] (liuml07) HDFS-11226. cacheadmin, cryptoadmin and 
storagepolicyadmin should
[Dec 13, 2016 5:22:07 AM] (aajisaka) MAPREDUCE-6821. Fix javac warning related 
to the deprecated APIs after




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.token.delegation.web.TestWebDelegationToken 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/diff-compile-javac-root.txt
  [164K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/diff-patch-shellcheck.txt
  [28K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/whitespace-tabs.txt
  [1.3M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [128K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [152K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [316K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/254/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (HDFS-11240) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2016-12-13 Thread Vishwajeet Dusane (JIRA)
Vishwajeet Dusane created HDFS-11240:


 Summary: Remove snapshot version of SDK dependency from Azure Data 
Lake Store File System
 Key: HDFS-11240
 URL: https://issues.apache.org/jira/browse/HDFS-11240
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Vishwajeet Dusane









Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-12-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/185/

[Dec 12, 2016 1:58:27 PM] (stevel) HADOOP-13852 hadoop build to allow hadoop 
version property to be
[Dec 12, 2016 10:55:34 PM] (liuml07) HADOOP-13871.
[Dec 13, 2016 2:11:15 AM] (aajisaka) HDFS-11233. Fix javac warnings related to 
the deprecated APIs after
[Dec 13, 2016 2:21:15 AM] (liuml07) HDFS-11226. cacheadmin, cryptoadmin and 
storagepolicyadmin should
[Dec 13, 2016 5:22:07 AM] (aajisaka) MAPREDUCE-6821. Fix javac warning related 
to the deprecated APIs after




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.token.delegation.web.TestWebDelegationToken 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestEncryptionZonesWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys 
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters 
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/185/artifact/out/patch-compile-root.txt
  [164K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/185/artifact/out/patch-compile-root.txt
  [164K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/185/artifact/out/patch-compile-root.txt
  [164K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/185/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [128K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/185/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/185/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [208K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/185/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/185/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/185/artifact/out/patch-unit-hadoo

Question regarding streamer thread implementation at DFSOutputStream.java

2016-12-13 Thread wang shu
Hi all,

I have a question regarding the streamer thread implementation (lines 382-575) in DFSOutputStream.java:
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hadoop/hadoop-hdfs/2.7.1/org/apache/hadoop/hdfs/DFSOutputStream.java#DFSOutputStream.DataStreamer.run%28%29


What is the logic of lines 468-485 and lines 535-548? Why does the code check
twice whether the packet is the last one in the block, and why are the two
comments different: one says "// wait for all data packets have been
successfully acked" and the other says "// wait for the close packet has been
acked"? What is the close packet?

My understanding is:
If a packet is not the last one in the block, it is sent directly and there is
no need to wait for an ACK.
If a packet is the last one in the block, the streamer first waits for all
previously sent packets to be ACKed, then sends the last packet of the block
(called the close packet, I guess?), and finally waits for that last packet to
be ACKed.
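
Here is a minimal, self-contained sketch of the ordering I believe those two
wait points enforce. This is plain Java with made-up names, not Hadoop's
actual DataStreamer code, so please treat it only as an illustration of my
understanding:

import java.util.ArrayDeque;
import java.util.Queue;

// Simplified, hypothetical sketch (NOT Hadoop's DataStreamer) of the two
// wait points around the last packet of a block.
public class StreamerOrderingSketch {
    private final Queue<Integer> ackQueue = new ArrayDeque<>();

    // Called by the streamer thread for every packet in the block.
    public synchronized void send(int seqno, boolean lastPacketInBlock)
            throws InterruptedException {
        if (lastPacketInBlock) {
            // First wait point: all previously sent data packets must be
            // acked before the final packet that closes the block goes out.
            while (!ackQueue.isEmpty()) {
                wait();
            }
        }
        ackQueue.add(seqno);   // "send" the packet downstream
        if (lastPacketInBlock) {
            // Second wait point: wait for the close packet itself to be
            // acked so the block can be finalized on the datanodes.
            while (ackQueue.contains(seqno)) {
                wait();
            }
        }
    }

    // Called by a (simulated) response processor when an ack comes back.
    public synchronized void ack(int seqno) {
        ackQueue.remove(seqno);
        notifyAll();
    }
}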

Thanks a lot for your help.


Best,

Shu


[jira] [Created] (HDFS-11241) Add common log rotation settings to YARN Configs

2016-12-13 Thread Madhuvanthi Radhakrishnan (JIRA)
Madhuvanthi Radhakrishnan created HDFS-11241:


 Summary: Add common log rotation settings to YARN Configs
 Key: HDFS-11241
 URL: https://issues.apache.org/jira/browse/HDFS-11241
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Madhuvanthi Radhakrishnan


Add common log rotation settings to YARN.
log4j.appender.RMSUMMARY.MaxFileSize
log4j.appender.RMSUMMARY.MaxBackupIndex
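
For illustration, the settings might look like this in the RM's
log4j.properties; the values here are hypothetical placeholders, not proposed
defaults:

# Hypothetical values for illustration only; actual defaults to be decided.
log4j.appender.RMSUMMARY.MaxFileSize=256MB
log4j.appender.RMSUMMARY.MaxBackupIndex=20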






[jira] [Created] (HDFS-11242) Add refresh cluster topology operation to dfs admin

2016-12-13 Thread Reid Chan (JIRA)
Reid Chan created HDFS-11242:


 Summary: Add refresh cluster topology operation to dfs admin
 Key: HDFS-11242
 URL: https://issues.apache.org/jira/browse/HDFS-11242
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0-alpha1
Reporter: Reid Chan
Priority: Minor


The network topology and the DNS-to-switch mapping are initialized when the
namenode starts.
If an admin wants to change the topology because new datanodes have been
added, he has to stop and restart the namenode(s); otherwise the newly added
datanodes are squeezed under /default-rack.
This is a low-frequency operation, but it should be handled properly, so dfs
admin should take on this responsibility.






[jira] [Created] (HDFS-11243) Add a protocol command from NN to DN for dropping the SPS work and queues

2016-12-13 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-11243:
--

 Summary: Add a protocol command from NN to DN for dropping the SPS 
work and queues 
 Key: HDFS-11243
 URL: https://issues.apache.org/jira/browse/HDFS-11243
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


This JIRA is for adding a protocol command from the Namenode to the Datanodes
for dropping SPS work, and also for dropping the in-progress queues.

The use case: when an admin deactivates SPS at the NN, the NN should
internally issue a command to the DNs to drop their in-progress queues as
well. This command can be packed into the heartbeat.
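
A rough, hypothetical sketch of the idea (the class and method names below are
illustrative only, not actual Hadoop code): the NN attaches a drop command to
the heartbeat response when SPS is deactivated, and the DN clears its queues
when it sees that command.

import java.util.Queue;

// Hypothetical illustration, not Hadoop code: piggybacking a "drop SPS work"
// command on the heartbeat response.
public class DropSpsWorkSketch {

    // Marker command the NN would attach to a heartbeat response.
    static final class DropSpsWorkCommand { }

    // NN side: commands to return with the heartbeat response.
    static Object[] buildHeartbeatCommands(boolean spsDeactivated) {
        return spsDeactivated
                ? new Object[] { new DropSpsWorkCommand() }
                : new Object[0];
    }

    // DN side: handle the commands that came back with the heartbeat.
    static void handleCommands(Object[] commands, Queue<String> inProgressSpsWork) {
        for (Object cmd : commands) {
            if (cmd instanceof DropSpsWorkCommand) {
                inProgressSpsWork.clear();   // drop all pending SPS work items
            }
        }
    }
}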






[jira] [Created] (HDFS-11244) Limit the number of satisfyStoragePolicy items at Namenode

2016-12-13 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-11244:
--

 Summary: Limit the number of satisfyStoragePolicy items at Namenode
 Key: HDFS-11244
 URL: https://issues.apache.org/jira/browse/HDFS-11244
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


This JIRA is to provide a provision to limit the number of pending
storagePolicySatisfy items. If we don't limit this number, and users keep
calling satisfyStoragePolicy while the DNs are slow to process the work, the
NN-side queues can grow without bound. So it may be good to have an option to
limit incoming satisfyStoragePolicy requests. Maybe a default of 10K?
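
A minimal sketch of the kind of admission check this could mean; the class
name, the config key mentioned in the comment, and the 10K figure are
illustrative assumptions, not Hadoop code:

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of limiting outstanding satisfyStoragePolicy requests.
public class SpsQueueLimiter {
    // e.g. a config key like dfs.namenode.sps.max.outstanding.paths
    // (hypothetical), defaulting to 10000.
    private final int maxOutstanding;
    private final AtomicInteger outstanding = new AtomicInteger();

    public SpsQueueLimiter(int maxOutstanding) {
        this.maxOutstanding = maxOutstanding;
    }

    // Called when a client invokes satisfyStoragePolicy on a path.
    public void admit(String path) {
        if (outstanding.incrementAndGet() > maxOutstanding) {
            outstanding.decrementAndGet();
            throw new IllegalStateException(
                "Too many pending satisfyStoragePolicy requests, retry later: " + path);
        }
        // ... enqueue the path for the storage policy satisfier to process ...
    }

    // Called when the SPS finishes (or drops) the work for a path.
    public void complete() {
        outstanding.decrementAndGet();
    }
}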


