[jira] [Created] (HDFS-12108) Hdfs tail -f command keeps printing the last line in loop when more data is not available

2017-07-10 Thread Nitiraj Singh Rathore (JIRA)
Nitiraj Singh Rathore created HDFS-12108:


 Summary: Hdfs tail -f command keeps printing the last line in loop 
when more data is not available
 Key: HDFS-12108
 URL: https://issues.apache.org/jira/browse/HDFS-12108
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.3
Reporter: Nitiraj Singh Rathore


I tried a simple tail -f, expecting new data to keep appearing on the console, but found that in the absence of new data the last line of the file keeps printing again and again. See the output below. For comparison, I have also pasted the output of the cat command for the same file.

{noformat}
[hdfs@c6401 lib]$ hdfs dfs -tail -f /ats/active/application_1499594381431_0001/appattempt_1499594381431_0001_01/domainlog-appattempt_1499594381431_0001_01
{"id":"Tez_ATS_application_1499594381431_0001","readers":"*","writers":"hive"}
{"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
{"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
{"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
{"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}

[hdfs@c6401 lib]$ hdfs dfs -cat /ats/active/application_1499594381431_0001/appattempt_1499594381431_0001_01/domainlog-appattempt_1499594381431_0001_01
{"id":"Tez_ATS_application_1499594381431_0001","readers":"*","writers":"hive"}
{"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
[hdfs@c6401 lib]$
{noformat}
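The re-printed last line suggests the polling loop re-derives its read offset from the current file length on every pass instead of advancing past what it has already emitted. The sketch below is a hypothetical model of that behavior, not the actual DFS shell code; the names and the 10-byte tail window are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

public class TailSketch {
    // Model of a positional read: returns everything from 'offset' to EOF.
    static String readFrom(String file, int offset) {
        return file.substring(Math.min(offset, file.length()));
    }

    // Buggy poll: the offset is recomputed as (length - window) on every
    // iteration, so when no new data arrives the same trailing bytes are
    // re-read and re-printed on every poll.
    static List<String> buggyPolls(String file, int polls) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < polls; i++) {
            int offset = Math.max(0, file.length() - 10); // re-derived each time
            String chunk = readFrom(file, offset);
            if (!chunk.isEmpty()) out.add(chunk);
        }
        return out;
    }

    // Fixed poll: the offset advances past consumed bytes, so with no new
    // data nothing further is emitted after the first pass.
    static List<String> fixedPolls(String file, int polls) {
        List<String> out = new ArrayList<>();
        int offset = 0;
        for (int i = 0; i < polls; i++) {
            String chunk = readFrom(file, offset);
            if (!chunk.isEmpty()) out.add(chunk);
            offset += chunk.length();
        }
        return out;
    }
}
```

With no new data, the first loop emits the trailing window on every poll (the symptom reported above), while the offset-advancing loop emits each byte exactly once.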



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-10 Thread Luigi Di Fraia (JIRA)
Luigi Di Fraia created HDFS-12109:
-

 Summary: "fs" java.net.UnknownHostException when HA NameNode is 
used
 Key: HDFS-12109
 URL: https://issues.apache.org/jira/browse/HDFS-12109
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 2.8.0
 Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[hadoop@namenode01 ~]$ uname -a
Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Reporter: Luigi Di Fraia


After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if the properties are passed explicitly on the command line, as below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as follows:

<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I am missing something obvious here.
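For reference, a client resolving an HA nameservice URI such as hdfs://saccluster also needs the default filesystem set in core-site.xml; a minimal sketch, assuming the nameservice name "saccluster" from above:

```xml
<!-- core-site.xml (sketch): the URI authority must match dfs.nameservices -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://saccluster</value>
</property>
```

Note that the per-nameservice client properties, dfs.client.failover.proxy.provider.<nameservice> in particular, are keyed by this same nameservice name.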






[jira] [Created] (HDFS-12110) libhdfs++: Rebase 8707 branch onto an up to date version of trunk

2017-07-10 Thread James Clampffer (JIRA)
James Clampffer created HDFS-12110:
--

 Summary: libhdfs++: Rebase 8707 branch onto an up to date version 
of trunk
 Key: HDFS-12110
 URL: https://issues.apache.org/jira/browse/HDFS-12110
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: James Clampffer
Assignee: James Clampffer


It's been far too long since this was done, and it's time to start knocking down blockers for merging into trunk. We can most likely just copy the libhdfs++ directory into a newer version of master. Tracking this in a JIRA since it's likely to cause conflicts when pulling the updated branch for the first time.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-07-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/460/

[Jul 9, 2017 10:02:31 AM] (danieltempleton) YARN-6752. Display reserved 
resources in web UI per application
[Jul 9, 2017 10:27:32 AM] (templedf) YARN-6746. 
SchedulerUtils.checkResourceRequestMatchingNodePartition() is
[Jul 9, 2017 10:34:35 AM] (templedf) YARN-6410. FSContext.scheduler should be 
final (Contributed by Yeliang
[Jul 9, 2017 11:56:09 AM] (naganarasimha_gr) YARN-6428. Queue AM limit is not 
honored in CS always. Contributed by
[Jul 9, 2017 3:58:24 PM] (naganarasimha_gr) YARN-6770. A small mistake in the 
example of TimelineClient. Contributed
[Jul 9, 2017 11:09:12 PM] (yufei) YARN-6764. Simplify the logic in 
FairScheduler#attemptScheduling.




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs-client 
   Possible exposure of partially initialized object in 
org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At 
DFSClient.java:object in 
org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At 
DFSClient.java:[line 2888] 
   org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
SlowDiskReports.java:keySet iterator instead of entrySet iterator At 
SlowDiskReports.java:[line 105] 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to 
return value of called method Dereferenced at 
JournalNode.java:org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus()
 due to return value of called method Dereferenced at JournalNode.java:[line 
302] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String)
 unconditionally sets the field clusterId At HdfsServerConstants.java:clusterId 
At HdfsServerConstants.java:[line 193] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int)
 unconditionally sets the field force At HdfsServerConstants.java:force At 
HdfsServerConstants.java:[line 217] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean)
 unconditionally sets the field isForceFormat At 
HdfsServerConstants.java:isForceFormat At HdfsServerConstants.java:[line 229] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean)
 unconditionally sets the field isInteractiveFormat At 
HdfsServerConstants.java:isInteractiveFormat At HdfsServerConstants.java:[line 
237] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, 
int, HardLink, boolean, File, List) due to return value of called method 
Dereferenced at 
DataStorage.java:org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File,
 File, int, HardLink, boolean, File, List) due to return value of called method 
Dereferenced at DataStorage.java:[line 1339] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String,
 long) due to return value of called method Dereferenced at 
NNStorageRetentionManager.java:org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String,
 long) due to return value of called method Dereferenced at 
NNStorageRetentionManager.java:[line 258] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, 
BasicFileAttributes) due to return value of called method Dereferenced at 
NNUpgradeUtil.java:org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path,
 BasicFileAttributes) due to return value of called method Dereferenced at 
NNUpgradeUtil.java:[line 133] 
   Useless condition:argv.length >= 1 at this point At DFSAdmin.java:[line 
2085] 
   Useless condition:numBlocks == -1 at this point At 
ImageLoaderCurrent.java:[line 727] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 642] 
   
org.apache.hadoop.yarn.server.

[jira] [Created] (HDFS-12111) libhdfs++: Expose HA and Kerberos options for C++ minidfscluster bindings

2017-07-10 Thread James Clampffer (JIRA)
James Clampffer created HDFS-12111:
--

 Summary: libhdfs++: Expose HA and Kerberos options for C++ 
minidfscluster bindings
 Key: HDFS-12111
 URL: https://issues.apache.org/jira/browse/HDFS-12111
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: James Clampffer
Assignee: James Clampffer


Provide an easy way to instantiate the hdfs::MiniCluster object with HA and/or 
Kerberos enabled.  The majority of the existing CI tests should be able to run 
in those environments and a few HA and Kerberos smoke tests can be added as 
part of this.






[jira] [Created] (HDFS-12112) TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE

2017-07-10 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12112:
--

 Summary: TestBlockManager#testBlockManagerMachinesArray sometimes 
fails with NPE
 Key: HDFS-12112
 URL: https://issues.apache.org/jira/browse/HDFS-12112
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-beta1
 Environment: CDH5.12.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


Found the following error:
{quote}
java.lang.NullPointerException: null
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockManagerMachinesArray(TestBlockManager.java:1202)
{quote}
The NPE suggests corruptStorageDataNode in the following code snippet could be 
null.
{code}
for(int i=0; i

[jira] [Created] (HDFS-12113) `hadoop fs -setrep` requires a huge amount of memory on the client side

2017-07-10 Thread Ruslan Dautkhanov (JIRA)
Ruslan Dautkhanov created HDFS-12113:


 Summary: `hadoop fs -setrep` requires a huge amount of memory on the 
client side
 Key: HDFS-12113
 URL: https://issues.apache.org/jira/browse/HDFS-12113
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.5, 2.6.0
 Environment: Java 7
Reporter: Ruslan Dautkhanov


{code}
$ hadoop fs -setrep -w 3 /
{code}

was failing with 
{noformat}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2367)
at 
java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at 
java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuilder.append(StringBuilder.java:132)
at org.apache.hadoop.fs.shell.PathData.getStringForChildPath(PathData.java:305)
at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:272)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at 
org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
{noformat}

It succeeded once the hadoop fs CLI command's Java heap was allowed to grow to 5 GB:
{code}
HADOOP_HEAPSIZE=5000 hadoop fs -setrep -w 3 /
{code}

Note that this setrep change was applied to the whole HDFS filesystem.

So it looks like the amount of memory used by the `hadoop fs -setrep` command depends on the total number of files in HDFS. This is not a huge HDFS filesystem; I would call it "small" by current standards.
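One way to bound the client-side memory growth, sketched below as an illustration rather than anything from this JIRA (it assumes top-level paths contain no spaces): apply -setrep per top-level directory so each invocation only recurses over one subtree.

```shell
# Workaround sketch: run -setrep per top-level directory instead of on "/",
# so each client invocation's recursive listing (and heap use) is bounded
# to one subtree. Assumes paths contain no whitespace.
setrep_per_dir() {
  hdfs dfs -ls / | awk 'NR > 1 { print $NF }' | while read -r dir; do
    hdfs dfs -setrep -w 3 "$dir"
  done
}
```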






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-07-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/

[Jul 9, 2017 3:58:24 PM] (naganarasimha_gr) YARN-6770. A small mistake in the 
example of TimelineClient. Contributed
[Jul 9, 2017 11:09:12 PM] (yufei) YARN-6764. Simplify the logic in 
FairScheduler#attemptScheduling.
[Jul 10, 2017 10:53:13 AM] (stevel) HADOOP-14634. Remove jline from main Hadoop 
pom.xml. Contributed by Ray




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration 
   hadoop.hdfs.server.namenode.TestNamenodeCapacityReport 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestLeaderElectorService 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   
org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/artifact/out/patch-mvninstall-root.txt
  [620K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [1.2M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [68K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [152K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/371/artifact/out/patch-unit-hadoop

[jira] [Created] (HDFS-12114) Incorrect property name to indicate SSL is enabled for HttpFS

2017-07-10 Thread John Zhuge (JIRA)
John Zhuge created HDFS-12114:
-

 Summary: Incorrect property name to indicate SSL is enabled for 
HttpFS
 Key: HDFS-12114
 URL: https://issues.apache.org/jira/browse/HDFS-12114
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Affects Versions: 3.0.0-alpha4
Reporter: John Zhuge
Assignee: John Zhuge


The patch for HDFS-10860 used 2 different property names to indicate SSL is 
enabled for HttpFS: {{hadoop.httpfs.ssl.enabled}} and {{httpfs.ssl.enabled}}. 
The correct one is {{httpfs.ssl.enabled}}.
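For reference, a minimal httpfs-site.xml fragment using the property name the issue identifies as correct (the value shown is illustrative):

```xml
<property>
  <name>httpfs.ssl.enabled</name>
  <value>true</value>
</property>
```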






[jira] [Created] (HDFS-12115) Ozone: SCM: Add queryNode RPC Call

2017-07-10 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12115:
---

 Summary: Ozone: SCM: Add queryNode RPC Call
 Key: HDFS-12115
 URL: https://issues.apache.org/jira/browse/HDFS-12115
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-7240


Add a queryNode RPC call to the storage container location protocol. This allows 
applications like the SCM CLI to get the list of nodes in various states, such 
as Healthy, Live, or Dead.







[jira] [Created] (HDFS-12116) BlockReportTestBase#blockReport_08 and #blockReport_09 intermittently fail

2017-07-10 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-12116:


 Summary: BlockReportTestBase#blockReport_08 and #blockReport_09 
intermittently fail
 Key: HDFS-12116
 URL: https://issues.apache.org/jira/browse/HDFS-12116
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.22.0
Reporter: Xiao Chen
Assignee: Xiao Chen


This seems to be long-standing, but the failure rate (~10%) is slightly higher 
in dist-test runs using CDH.
In both the _08 and _09 tests:
# an attempt is made to put a replica into the {{TEMPORARY}} state, via 
{{waitForTempReplica}}.
# once that returns, the test goes on to verify that block reports show the 
correct pending replication blocks.

But there's a race condition: if the replica is replicated between steps #1 and 
#2, {{getPendingReplicationBlocks}} could return 0 or 1, depending on how many 
replicas have been replicated, hence failing the test.
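The race can be sketched abstractly as follows (illustrative Java, not the actual test code; the counts are hypothetical): if timing legitimately allows the pending counter to be observed at more than one value, an exact-count assertion is flaky, while an assertion covering every value the race can produce is not.

```java
import java.util.concurrent.ThreadLocalRandom;

public class RaceSketch {
    // Hypothetical stand-in for the pending-replication counter: by the time
    // the test reads it, the TEMPORARY replica may (2) or may not (1) still
    // be pending, depending on how the race resolved.
    static int pendingBlocks() {
        return ThreadLocalRandom.current().nextBoolean() ? 2 : 1;
    }

    // Exact-count check: fails whenever the replica finished replicating
    // between setup and the assertion.
    static boolean exactCheck() {
        return pendingBlocks() == 2;
    }

    // Range check: accepts any count the race can legitimately produce.
    static boolean rangeCheck() {
        int p = pendingBlocks();
        return p >= 1 && p <= 2;
    }
}
```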


