HDFS 9666

2017-01-05 Thread Alberto Ramón
Hi,

Has this patch, HDFS-9666, been lost by mistake?

A comment says "2.7.3 is under release process, changing target-version to
2.7.4," but nobody changed it.


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-01-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/

[Jan 4, 2017 2:46:25 PM] (jlowe) HADOOP-13578. Add Codec for ZStandard 
Compression. Contributed by churro
[Jan 5, 2017 3:17:11 AM] (wangda) YARN-4899. Queue metrics of SLS capacity 
scheduler only activated after
[Jan 5, 2017 5:01:23 AM] (wheat9) HDFS-11280. Allow WebHDFS to reuse HTTP 
connections to NN. Contributed




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.mapreduce.TestMRJobClient 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/diff-compile-javac-root.txt
  [164K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/diff-patch-shellcheck.txt
  [28K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [92K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/277/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (HDFS-11294) libhdfs++: Segfault in HA failover if DNS lookup for both Namenodes fails

2017-01-05 Thread James Clampffer (JIRA)
James Clampffer created HDFS-11294:
--

 Summary: libhdfs++: Segfault in HA failover if DNS lookup for both 
Namenodes fails
 Key: HDFS-11294
 URL: https://issues.apache.org/jira/browse/HDFS-11294
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


Hit while doing more manual testing on HDFS-11028.

The HANamenodeTracker takes an asio endpoint to figure out which endpoint on
the other node to try next during a failover. This is done by passing the
element at index 0 of endpoints (a std::vector).

When DNS fails, the endpoints vector for that node is empty, so endpoints[0]
yields an invalid iterator that is effectively a null pointer; dereferencing
it causes the segfault.
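
The crash is the classic pattern of indexing into an empty container. libhdfs++
is C++, where operator[] on an empty std::vector is undefined behavior; below is
a hedged Java analogue of the guard the failover path needs (hypothetical names,
shown only for the shape of the fix, not libhdfs++ code):

{code}
import java.net.InetSocketAddress;
import java.util.List;

public class FailoverGuardSketch {
  // Hypothetical stand-in for picking the next NameNode endpoint during HA
  // failover. If DNS resolution failed, the endpoint list is empty, and
  // blindly taking element 0 is the bug described above (in C++ that
  // dereferences garbage; in Java it would throw).
  static InetSocketAddress nextEndpoint(List<InetSocketAddress> endpoints) {
    if (endpoints.isEmpty()) {
      return null; // caller must treat this as "no usable endpoint"
    }
    return endpoints.get(0);
  }
}
{code}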






[jira] [Created] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()

2017-01-05 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-11295:
-

 Summary: Check storage remaining instead of node remaining in 
BlockPlacementPolicyDefault.chooseReplicaToDelete()
 Key: HDFS-11295
 URL: https://issues.apache.org/jira/browse/HDFS-11295
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.1
Reporter: Xiao Liang


Currently, the logic in BlockPlacementPolicyDefault.chooseReplicaToDelete() for
choosing the replica to delete is to pick the node with the least free space
(node.getRemaining()), provided all heartbeats are within the tolerable
heartbeat interval.

However, a node may have multiple storages, and node.getRemaining() is the sum
of their remaining space. If free space is low on the storage holding the block
to be deleted, the node's total free space could still be high because of its
other storages, so the storage chosen may not be the one with the least free
space.

Using storage.getRemaining() to pick the storage with the least free space when
choosing the replica to delete may therefore be a better way to balance storage
usage.
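
To make the contrast concrete, here is a minimal, hypothetical Java sketch (the
types below are simplified stand-ins, not the actual BlockPlacementPolicyDefault
code or the real DataNode/storage classes):

{code}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for DataNode and storage metadata.
class Storage {
  final String id;
  final long remaining; // free space of this single storage volume
  Storage(String id, long remaining) { this.id = id; this.remaining = remaining; }
}

class Node {
  final List<Storage> storages = new ArrayList<>();
  // The node-level figure is the SUM over all storages, which can hide a
  // nearly-full volume behind emptier volumes on the same node.
  long getRemaining() {
    return storages.stream().mapToLong(s -> s.remaining).sum();
  }
}

public class ChooseReplicaSketch {
  // Proposed criterion: compare individual storages, so the volume that
  // actually holds the replica is what gets measured.
  static Storage leastFreeStorage(List<Node> candidates) {
    Storage worst = null;
    for (Node n : candidates) {
      for (Storage s : n.storages) {
        if (worst == null || s.remaining < worst.remaining) {
          worst = s;
        }
      }
    }
    return worst;
  }
}
{code}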






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-01-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/

[Jan 4, 2017 2:46:25 PM] (jlowe) HADOOP-13578. Add Codec for ZStandard 
Compression. Contributed by churro
[Jan 5, 2017 3:17:11 AM] (wangda) YARN-4899. Queue metrics of SLS capacity 
scheduler only activated after
[Jan 5, 2017 5:01:23 AM] (wheat9) HDFS-11280. Allow WebHDFS to reuse HTTP 
connections to NN. Contributed




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.ha.TestHASafeMode 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapreduce.TestMRJobClient 
   hadoop.mapred.TestMRTimelineEventHandling 

Timed out junit tests :

   
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-compile-root.txt
  [120K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-compile-root.txt
  [120K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-compile-root.txt
  [120K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [200K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [68K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/208/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [92K]
   
https://builds.apache.org/job/h

[jira] [Created] (HDFS-11296) Maintenance state expiry should be an epoch time and not JVM monotonic

2017-01-05 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11296:
-

 Summary: Maintenance state expiry should be an epoch time and not 
JVM monotonic
 Key: HDFS-11296
 URL: https://issues.apache.org/jira/browse/HDFS-11296
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy



Currently it is possible to configure an expiry time in milliseconds for a 
DataNode in maintenance state. As per the design, the expiry attribute is an 
absolute time beyond which the NameNode starts to stop the ongoing maintenance 
operation for that DataNode. Internally, this expiry time is read and checked 
against {{Time.monotonicNow()}}, making the expiry relative to the JVM's 
runtime, which is very difficult for any external user to configure. The goal 
is to make the expiry time an absolute epoch time, so that it is easy for 
external users to configure.

{noformat}
{
"hostName": ,
"port": ,
"adminState": "IN_MAINTENANCE",
"maintenanceExpireTimeInMS": 
}
{noformat}

DatanodeInfo.java
{noformat}
  public static boolean maintenanceNotExpired(long maintenanceExpireTimeInMS) {
    return Time.monotonicNow() < maintenanceExpireTimeInMS;
  }
{noformat}
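
For contrast, a hedged sketch of the proposed direction: treat the configured
value as an epoch timestamp and compare it against wall-clock time. Hadoop's
Time.now() wraps System.currentTimeMillis(); the plain JDK calls are used below
to keep the example self-contained, and this is not the committed fix:

{code}
public class MaintenanceExpirySketch {
  // Current behavior per this issue: an absolute-looking config value is
  // compared against the JVM-relative monotonic clock (stand-in for
  // Time.monotonicNow()), so the meaning of "expiry" depends on when the
  // NameNode JVM started.
  static boolean notExpiredMonotonic(long expireTimeInMS) {
    return System.nanoTime() / 1_000_000 < expireTimeInMS;
  }

  // Proposed behavior: treat maintenanceExpireTimeInMS as epoch millis,
  // which an external user can actually compute (e.g. "now + 4 hours").
  static boolean notExpiredEpoch(long expireTimeInMS) {
    return System.currentTimeMillis() < expireTimeInMS;
  }

  public static void main(String[] args) {
    long fourHoursFromNow = System.currentTimeMillis() + 4L * 60 * 60 * 1000;
    System.out.println(notExpiredEpoch(fourHoursFromNow)); // true
  }
}
{code}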






[jira] [Reopened] (HDFS-9896) WebHDFS API may return invalid JSON

2017-01-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HDFS-9896:
---

> WebHDFS API may return invalid JSON
> ---
>
> Key: HDFS-9896
> URL: https://issues.apache.org/jira/browse/HDFS-9896
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0, 2.6.5
> Environment: FreeBSD 10.2
>Reporter: Alexander Shorin
>
> {code}
> >>> import requests
> >>> resp = 
> >>> requests.get('http://server:5/webhdfs/v1/tmp/test/\x00/not_found.txt?op=GETFILESTATUS')
> >>> resp.content
> '{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  does not exist: /tmp/test/\x00/not_found.txt"}}'
> >>> resp.json()
> Traceback (most recent call last):
>   File "", line 1, in 
>   File 
> "/home/sandbox/project/venv/lib/python2.7/site-packages/requests/models.py", 
> line 800, in json
> self.content.decode(encoding), **kwargs
>   File "/usr/local/lib/python2.7/json/__init__.py", line 338, in loads
> return _default_decoder.decode(s)
>   File "/usr/local/lib/python2.7/json/decoder.py", line 366, in decode
> obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>   File "/usr/local/lib/python2.7/json/decoder.py", line 382, in raw_decode
> obj, end = self.scan_once(s, idx)
> ValueError: Invalid control character at: line 1 column 147 (char 146)
> {code}
> The null byte {{\x00}} should be encoded according to JSON rules as 
> {{\u0000}}. It seems like WebHDFS returns the path back as-is without any 
> processing, breaking the content type.






[jira] [Resolved] (HDFS-9896) WebHDFS API may return invalid JSON

2017-01-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-9896.
---
Resolution: Duplicate

> WebHDFS API may return invalid JSON
> ---
>
> Key: HDFS-9896
> URL: https://issues.apache.org/jira/browse/HDFS-9896
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0, 2.6.5
> Environment: FreeBSD 10.2
>Reporter: Alexander Shorin
>
> {code}
> >>> import requests
> >>> resp = 
> >>> requests.get('http://server:5/webhdfs/v1/tmp/test/\x00/not_found.txt?op=GETFILESTATUS')
> >>> resp.content
> '{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  does not exist: /tmp/test/\x00/not_found.txt"}}'
> >>> resp.json()
> Traceback (most recent call last):
>   File "", line 1, in 
>   File 
> "/home/sandbox/project/venv/lib/python2.7/site-packages/requests/models.py", 
> line 800, in json
> self.content.decode(encoding), **kwargs
>   File "/usr/local/lib/python2.7/json/__init__.py", line 338, in loads
> return _default_decoder.decode(s)
>   File "/usr/local/lib/python2.7/json/decoder.py", line 366, in decode
> obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>   File "/usr/local/lib/python2.7/json/decoder.py", line 382, in raw_decode
> obj, end = self.scan_once(s, idx)
> ValueError: Invalid control character at: line 1 column 147 (char 146)
> {code}
> The null byte {{\x00}} should be encoded according to JSON rules as 
> {{\u0000}}. It seems like WebHDFS returns the path back as-is without any 
> processing, breaking the content type.
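
The underlying rule is simple to state: every control character below U+0020 in
a string must be escaped before it is emitted into JSON. A minimal illustrative
Java escaper (shown only to pin down the rule; this is not the WebHDFS code or
its eventual fix):

{code}
public class JsonEscapeSketch {
  // Escape the characters JSON forbids inside string literals: the quote,
  // the backslash, and all control characters below U+0020.
  static String escape(String s) {
    StringBuilder sb = new StringBuilder(s.length());
    for (char c : s.toCharArray()) {
      if (c == '"')       sb.append("\\\"");
      else if (c == '\\') sb.append("\\\\");
      else if (c < 0x20)  sb.append(String.format("\\u%04x", (int) c));
      else                sb.append(c);
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // The null byte from the report becomes \u0000, keeping the JSON valid.
    System.out.println(escape("/tmp/test/\u0000/not_found.txt"));
  }
}
{code}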






Re: HDFS 9666

2017-01-05 Thread Ravi Prakash
Hi Alberto!

Thanks for your email. I've set the Target Version to 2.9.0. I'm sure there
are tons of JIRAs that are missing the correct Target Version. Usually tickets
make progress only when someone feels compelled enough to do the work, and the
Fix Version is set after the patch is committed.

Thanks
Ravi

On Thu, Jan 5, 2017 at 5:46 AM, Alberto Ramón wrote:

> Hi
>
> Has this patch, HDFS-9666, been lost by mistake?
>
> A comment says "2.7.3 is under release process, changing target-version to
> 2.7.4," but nobody changed it.
>


[jira] [Resolved] (HDFS-11202) httpfs.sh will not run when temp dir does not exist

2017-01-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HDFS-11202.
---
   Resolution: Duplicate
Fix Version/s: 3.0.0-alpha2

HDFS-10860 does fix this issue.

> httpfs.sh will not run when temp dir does not exist
> ---
>
> Key: HDFS-11202
> URL: https://issues.apache.org/jira/browse/HDFS-11202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
>
> From {{httpfs-localhost.2016-12-04.log}}:
> {noformat}
> INFO: ERROR: S01: Dir 
> [/Users/jzhuge/hadoop2/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/temp] 
> does not exist
> Dec 04, 2016 7:04:46 PM org.apache.catalina.core.StandardContext listenerStart
> SEVERE: Exception sending context initialized event to listener instance of 
> class org.apache.hadoop.fs.http.server.HttpFSServerWebApp
> java.lang.RuntimeException: org.apache.hadoop.lib.server.ServerException: 
> S01: Dir 
> [/Users/jzhuge/hadoop2/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/temp] 
> does not exist
> at 
> org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized(ServerWebApp.java:161)
> at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
> at 
> org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
> at 
> org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507)
> at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
> at 
> org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
> at 
> org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
> at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
> at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
> at 
> org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
> at 
> org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
> at 
> org.apache.catalina.core.StandardService.start(StandardService.java:525)
> at 
> org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
> at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
> at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
> Caused by: org.apache.hadoop.lib.server.ServerException: S01: Dir 
> [/Users/jzhuge/hadoop2/hadoop-dist/target/hadoop-3.0.0-alpha2-SNAPSHOT/temp] 
> does not exist
> at org.apache.hadoop.lib.server.Server.verifyDir(Server.java:400)
> at org.apache.hadoop.lib.server.Server.init(Server.java:349)
> at 
> org.apache.hadoop.fs.http.server.HttpFSServerWebApp.init(HttpFSServerWebApp.java:100)
> at 
> org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized(ServerWebApp.java:158)
> ... 24 more
> {noformat}
> After creating the temp dir manually, httpfs.sh works.
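
The workaround amounts to making sure the directory exists before the server's
verifyDir() check runs. A hedged Java illustration of that guard (hypothetical
sketch, not the httpfs.sh or Server.java change that was actually committed):

{code}
import java.io.File;

public class TempDirGuardSketch {
  // Create a required directory instead of failing startup when it is absent.
  static void ensureDir(String path) {
    File dir = new File(path);
    if (!dir.exists() && !dir.mkdirs()) {
      throw new RuntimeException(
          "S01: Dir [" + path + "] does not exist and could not be created");
    }
  }
}
{code}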






[jira] [Created] (HDFS-11297) hadoop-7285-power

2017-01-05 Thread xlsong (JIRA)
xlsong created HDFS-11297:
-

 Summary: hadoop-7285-power
 Key: HDFS-11297
 URL: https://issues.apache.org/jira/browse/HDFS-11297
 Project: Hadoop HDFS
  Issue Type: Task
  Components: erasure-coding
Affects Versions: HDFS-7285
 Environment: power
Reporter: xlsong
 Fix For: HDFS-7285


hadoop-7285-power


