+1, I agree with your point, Chris. It depends on the client application and
how it uses the HDFS jars on its classpath.
Since the implementation already supports compatibility (through protobuf), no
extra code changes are required to support a new client with an old server.
I feel it would be good to explicit
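The protobuf mechanism mentioned above can be sketched as follows. This is an illustrative fragment, not the actual HDFS protocol definition; the message and field names are made up.

```protobuf
// Hypothetical RPC message, NOT the real HDFS .proto.
// Protobuf decoders skip fields with unknown tags, so an old server
// compiled before "storagePolicy" existed simply ignores it when a
// newer client sends it -- which is why new client + old server works
// without extra code changes.
message CreateFileRequestProto {
  required string path = 1;
  optional uint32 replication = 2;
  // Added in a later release; old servers skip this unknown tag.
  optional string storagePolicy = 3;
}
```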
I think this kind of compatibility issue could still surface for HDFS,
particularly for custom applications (i.e. something not executed via
"hadoop jar" on a cluster node, where the client classes ought to be
injected into the classpath automatically). Running DistCp between 2
clusters of differe
Tsz Wo Nicholas Sze created HDFS-6129:
-
Summary: When a replica is not found for deletion, do not throw
exception.
Key: HDFS-6129
URL: https://issues.apache.org/jira/browse/HDFS-6129
Project: Hado
It makes sense only for YARN today, where we separated out the clients. HDFS is
still a monolithic jar, so this compatibility issue is kind of invalid there.
+vinod
On Mar 19, 2014, at 1:59 PM, Chris Nauroth wrote:
> I'd like to discuss clarification of part of our compatibility policy.
> Here i
[
https://issues.apache.org/jira/browse/HDFS-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tsz Wo Nicholas Sze resolved HDFS-2282.
---
Resolution: Not A Problem
I believe this is "Not A Problem" anymore. Please feel free
[
https://issues.apache.org/jira/browse/HDFS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tsz Wo Nicholas Sze resolved HDFS-193.
--
Resolution: Cannot Reproduce
Resolving as "Cannot Reproduce". Please feel free to reopen
[
https://issues.apache.org/jira/browse/HDFS-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tsz Wo Nicholas Sze resolved HDFS-2076.
---
Resolution: Cannot Reproduce
Resolving as "Cannot Reproduce". Please feel free to reo
[
https://issues.apache.org/jira/browse/HDFS-39?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tsz Wo Nicholas Sze resolved HDFS-39.
-
Resolution: Not A Problem
Anyway, this issue is "Not A Problem" anymore. Resolving ...
> N
[
https://issues.apache.org/jira/browse/HDFS-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth resolved HDFS-5117.
-
Resolution: Duplicate
I'm resolving this as a duplicate of HDFS-4685. With the release of 2.4.0,
[
https://issues.apache.org/jira/browse/HDFS-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth resolved HDFS-4685.
-
Resolution: Fixed
Target Version/s: 2.4.0 (was: 3.0.0, 2.4.0)
Hadoop Flags: Revi
[
https://issues.apache.org/jira/browse/HDFS-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth resolved HDFS-5604.
-
Resolution: Duplicate
> libHDFS: implement hdfsGetAcls and hdfsSetAcl.
> -
[
https://issues.apache.org/jira/browse/HDFS-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth resolved HDFS-5605.
-
Resolution: Duplicate
> libHDFS: implement hdfsModifyAclEntries, hdfsRemoveAclEntries and
> hdfsR
[
https://issues.apache.org/jira/browse/HDFS-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth resolved HDFS-5606.
-
Resolution: Duplicate
> libHDFS: implement hdfsRemoveDefaultAcl.
> ---
Chris Nauroth created HDFS-6128:
---
Summary: Implement libhdfs bindings for HDFS ACL APIs.
Key: HDFS-6128
URL: https://issues.apache.org/jira/browse/HDFS-6128
Project: Hadoop HDFS
Issue Type: Imp
I'd like to discuss clarification of part of our compatibility policy.
Here is a link to the compatibility documentation for release 2.3.0:
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/Compatibility.html#Wire_compatibility
For convenience, here are the specific lines in
Hi Steve,
Thanks for the response. Since writing the original email, I've
received additional information.
WebHDFS does redirect you to the datanode containing the first block
you are requesting. This can be abused to query data locality
information, but it is inefficient.
Since getFileBloc
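The redirect-based locality probe described above can be sketched in a few lines: a WebHDFS `op=OPEN` request to the namenode returns an HTTP 307 whose Location header points at a datanode holding the first requested block. The hostnames and file path below are hypothetical, for illustration only.

```python
from urllib.parse import urlparse

def datanode_from_redirect(location):
    """Extract the datanode host from a WebHDFS 307 redirect Location.

    A GET against the namenode at .../webhdfs/v1/<path>?op=OPEN is
    answered with a 307 redirect to a datanode that holds the first
    block being read; parsing that Location header is the (inefficient)
    locality probe described above.
    """
    return urlparse(location).hostname

# Hypothetical redirect Location; host and path are made up.
loc = ("http://dn3.example.com:50075/webhdfs/v1/user/alice/data.txt"
       "?op=OPEN&namenoderpcaddress=nn1:8020&offset=0")
print(datanode_from_redirect(loc))  # → dn3.example.com
```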
Arpit Gupta created HDFS-6127:
-
Summary: sLive with webhdfs fails on secure HA cluster with "does
not contain valid host port authority" error
Key: HDFS-6127
URL: https://issues.apache.org/jira/browse/HDFS-6127
[
https://issues.apache.org/jira/browse/HDFS-5957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth resolved HDFS-5957.
-
Resolution: Done
I'm going to go ahead and resolve this. I think we have everything we need.
[~
Mit Desai created HDFS-6126:
---
Summary: TestnameNodeMetrics#testCorruptBlock fails intermittently
Key: HDFS-6126
URL: https://issues.apache.org/jira/browse/HDFS-6126
Project: Hadoop HDFS
Issue Type:
[
https://issues.apache.org/jira/browse/HDFS-5996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangmeng resolved HDFS-5996.
Resolution: Fixed
Release Note: someone has discovered this bug and it has been
resolved
>
1. All the specifics of Hadoop's operations are hidden in the source.
That's a get-out clause of OSS, I know, but sometimes it's the clearest.
2. For webhdfs I suspect it picks a local node with the data; you'd have
to experiment to make sure.
3. If webhdfs is missing features, I'm s
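The experiment suggested in point 2 could start from a helper like this, which checks whether a WebHDFS redirect came back to the machine issuing the request. The hostnames are hypothetical, and comparing short hostnames is a simplification; a robust check would compare resolved IP addresses.

```python
import socket

def is_local_redirect(redirect_host, local_host=None):
    """Return True if a WebHDFS redirect points back at this machine,
    i.e. the namenode chose a datanode co-located with the client.

    Compares short hostnames only, which is a simplification; a robust
    check would resolve both names and compare IP addresses.
    """
    if local_host is None:
        local_host = socket.gethostname()
    return redirect_host.split(".")[0] == local_host.split(".")[0]

# Hypothetical hosts for illustration.
print(is_local_redirect("dn3.example.com", local_host="dn3"))  # True
print(is_local_redirect("dn7.example.com", local_host="dn3"))  # False
```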