Takanobu Asanuma created HDFS-15350:
---
Summary: Set dfs.client.failover.random.order to true as default
Key: HDFS-15350
URL: https://issues.apache.org/jira/browse/HDFS-15350
Project: Hadoop HDFS
Hi,
It seems there are no concerns for now, so I have created HDFS-15350 to address it.
If you have any concerns, please comment on the JIRA.
Thanks,
- Takanobu
On Thu, Apr 30, 2020 at 17:27, Takanobu Asanuma wrote:
>
> Hi,
>
> Currently, the default value of dfs.client.failover.random.order is
> false. If it's true, clie
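For reference, a minimal sketch of enabling this from client code via the
standard Configuration API (the property name is the one under discussion;
the class and usage here are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RandomOrderFailoverExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Try the configured NameNodes in random order instead of the
        // fixed order they are listed in, spreading client load.
        conf.setBoolean("dfs.client.failover.random.order", true);
        try (FileSystem fs = FileSystem.get(conf)) {
          System.out.println(fs.exists(new Path("/")));
        }
      }
    }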
For more details, see
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/682/
No changes
-1 overall
The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml
The following subsystems voted -1 but were configured to be filtered/ignored:
cc c
[ https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mikhail Pryakhin reopened HDFS-1820:
Looks like this is causing HADOOP-17036.
> FTPFileSystem attempts to close the outputstream eve
[ https://issues.apache.org/jira/browse/HDFS-1820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mikhail Pryakhin resolved HDFS-1820.
Resolution: Fixed
A regression introduced in this patch will be fixed as
https://issues.apa
Hadoop devs,
A colleague of mine recently hit a strange issue where the zstd compression
codec crashes.
Caused by: java.lang.InternalError: Error (generic)
        at org.apache.hadoop.io.compress.zstd.ZStandardCompressor.deflateBytesDirect(Native Method)
        at org.apache.hadoop.io.compress.zstd.ZStandardCompre
hemanthboyina created HDFS-15351:
Summary: Blocks Scheduled Count was wrong on Truncate
Key: HDFS-15351
URL: https://issues.apache.org/jira/browse/HDFS-15351
Project: Hadoop HDFS
Issue Type:
For more details, see
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/137/
[May 10, 2020 6:13:30 AM] (Ayush Saxena) HDFS-15250. Setting
`dfs.client.use.datanode.hostname` to true can crash the system because of
unhandled UnresolvedAddressException. Contributed by Ctest.
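For context, a minimal sketch of the defensive pattern such a fix implies
(not the actual HDFS-15250 patch; the class and method names are
hypothetical): resolve the DataNode hostname up front and surface a checked
IOException rather than letting the unchecked UnresolvedAddressException
escape from the connect path.

    import java.io.IOException;
    import java.net.InetSocketAddress;

    public class ResolveDataNodeSketch {
      // Hypothetical helper: fail fast with a checked IOException when a
      // DataNode hostname cannot be resolved, instead of crashing later
      // with an unchecked UnresolvedAddressException.
      static InetSocketAddress resolveDataNode(String host, int port)
          throws IOException {
        InetSocketAddress addr = new InetSocketAddress(host, port);
        if (addr.isUnresolved()) {
          throw new IOException("Cannot resolve DataNode " + host + ":" + port);
        }
        return addr;
      }
    }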
Hi Wei Chiu,
What is the Hadoop version being used?
Check if HADOOP-15822 is there; it had a similar error.
-Ayush
> On 11-May-2020, at 10:11 PM, Wei-Chiu Chuang wrote:
>
> Hadoop devs,
>
> A colleague of mine recently hit a strange issue where zstd compression
> codec crashes.
Thanks for the pointer, it does look similar. However, we are roughly on the
latest of branch-3.1 and this fix is in our branch. I'm pretty sure we have
all the zstd fixes.
I believe the libzstd version used is 1.4.4 but I need to confirm. I
suspected it's a library version issue because we've been u
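One way to rule out a build problem first: `hadoop checknative` prints which
native libraries the client actually loaded. Programmatically, something like
the following (NativeCodeLoader is the real Hadoop class; this assumes a
build recent enough to have buildSupportsZstd):

    import org.apache.hadoop.util.NativeCodeLoader;

    public class ZstdNativeCheck {
      public static void main(String[] args) {
        // Confirm libhadoop itself is loaded and was compiled with zstd
        // support before suspecting the libzstd version.
        System.out.println("libhadoop loaded: "
            + NativeCodeLoader.isNativeCodeLoaded());
        System.out.println("built with zstd: "
            + NativeCodeLoader.buildSupportsZstd());
      }
    }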
If I recall this problem correctly, the root cause is that the default zstd
compression block size is 256 KB, and Hadoop zstd compression will attempt
to use the OS platform's default compression size if it is available. The
recommended output size is slightly larger than the input size to account for
the header.
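That matches zstd's own worst-case bound. A small sketch of the arithmetic,
mirroring the ZSTD_COMPRESSBOUND macro from the zstd sources (the formula is
zstd's, not Hadoop's; the class here is illustrative):

    public class ZstdCompressBound {
      // Worst-case compressed size: input + input/256, plus a small
      // extra margin for inputs under 128 KB (headers dominate there).
      static long compressBound(long srcSize) {
        long smallMargin =
            srcSize < (128 << 10) ? (((128 << 10) - srcSize) >> 11) : 0;
        return srcSize + (srcSize >> 8) + smallMargin;
      }

      public static void main(String[] args) {
        // For the 256 KB default block size mentioned above:
        System.out.println(compressBound(256 << 10)); // 263168, i.e. input + 1 KB
      }
    }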
Simbarashe Dzinamarira created HDFS-15352:
---
Summary: WebHdfsFileSystem does not log the exception that causes
retries
Key: HDFS-15352
URL: https://issues.apache.org/jira/browse/HDFS-15352
Proj
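For illustration of the issue HDFS-15352 describes, a generic retry loop that
does log the triggering exception (not WebHdfsFileSystem's actual retry
logic; all names here are hypothetical):

    import java.io.IOException;

    public class LoggingRetrySketch {
      interface Call<T> {
        T run() throws IOException;
      }

      // Log the exception behind each retry instead of swallowing it.
      // Assumes maxAttempts >= 1.
      static <T> T retry(Call<T> call, int maxAttempts) throws IOException {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
          try {
            return call.run();
          } catch (IOException e) {
            last = e;
            System.err.println("Attempt " + attempt + " failed, will retry: " + e);
          }
        }
        throw last;
      }
    }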