Yiqun Lin created HDFS-10594:
Summary: CacheReplicationMonitor should recursively rescan the
path when the inode of the path is a directory
Key: HDFS-10594
URL: https://issues.apache.org/jira/browse/HDFS-10594
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/85/
[Jul 4, 2016 2:15:04 PM] (aajisaka) HDFS-10589. Javadoc for HAState#HAState and
HAState#setStateInternal
-1 overall
The following subsystems voted -1:
unit
The following subsystems voted -1 but
were configured to be filtered/ignored:
Anatoli Shein created HDFS-10595:
Summary: libhdfs++: Client Name Protobuf Error
Key: HDFS-10595
URL: https://issues.apache.org/jira/browse/HDFS-10595
Project: Hadoop HDFS
Issue Type: Sub-task
We have discussed this in the past. I think the single biggest issue is
that HDFS doesn't understand the schema of the data which is stored in
it. So it may not be aware of what compression scheme would be most
appropriate for the application and data.
While it is true that HDFS doesn't allow ra
Thanks for your email Robert!
IMHO compression has other effects (pegging CPUs, needing more memory). If
you enable compression on all blocks, you can't provide uncompressed
performance (it's arguable whether compression will always be faster /
slower). Regardless, users are free to compress at th
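Since HDFS stores opaque bytes and doesn't know the data's schema, the application is in the best position to pick a codec. A minimal sketch of application-layer compression, using the JDK's GZIP classes as a stand-in for whatever codec the application would actually choose (writing the compressed bytes to an HDFS stream is assumed, not shown):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class AppLayerCompression {

    // Compress raw bytes before handing them to the storage layer.
    // The application picks the codec because it knows its own data.
    static byte[] gzip(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    // Decompress bytes read back from storage.
    static byte[] gunzip(byte[] packed) throws IOException {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(packed))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) > 0) {
                bos.write(buf, 0, n);
            }
            return bos.toByteArray();
        }
    }
}
```

Doing this above HDFS keeps the CPU/memory cost opt-in per workload, rather than paying it on every block cluster-wide.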
I think it makes sense to have an AddBlockEvent. It seems like we could
provide something like the block ID, block pool ID, and genstamp, as
well as the inode ID and path of the file which the block was added to.
Clearly, we cannot provide the length, since we don't know how many
bytes the client
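The fields listed above could be carried in an event shaped roughly like the sketch below. This is hypothetical: the class and accessor names are illustrative, not the committed HDFS inotify API, and the length is deliberately absent because it isn't known when the block is allocated.

```java
// Hypothetical sketch of the proposed AddBlockEvent; names are
// illustrative, not the actual HDFS inotify API.
public class AddBlockEvent {
    private final long blockId;
    private final String blockPoolId;
    private final long genStamp;
    private final long inodeId;
    private final String path;
    // No length field: the byte count isn't known at allocation time.

    public AddBlockEvent(long blockId, String blockPoolId, long genStamp,
                         long inodeId, String path) {
        this.blockId = blockId;
        this.blockPoolId = blockPoolId;
        this.genStamp = genStamp;
        this.inodeId = inodeId;
        this.path = path;
    }

    public long getBlockId() { return blockId; }
    public String getBlockPoolId() { return blockPoolId; }
    public long getGenStamp() { return genStamp; }
    public long getInodeId() { return inodeId; }
    public String getPath() { return path; }
}
```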
Go Go Go! Thanks for all the upgrade work Tsuyoshi!
On Thu, Jun 30, 2016 at 12:03 PM, Tsuyoshi Ozawa wrote:
> Thanks, Andrew.
>
> Based on discussion here, I would like to merge it into *trunk* if
> there are no objections tomorrow.
>
> Thanks,
> - Tsuyoshi
>
> On Wed, Jun 29, 2016 at 12:28 PM, A
Anatoli Shein created HDFS-10596:
Summary: libhdfs++: Implement hdfsFileIsEncrypted
Key: HDFS-10596
URL: https://issues.apache.org/jira/browse/HDFS-10596
Project: Hadoop HDFS
Issue Type: Sub-task
Michael Rose created HDFS-10597:
---
Summary: DFSClient hangs if using hedged reads and all but one
eligible replica is down
Key: HDFS-10597
URL: https://issues.apache.org/jira/browse/HDFS-10597
Project: Hadoop HDFS
Lei (Eddy) Xu created HDFS-10598:
Summary: DiskBalancer does not execute multi-steps plan.
Key: HDFS-10598
URL: https://issues.apache.org/jira/browse/HDFS-10598
Project: Hadoop HDFS
Issue Type:
Anu Engineer created HDFS-10599:
---
Summary: DiskBalancer: Execute CLI via Shell
Key: HDFS-10599
URL: https://issues.apache.org/jira/browse/HDFS-10599
Project: Hadoop HDFS
Issue Type: Sub-task
Lei (Eddy) Xu created HDFS-10600:
Summary: PlanCommand#getThrsholdPercentage should not use
throughput value.
Key: HDFS-10600
URL: https://issues.apache.org/jira/browse/HDFS-10600
Project: Hadoop HDFS
[ https://issues.apache.org/jira/browse/HDFS-10593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yuanbo Liu resolved HDFS-10593.
---
Resolution: Not A Problem
> MAX_DIR_ITEMS should not be hard coded since RPC buff size is configurable