Sean Chow created HDFS-14476:
Summary: lock too long when fix inconsistent blocks between disk
and in-memory
Key: HDFS-14476
URL: https://issues.apache.org/jira/browse/HDFS-14476
Project: Hadoop HDFS
Hanisha Koneru created HDDS-1496:
Summary: readChunkFromContainer() should only read the required
part of chunk file
Key: HDDS-1496
URL: https://issues.apache.org/jira/browse/HDDS-1496
Project: Hadoop
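The HDDS-1496 summary suggests reading only the needed byte range of a chunk file rather than the whole file. As a hedged illustration of that general idea (not the actual Ozone implementation — `readRange` and its signature are hypothetical), a positioned read with `FileChannel` avoids loading the full chunk:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical helper: read only [offset, offset + len) of a chunk file
// instead of reading the entire file into memory.
public class RangeRead {
    public static byte[] readRange(Path chunkFile, long offset, int len) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(len);
        try (FileChannel ch = FileChannel.open(chunkFile, StandardOpenOption.READ)) {
            int read = 0;
            while (read < len) {
                // Positioned read: does not move the channel's own position.
                int n = ch.read(buf, offset + read);
                if (n < 0) break; // hit EOF before len bytes were available
                read += n;
            }
        }
        byte[] out = new byte[buf.position()];
        buf.flip();
        buf.get(out);
        return out;
    }
}
```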
Aravindan Vijayan created HDDS-1494:
Summary: Improve logging in client to troubleshoot container not
found errors
Key: HDDS-1494
URL: https://issues.apache.org/jira/browse/HDDS-1494
Project: Hadoo
CR Hota created HDFS-14475:
Summary: RBF: Expose router security enabled status on the UI
Key: HDFS-14475
URL: https://issues.apache.org/jira/browse/HDFS-14475
Project: Hadoop HDFS
Issue Type: Sub-
Aravindan Vijayan created HDDS-1493:
Summary: Download and Import Container replicator fails.
Key: HDDS-1493
URL: https://issues.apache.org/jira/browse/HDDS-1493
Project: Hadoop Distributed Data Sto
Aravindan Vijayan created HDDS-1492:
Summary: Generated chunk size name too long. (Causes
Runtimeexception)
Key: HDDS-1492
URL: https://issues.apache.org/jira/browse/HDDS-1492
Project: Hadoop Distr
It seems as though eBugs is running a static code analyzer across all of
Hadoop and creating a new JIRA for every issue it finds (15 JIRAs in the
past 3 hours). While this could potentially be useful, I don't think it's a
good idea at all to file a new JIRA for every single issue that is found.
It
eBugs created HDFS-14474:
Summary: PeerCache.close() throws a RuntimeException when it is
interrupted
Key: HDFS-14474
URL: https://issues.apache.org/jira/browse/HDFS-14474
Project: Hadoop HDFS
Issue
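HDFS-14474's title describes `PeerCache.close()` wrapping an interrupt in a `RuntimeException`. As a hedged sketch of the usual alternative (this class and its fields are illustrative, not the actual `PeerCache` code), a `close()` can restore the interrupt status instead of throwing:

```java
// Hypothetical sketch: a close() that preserves the caller's interrupt
// status rather than wrapping InterruptedException in a RuntimeException.
public class InterruptSafeClose {
    private final Thread worker;

    public InterruptSafeClose(Runnable task) {
        this.worker = new Thread(task);
        this.worker.start();
    }

    public void close() {
        worker.interrupt();
        try {
            worker.join(1000); // bounded wait for the worker thread to exit
        } catch (InterruptedException e) {
            // Re-set the flag so callers can still observe the interrupt;
            // throwing an unchecked exception here would mask it.
            Thread.currentThread().interrupt();
        }
    }
}
```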
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1128/
[May 5, 2019 8:08:09 AM] (surendralilhore) HDFS-14438. Fix typo in
OfflineEditsVisitorFactory. Contributed by
[May 5, 2019 11:03:52 AM] (surendralilhore) HDFS-14372. NPE while DN is
shutting down. Contrib
eBugs in Cloud Systems created HDFS-14473:
Summary: BlockReceiver.receiveBlock() throws an IOException when
interrupted
Key: HDFS-14473
URL: https://issues.apache.org/jira/browse/HDFS-14473
P
eBugs in Cloud Systems created HDFS-14472:
Summary: FSImageHandler.getPath() throws a FileNotFoundException
when the path is malformed
Key: HDFS-14472
URL: https://issues.apache.org/jira/browse/HDFS-14472
eBugs in Cloud Systems created HDFS-14471:
Summary: FSDirectory.resolveDotInodesPath() throws
FileNotFoundException when the path is malformed
Key: HDFS-14471
URL: https://issues.apache.org/jira/browse/HD
eBugs in Cloud Systems created HDFS-14470:
Summary: DataNode.startDataNode() throws a DiskErrorException when
the configuration has wrong values
Key: HDFS-14470
URL: https://issues.apache.org/jira/browse/
eBugs in Cloud Systems created HDFS-14469:
Summary: FsDatasetImpl() throws a DiskErrorException when the
configuration has wrong values
Key: HDFS-14469
URL: https://issues.apache.org/jira/browse/HDFS-1446
eBugs in Cloud Systems created HDFS-14468:
Summary: StorageLocationChecker methods throw DiskErrorExceptions
when the configuration has wrong values
Key: HDFS-14468
URL: https://issues.apache.org/jira/bro
eBugs in Cloud Systems created HDFS-14467:
Summary: DatasetVolumeChecker() throws DiskErrorException when the
configuration has wrong values
Key: HDFS-14467
URL: https://issues.apache.org/jira/browse/HDFS
For more details, see
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/313/
No changes
-1 overall
The following subsystems voted -1:
findbugs hadolint pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac
Thanks for the answers, Eric Yang. I think we have a similar view of how
the releases work, and what you wrote is exactly the reason why I prefer
the current method (docker image creation from a separate branch) over
the proposed one (creating images from Maven).
1. Not all the branches can b