Hanisha Koneru created HDDS-250:
---
Summary: Cleanup ContainerData
Key: HDDS-250
URL: https://issues.apache.org/jira/browse/HDDS-250
Project: Hadoop Distributed Data Store
Issue Type: Improvement
Bharat Viswanadham created HDDS-249:
---
Summary: Fail if multiple SCM IDs on the DataNode and add SCM ID check after version request
Key: HDDS-249
URL: https://issues.apache.org/jira/browse/HDDS-249
Project: Hadoop Distributed Data Store
Hanisha Koneru created HDDS-248:
---
Summary: Refactor DatanodeContainerProtocol.proto
Key: HDDS-248
URL: https://issues.apache.org/jira/browse/HDDS-248
Project: Hadoop Distributed Data Store
Issue Type:
Shashikant Banerjee created HDDS-247:
Summary: Handle CLOSED_CONTAINER_IO exception in ozoneClient
Key: HDDS-247
URL: https://issues.apache.org/jira/browse/HDDS-247
Project: Hadoop Distributed Data Store
Welcome Jonathan.
http://hadoop.apache.org/releases.html states:
"Hadoop is released as source code tarballs with corresponding binary tarballs for convenience."
and Andrew Wang said "The binary artifacts (including JARs) are technically just convenience artifacts", and it seems not an uncommon p
Shashikant Banerjee created HDDS-246:
Summary: Ratis leader should throw BlockNotCommittedException for uncommitted blocks to Ozone Client
Key: HDDS-246
URL: https://issues.apache.org/jira/browse/HDDS-246
Elek, Marton created HDDS-245:
---
Summary: Handle ContainerReports in the SCM
Key: HDDS-245
URL: https://issues.apache.org/jira/browse/HDDS-245
Project: Hadoop Distributed Data Store
Issue Type: Improvement
Shashikant Banerjee created HDDS-244:
Summary: PutKey should get executed only if all WriteChunk requests for the same block complete in Ratis
Key: HDDS-244
URL: https://issues.apache.org/jira/browse/HDDS-244