ChenSammi commented on code in PR #8460:
URL: https://github.com/apache/ozone/pull/8460#discussion_r2095101117


##########
hadoop-hdds/docs/content/design/full-volume-handling.md:
##########
@@ -0,0 +1,65 @@
+---
+title: Full Volume Handling
+summary: Immediately trigger Datanode heartbeat on detecting full volume
+date: 2025-05-12
+jira: HDDS-12929
+status: Design 
+author: Siddhant Sangwan, Sumit Agrawal
+---
+
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+## Summary
+On detecting a full Datanode volume during write, immediately trigger a heartbeat containing the latest storage report.
+
+## Problem
+When a Datanode volume is close to full, the SCM may not be immediately aware, because storage reports are only sent
+to it every thirty seconds. This can lead to the SCM allocating multiple blocks to containers on a full DN volume,
+causing performance issues when the writes fail. The proposal will partly solve this problem.
+
+In the future (https://issues.apache.org/jira/browse/HDDS-12151) we plan to fail a write if it's going to exceed the min free space boundary in a volume. To prevent this from happening often, SCM needs to stop allocating blocks to containers on such volumes in the first place.
+
+## Non Goals
+This section describes the complete solution at a high level; however, HDDS-12929 will only add the initial Datanode-side code for triggering a heartbeat on detecting a full volume, plus throttling logic.
+
+Failing the write if it exceeds the min free space boundary is not discussed here.
+
+## Proposed Solution
+
+### What does the Datanode do currently?
+
+In HddsDispatcher, on detecting that the volume being written to is close to full, we add a CloseContainerAction for
+that container. This is sent to the SCM in the next heartbeat and makes the SCM close that container. This reaction
+time is OK for a container that is close to full, but not if the volume is close to full.
+
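+Below is a minimal, illustrative sketch of this existing flow. The class and method names are hypothetical simplifications, not the actual HddsDispatcher or StateContext APIs; the point is that the close action only rides along with the next periodic heartbeat.
+
+```java
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+
+public class DispatcherSketch {
+  /** Hypothetical stand-in for the action asking SCM to close a container. */
+  record CloseContainerAction(long containerId, String reason) {}
+
+  private final Queue<CloseContainerAction> pendingActions =
+      new ConcurrentLinkedQueue<>();
+
+  /** Called on the write path when the target volume is close to full. */
+  void onVolumeNearFull(long containerId) {
+    // The action is only queued here; SCM learns about it up to ~30 seconds
+    // later, when the next scheduled heartbeat is built.
+    pendingActions.add(new CloseContainerAction(containerId, "VOLUME_NEAR_FULL"));
+  }
+
+  /** Invoked by the heartbeat thread on its fixed schedule. */
+  void buildHeartbeat() {
+    CloseContainerAction action;
+    while ((action = pendingActions.poll()) != null) {
+      // Attach the action to the outgoing heartbeat message (omitted).
+    }
+  }
+}
+```
+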
+### Proposal
+This is the proposal, explained via a diagram.
+
+![full-volume-handling.png](../../static/full-volume-handling.png)
+
+Throttling is required so that the Datanode doesn't cause a heartbeat storm when multiple write calls each detect full volumes.
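+
+A minimal throttling sketch is shown below. The names here are hypothetical (this is not the actual Datanode heartbeat API); it only illustrates the idea of firing at most one out-of-cycle heartbeat per volume per configurable interval.
+
+```java
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+public class FullVolumeHeartbeatTrigger {
+  private final long minIntervalMillis;
+  // Last time an immediate heartbeat was triggered for each volume.
+  private final Map<String, Long> lastTriggered = new ConcurrentHashMap<>();
+
+  public FullVolumeHeartbeatTrigger(long minIntervalMillis) {
+    this.minIntervalMillis = minIntervalMillis;
+  }
+
+  /** Called from the write path when a volume is detected as full. */
+  public void onVolumeFull(String volumeId) {
+    long now = System.currentTimeMillis();
+    Long previous = lastTriggered.get(volumeId);
+    if (previous != null && now - previous < minIntervalMillis) {
+      return; // Throttled: a heartbeat was already triggered recently.
+    }
+    // Atomic update so only one of several concurrent writers fires.
+    boolean won = (previous == null)
+        ? lastTriggered.putIfAbsent(volumeId, now) == null
+        : lastTriggered.replace(volumeId, previous, now);
+    if (won) {
+      triggerImmediateHeartbeat(volumeId);
+    }
+  }
+
+  private void triggerImmediateHeartbeat(String volumeId) {
+    // Placeholder: wake the heartbeat thread so the next heartbeat carries
+    // the latest storage report.
+  }
+}
+```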

Review Comment:
   These are the activities that use DN volume space:
   a. creating a new container (5 GB reserved)
   b. writing a new chunk
   c. downloading and importing a container (10 GB reserved)
   d. container metadata RocksDB (no reservation)
   
   IIRC, when SCM allocates a new pipeline, it checks whether the DN has enough space to hold the pipeline metainfo (Raft, 1 GB) and one container (5 GB). A volume full report can help SCM become aware of this quickly. Maybe send a full storage report instead of a single-volume full report.
   
   As for the proposal to carry the list of containers on this volume in the disk full report: because an open container has already reserved space in the volume (and the same holds for container replication imports), these open containers may still have room for new blocks even though the volume is full, as long as the total container size doesn't exceed 5 GB. So immediately closing all open containers on the full volume might not be a necessary step. But closing open containers whose size exceeds 5 GB is one thing we can do.
   And when a disk is full, the DN is responsible for not allocating new containers on that volume and for not picking it as the target volume for container import.
   
   So overall my suggestion is:
   a. carry open container state in the periodic storage report
   b. when one disk is full, immediately send a full storage report with open container state to SCM, out of cycle
   c. make sure these kinds of reports are handled with priority in SCM. We may consider introducing a new port in SCM just for DN heartbeats with storage reports; currently all reports are sent to one single port.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@ozone.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

