jinhyukify commented on code in PR #7076:
URL: https://github.com/apache/hbase/pull/7076#discussion_r2146347709
##########
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java:
##########
@@ -241,6 +238,15 @@ public float getHeapOccupancyPercent() {
: this.heapOccupancyPercent;
}
+  private boolean isHeapMemoryUsageExceedingLimit(float memStoreFraction,
+    float blockCacheFraction) {
+    int memStorePercent = (int) (memStoreFraction * CONVERT_TO_PERCENTAGE);
+    int blockCachePercent = (int) (blockCacheFraction * CONVERT_TO_PERCENTAGE);
+    int minFreeHeapPercent = (int) (this.minFreeHeapFraction * CONVERT_TO_PERCENTAGE);
+
+    return memStorePercent + blockCachePercent + minFreeHeapPercent > CONVERT_TO_PERCENTAGE;
Review Comment:
I also gave this some thought.
At first, I assumed there must be a reason for converting to int before
doing the comparison, since it's done that way in some methods in
[MemoryUtil](https://github.com/apache/hbase/pull/7076/files#diff-6e3b744afc3d42d8708b76312e06419665aac8132f3a96e8307ba252b2f91e12L88-L94)
as well.
Initially, I thought it might be to avoid floating-point precision issues.
However, on closer inspection, I'm not sure casting to int provides any
real advantage - if anything, it introduces error through truncation.
In effect, this approach discards everything from the third decimal place
onward, which could lead to subtle changes in behavior for users relying on
specific configuration values.
For instance, with memstore=0.409, blockcache=0.4, and freeHeap=0.2, the
exact sum is 1.009, which is over the limit, yet after truncation the check
compares 40 + 40 + 20 = 100 against 100 and does not flag it - so a
float-based comparison and the int-cast comparison disagree on whether this
configuration exceeds the limit.
What are your thoughts on this? 😄
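
To make the concern concrete, here is a small standalone sketch (not HBase
code): `exceedsWithIntCast` mirrors the int-cast check from the diff above,
while `exceedsWithFloat` is a hypothetical float-based alternative for
comparison, not the actual previous implementation.

```java
// Demonstrates how truncating fractions to int percentages can miss an
// over-commit that a direct float comparison would catch.
public class TruncationDemo {
  static final int CONVERT_TO_PERCENTAGE = 100;

  // Mirrors the int-cast check in the diff above.
  static boolean exceedsWithIntCast(float memStore, float blockCache, float freeHeap) {
    int memStorePercent = (int) (memStore * CONVERT_TO_PERCENTAGE);
    int blockCachePercent = (int) (blockCache * CONVERT_TO_PERCENTAGE);
    int minFreeHeapPercent = (int) (freeHeap * CONVERT_TO_PERCENTAGE);
    return memStorePercent + blockCachePercent + minFreeHeapPercent > CONVERT_TO_PERCENTAGE;
  }

  // Hypothetical float-based alternative; the small epsilon absorbs
  // binary floating-point noise so 0.5 + 0.3 + 0.2 is not falsely flagged.
  static boolean exceedsWithFloat(float memStore, float blockCache, float freeHeap) {
    return memStore + blockCache + freeHeap > 1.0f + 1e-6f;
  }

  public static void main(String[] args) {
    // 0.409 + 0.4 + 0.2 = 1.009, i.e. 0.9% over the heap.
    System.out.println(exceedsWithIntCast(0.409f, 0.4f, 0.2f)); // false: 40 + 40 + 20 == 100
    System.out.println(exceedsWithFloat(0.409f, 0.4f, 0.2f));   // true: 1.009 > 1.0
  }
}
```

The two checks disagree on the same configuration: the int cast throws away
the third decimal place of each fraction before summing, so up to ~0.03 of
over-commit (just under 0.01 per operand) can slip through undetected.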
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]