zentol commented on a change in pull request #13640:
URL: https://github.com/apache/flink/pull/13640#discussion_r505409970



##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/metrics/util/MetricUtils.java
##########
@@ -214,6 +219,29 @@ static void instantiateNonHeapMemoryMetrics(final MetricGroup metricGroup) {
                instantiateMemoryUsageMetrics(metricGroup, () -> ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage());
        }
 
+       @VisibleForTesting
+       static void instantiateMetaspaceMemoryMetrics(final MetricGroup parentMetricGroup) {
+               final List<MemoryPoolMXBean> memoryPoolMXBeans = ManagementFactory.getMemoryPoolMXBeans()
+                       .stream()
+                       .filter(bean -> "Metaspace".equals(bean.getName()))
+                       .collect(Collectors.toList());
+
+               if (memoryPoolMXBeans.isEmpty()) {
+                       LOG.warn("No memory pool named 'Metaspace' is present. The '{}' metric group is not going to be instantiated.", METRIC_GROUP_METASPACE_NAME);

Review comment:
       ```suggestion
                        LOG.info("Metaspace metrics will not be exposed because no pool named 'Metaspace' could be found. This could be due to the used JVM.");
   ```
   Users typically don't like warnings they can't do anything about, and the message is relatively cryptic ("what is a metric group? what is the consequence of it not being instantiated?").

##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/metrics/util/MetricUtils.java
##########
@@ -214,6 +219,29 @@ static void instantiateNonHeapMemoryMetrics(final MetricGroup metricGroup) {
                instantiateMemoryUsageMetrics(metricGroup, () -> ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage());
        }
 
+       @VisibleForTesting
+       static void instantiateMetaspaceMemoryMetrics(final MetricGroup parentMetricGroup) {
+               final List<MemoryPoolMXBean> memoryPoolMXBeans = ManagementFactory.getMemoryPoolMXBeans()
+                       .stream()
+                       .filter(bean -> "Metaspace".equals(bean.getName()))
+                       .collect(Collectors.toList());
+
+               if (memoryPoolMXBeans.isEmpty()) {
+                       LOG.warn("No memory pool named 'Metaspace' is present. The '{}' metric group is not going to be instantiated.", METRIC_GROUP_METASPACE_NAME);
+                       return;
+               }
+
+               final MetricGroup metricGroup = parentMetricGroup.addGroup(METRIC_GROUP_METASPACE_NAME);
+               final Iterator<MemoryPoolMXBean> beanIterator = memoryPoolMXBeans.iterator();
+
+               final MemoryPoolMXBean firstPool = beanIterator.next();
+               instantiateMemoryUsageMetrics(metricGroup, firstPool::getUsage);
+
+               if (beanIterator.hasNext()) {
+                       LOG.warn("More than one memory pool named '{}' is present. Only the first pool was used for instantiating the metric.", METRIC_GROUP_METASPACE_NAME);

Review comment:
       I'm fairly sure that this is mostly a theoretical issue; for an MBean to be usable via JMX it requires some unique identification, and according to the MemoryPoolMXBean javadocs the only unique part of the identifier is in fact the name. In other words, multiple pools with this name being present would likely be a bug in the JVM.
   We can keep this check, but it should be on debug imo.
   
   It is also a bit odd to use the metric group name as the pool name, when we used another string for the actual filtering (as in, move "Metaspace" into a constant, use that here, and maybe make METRIC_GROUP_METASPACE_NAME an alias for this constant).
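
   To illustrate the uniqueness point, here is a small standalone sketch (not part of the PR) that lists the memory pool names the running JVM reports and applies the same name-based filter the PR uses. Which pool names appear depends on the JVM and GC in use; the class name `MetaspacePoolCheck` is made up for this example.
   ```java
   import java.lang.management.ManagementFactory;
   import java.lang.management.MemoryPoolMXBean;
   import java.util.List;

   public class MetaspacePoolCheck {
       public static void main(String[] args) {
           List<MemoryPoolMXBean> pools = ManagementFactory.getMemoryPoolMXBeans();

           // Print every pool name the running JVM reports; on HotSpot this
           // typically includes "Metaspace" and "Compressed Class Space".
           for (MemoryPoolMXBean pool : pools) {
               System.out.println(pool.getName());
           }

           // The same name-based filter as in instantiateMetaspaceMemoryMetrics:
           // since the name is the unique part of the MXBean identifier, at most
           // one match is expected on a conforming JVM.
           long metaspacePools = pools.stream()
                   .filter(bean -> "Metaspace".equals(bean.getName()))
                   .count();
           System.out.println("Metaspace pools: " + metaspacePools);
       }
   }
   ```
   On a HotSpot JVM this should report exactly one "Metaspace" pool, which is why the more-than-one branch is essentially unreachable there.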




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

