Hello,

Could someone please help? I'm trying to publish only these three metrics
per task node:
Status.JVM.Memory.Heap.Used
Status.JVM.Memory.Heap.Committed
Status.JVM.Memory.NonHeap.Max
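
If I'm reading the scope docs correctly, with metrics.scope.tm set to
"taskmanager" (from my setup in the forwarded message below), these should
arrive at the StatsD endpoint as gauge lines of the form <name>:<value>|g,
for example (value made up):

            taskmanager.Status.JVM.Memory.Heap.Used:123456789|g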

But with my current settings I see all Flink metrics getting published.
Please let me know if I need to provide any other information.

Thank you!


---------- Forwarded message ---------
From: Diwakar Jha <diwakar.n...@gmail.com>
Date: Tue, Feb 15, 2022 at 1:31 PM
Subject: How to get memory specific metrics for tasknodes
To: user <user@flink.apache.org>


Hello,

I'm running Flink 1.11 on AWS EMR on YARN, in application mode. I'm trying
to access memory metrics (Heap.Max, Heap.Used) per task node in CloudWatch.
I have 50 task nodes, and the setup creates millions of metrics (including
per-operator metrics), even though I only need a few per task node
(Heap.Max, Heap.Used). That is far beyond my current CloudWatch limit, and
I don't need that many metrics anyway.
Could someone please help me get only the task-node memory metrics?
I'm referring to this doc:
https://nightlies.apache.org/flink/flink-docs-release-1.7/monitoring/metrics.html#memory
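
(If I understand the default scope formats correctly, the metric explosion
comes from the per-task and per-operator scopes: under the defaults every
operator subtask reports its own series, e.g.

            <host>.taskmanager.<tm_id>.<job_name>.<operator_name>.<subtask_index>.numRecordsIn

so 50 task nodes times many operators and subtasks adds up quickly.)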

I used the following approach to enable Flink metrics.
1. Enable Flink metrics: copy /opt/flink-metrics-statsd-x.x.jar into the
/lib folder of your Flink distribution.
2. Add the StatsD metric reporter to flink-conf.yaml, pointing it at the
CloudWatch agent's StatsD interface:
            metrics.reporters: stsd
            metrics.reporter.stsd.factory.class: org.apache.flink.metrics.statsd.StatsDReporterFactory
            metrics.reporter.stsd.host: localhost
            metrics.reporter.stsd.port: 8125
3. Set up the task manager scope:
            metrics.scope.tm: taskmanager
4. Set up the CloudWatch agent to publish the metrics:
            "metrics": {
                "namespace": "CustomNamespace/FlinkMemoryMetrics",
                "metrics_collected": {
                    "statsd": {
                        "service_address": ":8125",
                        "metrics_collection_interval": 60,
                        "metrics_aggregation_interval": 300
                    }
                }
            },
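
The only workaround I've been able to think of is a thin subclass of the
StatsD reporter that drops everything except the metrics I want at
registration time. This is just an untested sketch against the Flink 1.11
metrics API (the package and class name are mine, not Flink's); I believe
it would be registered via metrics.reporter.stsd.class (the older
reflection-based option) instead of factory.class, with its jar placed in
/lib next to flink-metrics-statsd:

    package com.example.metrics; // hypothetical package

    import org.apache.flink.metrics.Metric;
    import org.apache.flink.metrics.MetricGroup;
    import org.apache.flink.metrics.statsd.StatsDReporter;

    import java.util.Arrays;
    import java.util.List;

    // Forwards only the three JVM memory metrics to StatsD;
    // every other metric is dropped before it is registered,
    // so it never reaches the CloudWatch agent.
    public class FilteringStatsDReporter extends StatsDReporter {

        // suffixes of the fully scoped metric identifiers to keep
        private static final List<String> KEEP = Arrays.asList(
                "Status.JVM.Memory.Heap.Used",
                "Status.JVM.Memory.Heap.Committed",
                "Status.JVM.Memory.NonHeap.Max");

        @Override
        public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) {
            String id = group.getMetricIdentifier(metricName);
            for (String wanted : KEEP) {
                if (id.endsWith(wanted)) {
                    super.notifyOfAddedMetric(metric, metricName, group);
                    return;
                }
            }
            // all other metrics are silently ignored
        }
    }

Is something along these lines the intended way to do this, or is there a
built-in setting I'm missing?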

Thanks!
