I presume your job/task names contain a space, which is then included in the metrics scope?

You can either configure the metric scope so that the job/task ID is used instead, or create a modified version of the StatsDReporter that filters out additional characters (i.e., override #filterCharacters).
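
For the scope option, switching the name placeholders to their ID counterparts in flink-conf.yaml avoids the spaces, e.g. for the task scope (and analogously for metrics.scope.operator):

    metrics.scope.task: <host>.taskmanager.<tm_id>.<job_id>.<task_id>.<subtask_index>

For the reporter option, an untested sketch of what such a subclass could look like (the class name is just a placeholder; you would point metrics.reporter.stsd.class at it and put the jar on the reporter classpath):

    // Placeholder class name, register it via metrics.reporter.stsd.class.
    public class SpaceFilteringStatsDReporter extends org.apache.flink.metrics.statsd.StatsDReporter {
        @Override
        public String filterCharacters(String input) {
            // Let the stock reporter do its filtering first, then also replace spaces.
            return super.filterCharacters(input).replace(' ', '_');
        }
    }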

When it comes to automatically filtering characters, the StatsDReporter is in a bit of a pickle; different backends have different rules for which characters are allowed, and those rules can also differ from what StatsD itself accepts.
I'm not sure yet what the best solution for this is.

On 21/01/2020 17:18, John Smith wrote:
I think I figured it out. I used netcat to debug, and it looks like the Telegraf StatsD server doesn't support spaces in metric names.
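
(For reference, the raw StatsD datagrams are of the form <name>:<value>|<type>; pointing the reporter at a spare port and listening on it with something like nc -u -l 8126 (flags vary by netcat variant) shows exactly what Flink sends, and a space in the name portion is what trips up Telegraf's parser.)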

On Mon, 20 Jan 2020 at 12:19, John Smith <java.dev....@gmail.com> wrote:

    Hi, running Flink 1.8

    I'm declaring my metric like this:

    invalidList = getRuntimeContext()
            .getMetricGroup()
            .addGroup("MyMetrics")
            .meter("invalidList", new DropwizardMeterWrapper(new com.codahale.metrics.Meter()));

    Then in my code I call:

    invalidList.markEvent();


    On the task nodes I enabled the Influx Telegraf StatsD server, and
    I enabled the reporter on the task nodes with:

    metrics.reporter.stsd.class: org.apache.flink.metrics.statsd.StatsDReporter
    metrics.reporter.stsd.host: localhost
    metrics.reporter.stsd.port: 8125
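
    The Telegraf side is roughly the standard statsd input in
    telegraf.conf (exact options depend on the Telegraf version):

    [[inputs.statsd]]
      protocol = "udp"
      service_address = ":8125"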

    The metrics are being pushed to Elasticsearch. So far I only see
    the Status_JVM_* metrics.

    Do the task-specific metrics come from the Job nodes? I have not
    enabled reporting on the Job nodes yet.
