Hello Averell,

Based on my experience, using the out-of-the-box reporters & collectors needs a little more effort! Of course I haven't tried all of them, but after reviewing some of them I went my own way: writing custom reporters to push metrics into Elasticsearch (the component available in our project, and very flexible). The custom reporters are able to group metrics with some configurable/dynamic parameters (in my case, based on a defined metric name and jobs): https://github.com/reza-sameei/elasticreporter

I think that writing a simple custom reporter may help you :)
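To illustrate the grouping idea, here is a minimal Python sketch (hypothetical, not the actual elasticreporter code, which is Scala; a real Flink reporter would instead implement org.apache.flink.metrics.reporter.MetricReporter). The point is only the aggregation step: collapse per-subtask metrics into one value per job before pushing to the sink, so you have a single document/target per job instead of one per parallel subtask:

```python
from collections import defaultdict

# Hypothetical sketch of the grouping step inside a custom reporter.
# metrics: list of (identifier, value) pairs, where the identifier is assumed
# to look like "jobName.metricName" (the grouping key is configurable in a
# real reporter).
def aggregate_by_job(metrics):
    """Sum per-subtask values into one value per job name."""
    grouped = defaultdict(int)
    for identifier, value in metrics:
        job = identifier.split(".", 1)[0]  # grouping key: job name prefix
        grouped[job] += value
    return dict(grouped)

# Per-subtask counters as the reporter would see them:
subtask_metrics = [
    ("jobA.recordsIn", 10),  # subtask 0
    ("jobA.recordsIn", 15),  # subtask 1
    ("jobB.recordsIn", 7),
]
print(aggregate_by_job(subtask_metrics))  # {'jobA': 25, 'jobB': 7}
```

A reporter like this runs inside each TaskManager, so in practice you either push partial sums and aggregate in the sink (what I do with Elasticsearch), or push to a single gateway that your scraper reads from.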
On Mon, Sep 3, 2018 at 10:52 AM Averell <lvhu...@gmail.com> wrote:
> Hi everyone,
>
> I am trying to publish some counters and meters from my Flink job, to be
> scraped by a Prometheus server. It seems to me that all the metrics that I
> am publishing are done at the task level, so that my Prometheus server
> needs to be configured to scrape from many targets (the number equivalent
> to my max parallelism). And after that, I need to do aggregation at the
> Prometheus server to get the numbers for my whole job.
>
> My question is: is it possible to have metrics at the job level? And can I
> have one single Prometheus target to scrape the data from?
>
> I found this:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Service-discovery-for-Prometheus-on-YARN-td21988.html
> which looks like a "no" answer to my question. However, I still hope for
> some easy-to-use solution.
>
> Thanks and best regards,
> Averell
>
> --
> Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

--
رضا سامعی | Reza Sameei | Software Developer | 09126662695