Hi Hai Zhou,

It's a good idea to implement my own reporter, but I don't think it is the
best solution.
After all, the reporter needs to be configured when the cluster starts. It is
not efficient to update the cluster whenever a new streaming job introduces a
new metric.

Anyway, it is still a workaround for now. Thank you!
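
For reference, reporters are wired in through flink-conf.yaml and are only
picked up when the cluster starts, which is why per-job updates are costly.
A rough sketch of the setup (the class name and port here follow the
PrometheusReporter documentation; adjust to your build):

    # Reporters are registered once in flink-conf.yaml; swapping in a
    # custom reporter means restarting the cluster.
    metrics.reporters: prom
    metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
    metrics.reporter.prom.port: 9249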

Best Regards,
Tony Wei


2017-09-26 19:13 GMT+08:00 Hai Zhou <yew...@gmail.com>:

> Hi Tony,
>
> you could consider implementing your own reporter and, as a trick, converting
> Flink's metrics into the structure that suits your needs.
>
> This is just my personal practice; I hope it helps you.
>
> Cheers,
> Hai Zhou
>
>
> On 26 Sep 2017, at 17:49, Tony Wei <tony19920...@gmail.com> wrote:
>
> Hi,
>
> Recently, I have been using the PrometheusReporter to monitor every metric
> from Flink.
>
> I found that the metric name in Prometheus will map to the identifier from
> User Scope and System Scope [1], and the labels will map to Variables [2].
>
> To monitor the same kind of metric in Prometheus, I would like to use labels
> to differentiate the individual series.
> Under the job/task/operator scope, it works fine for me. However, it is not
> convenient to monitor partition states from the Kafka consumer, because
> I couldn't place the partition id as a tag on each metric. Each partition
> state, such as the current committed offset, becomes a uniquely named metric
> in Prometheus, which makes it hard to monitor them with a visualization tool
> such as Grafana.
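> To illustrate the difference (with hypothetical metric names and values), the
> Prometheus exposition format ends up looking roughly like this:
>
>     # today: one uniquely named time series per partition, hard to query together
>     flink_operator_KafkaConsumer_topic_mytopic_partition_0_currentOffsets 42
>     flink_operator_KafkaConsumer_topic_mytopic_partition_1_currentOffsets 57
>
>     # with tags: one metric name, partition as a label, easy to aggregate in Grafana
>     flink_operator_KafkaConsumer_currentOffsets{partition="0"} 42
>     flink_operator_KafkaConsumer_currentOffsets{partition="1"} 57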
>
> My question is: Is it possible to add tags to a Metric directly, instead of
> using `.addGroup()`?
> If not, could this become a new feature of Flink metrics in the future? Since
> I am not sure how other reporters work, I am afraid it would not be a good
> design to fulfill this requirement for one particular reporter only.
>
> Please advise, and thanks for your help.
>
> Best Regards,
> Tony Wei
>
> [1]: https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/metrics.html#scope
> [2]: https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/metrics.html#list-of-all-variables
>
>
>
