Hi all!

Several users have asked in the past about a Kafka-based metrics reporter,
which would serve both as a natural connector to arbitrary metric storage
systems and as a straightforward way to process Flink metrics downstream.

I think this would be an extremely useful addition, but I would like to hear
what others in the dev community think about it before submitting a proper
proposal.

There are at least 3 questions to discuss here:


*1. Do we want the Kafka metrics reporter in the Flink repo?*
    As it is much more generic than the other metrics reporters already
included, I would say yes. Also, since almost everyone uses Flink with Kafka,
it would be a natural reporter choice for a lot of users.
*2. How should we handle the Kafka dependency of the connector?*
    I think it would be overkill to support multiple Kafka versions here,
so I would use Kafka 2.+, which has the best compatibility and is
future-proof.
*3. What message format should we use?*
    I would go with JSON for readability and broad compatibility.
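To make question 3 a bit more concrete, here is a rough sketch of what a single metric serialized to JSON could look like. The field names and the helper function below are purely illustrative assumptions, not a proposed schema:

```python
import json

def metric_to_json(scope, name, metric_type, value, timestamp_ms):
    """Serialize one metric update into a JSON string for the Kafka topic.

    All field names here are hypothetical, just to illustrate the idea of a
    flat, self-describing JSON record per metric.
    """
    record = {
        "scope": scope,            # e.g. the metric group scope (job, task, ...)
        "name": name,              # metric name within that scope
        "type": metric_type,       # gauge, counter, meter or histogram
        "value": value,
        "timestamp": timestamp_ms, # report time in epoch milliseconds
    }
    return json.dumps(record)

# Example: one counter update as it might appear on the metrics topic.
print(metric_to_json("jobmanager.job", "numRestarts", "counter", 0, 1572000000000))
```

A flat record like this is trivial to parse in any downstream consumer, which is the main argument for JSON over a binary format here.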

There is already a relevant JIRA open for this:
https://issues.apache.org/jira/browse/FLINK-14531

We at Cloudera also promote this as a scalable way of pushing metrics to
other systems, so we would be very happy to contribute an implementation or
to cooperate with others on building it.

Please let me know what you think!

Cheers,
Gyula
