Hi,

What is still unclear to me so far is: what would be the fundamental differences between this Kafka reporter and Flink’s existing Kafka producer? I don't see any yet.
I’ve been thinking about Flink metrics for a while, and the “metric reporter” concept feels a bit redundant to me. As you may already know, Flink has been used to process external metrics at various companies. If you think about it, Flink’s own metric system is no different from those external ones: it is really just another stream source, and metric reporters are just data sinks writing to external storage, with no delivery guarantees or checkpointing. So instead of adding Kafka or other MQ reporters and worrying about message formats (problems already solved by Flink’s sinks), we could generalize and expose Flink’s metric system as a simple built-in stream source, with “metric reporters” becoming customized sinks tailored to this source. Users might even be able to access and process it in a stream environment with the DataStream API. That gives users full flexibility to manipulate Flink metrics with Flink itself, and it is more of an “eat your own dogfood” philosophy.

This seems too good to be true, and I haven’t had time to think through the details. Let me know if I missed anything here.

On Mon, Nov 18, 2019 at 09:51 Yun Tang <myas...@live.com> wrote:

> Hi all
>
> Glad to see this topic in the community. We at Alibaba also implemented a Kafka metrics reporter and extended it to other message queues like Alibaba Cloud Log Service [1] half a year ago. The reason we did not launch a similar discussion is that we previously thought we would only provide a way to report metrics to Kafka. Unlike the currently supported metrics reporters, e.g. InfluxDB and Graphite, which all have an easy-to-use data source in Grafana to visualize metrics, even with a Kafka metrics reporter we would still need another way to consume the data and serve it as a data source for an observability platform, and this would differ from company to company.
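To make the stream-source idea above concrete, a user-facing version of such a built-in metrics source might hypothetically look like the sketch below. Note that `getMetricsStream()` and the `MetricEvent` type do not exist in Flink's API; they are purely illustrative names for the proposal:

```java
// Hypothetical sketch only: getMetricsStream() and MetricEvent are NOT
// part of Flink's API; they illustrate what the proposal might feel like.
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

// The job's own metrics exposed as an ordinary stream source.
DataStream<MetricEvent> metrics = env.getMetricsStream();

// A "metric reporter" then degenerates into a plain sink: filter,
// transform, and write metrics with the same operators as any other data.
Properties kafkaProperties = new Properties();
kafkaProperties.setProperty("bootstrap.servers", "broker:9092");

metrics
        .filter(m -> m.getName().startsWith("numRecords"))
        .map(m -> m.getName() + "=" + m.getValue())
        .addSink(new FlinkKafkaProducer<>("metrics-topic",
                new SimpleStringSchema(), kafkaProperties));
```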
> I think this is the main concern with including this in a popular open-source main repo, and I quite agree with Becket's suggestion to contribute this as a flink-package, where we could offer an end-to-end solution including how to visualize the metrics data.
>
> [1] https://www.alibabacloud.com/help/doc-detail/29003.htm
>
> Best
> Yun Tang
>
> On 11/18/19, 8:19 AM, "Becket Qin" <becket....@gmail.com> wrote:
>
> Hi Gyula,
>
> Thanks for bringing this up. It is a useful addition to have a Kafka metrics reporter. I understand that we already have Prometheus and DataDog reporters in the Flink main repo. However, personally speaking, I would slightly prefer to have the Kafka metrics reporter as an ecosystem project instead of in the main repo, for the following reasons:
>
> 1. To keep core Flink more focused. In general, if a component is more relevant to an external system than to Flink itself, it might be good to keep it as an ecosystem project, and a metrics reporter seems like a good example of that.
> 2. It helps encourage more contributions to the Flink ecosystem instead of giving the impression that anything in the Flink ecosystem must be in the Flink main repo.
> 3. To support our ecosystem project authors, we have launched a website [1] to help the community keep track of and advertise ecosystem projects. It looks like a good place to put the Kafka metrics reporter.
>
> Regarding the message format, while I think using JSON by default is fine, as it does not introduce much external dependency, I wonder if we should make the message format pluggable. Many companies probably already have their own serde format for all their Kafka messages. For example, maybe they would like to just use an Avro record for their metrics instead of introducing a new JSON format. Also, in many cases there could be a lot of metric messages sent by Flink jobs; the JSON format is less efficient and might have too much overhead in that case.
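The pluggable-format suggestion above could be sketched as a small serializer interface with a JSON default. The names `MetricSerializer` and `JsonMetricSerializer` are hypothetical, not existing Flink API, and a real implementation would need proper JSON string escaping:

```java
// Hypothetical pluggable serializer for a Kafka metrics reporter.
// MetricSerializer and JsonMetricSerializer are illustrative names,
// not part of Flink's API.
interface MetricSerializer {
    byte[] serialize(String scope, String name, double value, long timestamp);
}

// A minimal JSON default using only the JDK. A production version would
// use a JSON library and escape scope/name; an Avro- or protobuf-based
// implementation could be swapped in behind the same interface.
class JsonMetricSerializer implements MetricSerializer {
    @Override
    public byte[] serialize(String scope, String name, double value, long timestamp) {
        String json = String.format(
                "{\"scope\":\"%s\",\"name\":\"%s\",\"value\":%s,\"timestamp\":%d}",
                scope, name, value, timestamp);
        return json.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
}
```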
> Thanks,
>
> Jiangjie (Becket) Qin
>
> [1] https://flink-packages.org/
>
> On Mon, Nov 18, 2019 at 3:30 AM Konstantin Knauf <konstan...@ververica.com> wrote:
>
> > Hi Gyula,
> >
> > thank you for proposing this. +1 for adding a KafkaMetricsReporter. In terms of the dependency, we could go a similar route as the "universal" Flink Kafka connector, which to my knowledge always tracks the latest Kafka version as of the Flink release and relies on the compatibility of the underlying KafkaClient. JSON sounds good to me.
> >
> > Cheers,
> >
> > Konstantin
> >
> > On Sun, Nov 17, 2019 at 1:46 PM Gyula Fóra <gyf...@apache.org> wrote:
> >
> > > Hi all!
> > >
> > > Several users have asked in the past about a Kafka-based metrics reporter, which could serve as a natural connector between arbitrary metric storage systems and a straightforward way to process Flink metrics downstream.
> > >
> > > I think this would be an extremely useful addition, but I would like to hear what others in the dev community think about it before submitting a proper proposal.
> > >
> > > There are at least 3 questions to discuss here:
> > >
> > > *1. Do we want the Kafka metrics reporter in the Flink repo?*
> > > As it is much more generic than the other metrics reporters already included, I would say yes. Also, as almost everyone uses Flink with Kafka, it would be a natural reporter choice for a lot of users.
> > > *2. How should we handle the Kafka dependency of the connector?*
> > > I think it would be overkill to add different Kafka versions here, so I would use Kafka 2.+, which has the best compatibility and is future proof.
> > > *3. What message format should we use?*
> > > I would go with JSON for readability and compatibility.
> > >
> > > There is a relevant JIRA open for this already.
> > > https://issues.apache.org/jira/browse/FLINK-14531
> > >
> > > We at Cloudera also promote this as a scalable way of pushing metrics to other systems, so we are very happy to contribute an implementation or cooperate with others on building it.
> > >
> > > Please let me know what you think!
> > >
> > > Cheers,
> > > Gyula
> >
> > --
> >
> > Konstantin Knauf | Solutions Architect
> >
> > +49 160 91394525
> >
> > Follow us @VervericaData Ververica <https://www.ververica.com/>
> >
> > --
> >
> > Join Flink Forward <https://flink-forward.org/> - The Apache Flink Conference
> >
> > Stream Processing | Event Driven | Real Time
> >
> > --
> >
> > Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
> >
> > --
> > Ververica GmbH
> > Registered at Amtsgericht Charlottenburg: HRB 158244 B
> > Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji (Tony) Cheng