Thank you Chesnay. Good to know there are a few wrappers available to get the best of both worlds. I may mostly go without piggybacking for now, to have more control and to learn, but I will keep an eye out for any new benefits piggybacking would give me in the future. The UDF point looks like a deal breaker, so I will spend some more time understanding it (we can get Flink's runtime context using this <https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/api/common/functions/RichFunction.html#getRuntimeContext--> inside the UDF, so by 'variables' you must have meant the metrics object that gets passed around).
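For my own reference, this is roughly what I had in mind for registering a custom metric from inside a UDF via the runtime context (an untested sketch against the Flink 1.1 metrics API; the operator and metric names are just placeholders):

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Counter;

    public class EventCounter extends RichMapFunction<String, String> {

        private transient Counter counter;

        @Override
        public void open(Configuration parameters) {
            // The RuntimeContext exposes this operator's metric group,
            // so the counter is reported through Flink's metric system.
            this.counter = getRuntimeContext()
                    .getMetricGroup()
                    .counter("eventsSeen");
        }

        @Override
        public String map(String value) {
            counter.inc();
            return value;
        }
    }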
@Sumit, adding backend writers (reporters) is just as simple in DropWizard as well. Thanks for bringing it up, though.

Thanks,
Eswar.

On Tue, Sep 20, 2016 at 11:33 PM, Chawla,Sumit <sumitkcha...@gmail.com> wrote:

> In addition, it supports enabling multiple Reporters. You can have the same
> data pushed to multiple systems. Plus it's very easy to write a new reporter
> for doing any customization.
>
>
> Regards
> Sumit Chawla
>
>
> On Tue, Sep 20, 2016 at 2:10 AM, Chesnay Schepler <ches...@apache.org>
> wrote:
>
>> Hello Eswar,
>>
>> as far as I'm aware, the general structure of Flink's metric system is
>> rather similar to DropWizard. You can use DropWizard metrics by creating a
>> simple wrapper; we even ship one for Histograms. Furthermore, you can also
>> use DropWizard reporters, you only have to extend the DropWizardReporter
>> class, essentially providing a factory method for your reporter.
>>
>> Using Flink's infrastructure provides the following benefits:
>> * better resource usage, as only a single reporter instance per
>> taskmanager exists
>> * access to system metrics
>> * namespace stuff; you cannot access all variables yourself from a UDF
>> without modifying the source of Flink; whether this is an advantage is of
>> course dependent on what you are interested in
>>
>> Regards,
>> Chesnay
>>
>>
>> On 20.09.2016 08:29, Eswar Reddy wrote:
>>
>> Hi,
>>
>> I see Flink supports built-in metrics to monitor various components of
>> Flink. In addition, one can register application-specific (custom) metrics
>> with Flink's built-in metrics infra. The problem with this is that the user
>> has to develop his custom metrics using Flink's metrics framework/API rather
>> than a generic framework such as DropWizard. Alternatively, the user can follow
>> this
>> <http://www.michael-noll.com/blog/2013/11/06/sending-metrics-from-storm-to-graphite/#high-level-approach>
>> approach, where his DropWizard metrics push code is co-located with the actual
>> app code within each Task and metrics are pushed directly to a backend
>> writer (say, Graphite) from each Task.
>>
>> In this alternative, I am aware of having to handle mapping the spatial
>> granularity of Flink's run-time to metrics namespaces, but doing that myself
>> should not be a big effort. Fault tolerance comes automatically since the app
>> code and the metrics push code are co-located in the Task. Is there anything else
>> Flink's metrics infra handles automatically? Based on this I'd weigh using
>> good old DropWizard vs the Flink-specific metrics framework.
>>
>> Finally, I guess the feasibility of an automatic dropwizard-to-flinkmetrics
>> translation utility could be checked out, but I would like to first
>> understand the additional benefits of using Flink's infra for custom metrics.
>>
>> Thanks,
>> Eswar.
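P.S. For completeness, my understanding of the histogram wrapper Chesnay mentioned is roughly the following (an untested sketch assuming the flink-metrics-dropwizard dependency is on the classpath; the metric name and reservoir size are placeholders):

    import com.codahale.metrics.SlidingWindowReservoir;
    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.dropwizard.metrics.DropwizardHistogramWrapper;
    import org.apache.flink.metrics.Histogram;

    public class LatencyTracker extends RichMapFunction<Long, Long> {

        private transient Histogram histogram;

        @Override
        public void open(Configuration parameters) {
            // Build a plain DropWizard histogram and hand it to Flink via the
            // wrapper, so it gets reported by whichever reporters are configured.
            com.codahale.metrics.Histogram dropwizardHistogram =
                    new com.codahale.metrics.Histogram(new SlidingWindowReservoir(500));

            this.histogram = getRuntimeContext()
                    .getMetricGroup()
                    .histogram("latencyHistogram",
                            new DropwizardHistogramWrapper(dropwizardHistogram));
        }

        @Override
        public Long map(Long value) {
            histogram.update(value);
            return value;
        }
    }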