+1 to what Zhenghua said. I think you're abusing the metrics system.
Rather, just do a stream.keyBy().sum() and then write a Sink to do something
with the data -- for example, push it to your metrics system if you wish.
However, from experience, many metrics systems don't like that sort of
thing (very high metric cardinality).
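For example, here is a minimal sketch of that approach (the input stream
"events" and the empty sink body are illustrative, not from this thread):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

// Assuming "events" is a DataStream<String> of keys; adapt to your event type.
DataStream<Tuple2<String, Long>> counts = events
        .map(key -> Tuple2.of(key, 1L))
        .returns(Types.TUPLE(Types.STRING, Types.LONG))
        .keyBy(t -> t.f0)   // partition by key
        .sum(1);            // running count per key, kept in keyed state

// Do something with the counts, e.g. push them to your metrics backend.
counts.addSink(new SinkFunction<Tuple2<String, Long>>() {
    @Override
    public void invoke(Tuple2<String, Long> value, Context context) {
        // value.f0 is the key, value.f1 is its current count
    }
});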
So what you want is the count of every key?
Why not use a count aggregation?
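For instance, a sketch of a windowed count aggregation (the Event type, the
getKey() accessor, and the one-minute window are placeholders, not from the
original question):

import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

// Emits one count per key per window instead of one metric per key.
events
    .keyBy(e -> e.getKey())
    .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
    .aggregate(new AggregateFunction<Event, Long, Long>() {
        @Override public Long createAccumulator()    { return 0L; }
        @Override public Long add(Event e, Long acc) { return acc + 1; }
        @Override public Long getResult(Long acc)    { return acc; }
        @Override public Long merge(Long a, Long b)  { return a + b; }
    });

If you need the key alongside the count in the output, you can pass a
ProcessWindowFunction as the second argument to aggregate().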
Hi Ken,
Thanks again for your input.
I will wait for the Flink folks to get back to me with a suggestion for
implementing 100K unique counters.
For the time being, I will make the number of counter metrics a configurable
parameter in my application, so the user will know what they are doing.
Hi Gaurav,
I’ve used a few hundred counters before without problems. My concern about >
100K unique counters is that you wind up generating load (and maybe memory
issues) for the JobManager.
E.g., with Hadoop’s metrics system, trying to go much beyond 1000 counters
could cause significant problems.
I want a new counter for every key of my windowed stream, and I want the same
counter to be incremented when the same key appears multiple times in the
incoming events.
So, I would write the code below for every incoming event:
getRuntimeContext().getMetricGroup().counter(myKey).inc();
But the above code fails when the same key comes a second time.
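A common workaround is to register each counter only once and cache it inside
the operator, so a repeated key reuses the already-registered Counter. A
sketch, assuming a RichMapFunction and a placeholder Event type with a
getKey() accessor:

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

public class PerKeyCountingMap extends RichMapFunction<Event, Event> {
    // Cache so each counter name is registered with the metric group only once.
    private transient Map<String, Counter> counters;

    @Override
    public void open(Configuration parameters) {
        counters = new HashMap<>();
    }

    @Override
    public Event map(Event event) {
        counters.computeIfAbsent(event.getKey(),
                k -> getRuntimeContext().getMetricGroup().counter(k)).inc();
        return event;
    }
}

Note that this still creates one metric per unique key, so Ken's concern about
JobManager load with ~100K counters still applies.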