Aha! This is almost certainly it. I remembered thinking something like this
might be a problem. I'll need to change the deployment a bit to add this (not
straightforward to edit the YAML in my case), but thanks!
On Sun, Mar 24, 2019 at 10:01 AM dawid wrote:
Padarn Wilson-2 wrote:
> I am running Flink 1.7.2 on Kubernetes in a setup with the task manager and
> job manager separate.
>
> I'm having trouble seeing the metrics from my Flink job in the UI
> dashboard. Actually I'm using the Datadog reporter to expose most of my
> metrics, but latency tracking does not seem to be exported.
When downloading the latest 1.7.2 and extracting it on (a free-tier) Amazon EC2
instance, the daemon (./bin/start-cluster.sh) reports 0 task managers, 0 task
slots, and 0 available task slots, out of the box. All jobs fail. All ports are
open to all traffic. Can anyone tell me what I missed?
Thanks David. I cannot see the metrics there, so let me play around a bit
more and make sure they are enabled correctly.
On Sat, Mar 23, 2019 at 9:19 PM David Anderson wrote:
> I have done this (actually I do it in my flink-conf.yaml), but I am not
> seeing any metrics at all in the Flink UI,
> let alone the latency tracking. The latency tracking itself does not seem
> to be exported to datadog (should it be?)
The latency metrics are job metrics, and are not shown in the Flink web UI.
Because latency tracking is expensive, it is turned off by default. You
turn it on by setting the interval; that looks something like this:
env.getConfig().setLatencyTrackingInterval(1000);
The full set of configuration options is described in the docs:
https://ci.apache.org/projects/flink/fl
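Something like the following (untested, just a minimal sketch of where that
call goes; the 1000 ms interval is only an example):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LatencyTrackingJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Emit a LatencyMarker every 1000 ms; latency tracking stays off
        // while the interval is <= 0 (the default).
        env.getConfig().setLatencyTrackingInterval(1000);

        // A trivial pipeline just so the sketch runs end to end.
        env.fromElements(1, 2, 3).print();

        env.execute("latency-tracking-example");
    }
}

I believe the same interval can also be set cluster-wide in flink-conf.yaml via
metrics.latency.interval (in milliseconds), if you prefer not to touch the job
code.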
Hi User,
I am running Flink 1.7.2 on Kubernetes in a setup with the task manager and
job manager separate.
I'm having trouble seeing the metrics from my Flink job in the UI
dashboard. Actually I'm using the Datadog reporter to expose most of my
metrics, but latency tracking does not seem to be exported.
Well... it turned out I was registering millions of timers by accident,
which was why garbage collection was blowing up. Oops. Thanks for your help
again.
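For anyone who hits the same thing: one pattern that helps (just a sketch, not
necessarily what I ended up doing) is to coalesce timers inside the
KeyedProcessFunction's processElement, since Flink keeps at most one timer per
key and timestamp:

// Inside processElement(...) of a KeyedProcessFunction:
long target = ctx.timerService().currentProcessingTime() + 60_000L;
// Round down to the minute so every element of this key seen within the same
// minute registers the same timestamp, i.e. the same single timer.
long coalesced = target - (target % 60_000L);
ctx.timerService().registerProcessingTimeTimer(coalesced);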
On Wed, Mar 6, 2019 at 9:44 PM Padarn Wilson wrote:
> Thanks a lot for your suggestion. I'll dig into it and update the mailing
> list if I find anything.
Any idea what I should do to overcome this?
On Wed, Mar 20, 2019 at 7:17 PM Avi Levi wrote:
> Hi Andrey,
> I am testing a Filter operator that receives a key from the stream and
> checks whether it is a new one or not. If it is new, it keeps it in state
> and fires a timer; all of that is done using the ProcessFunction.
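(For context, roughly the shape of the operator described above; this is only
a sketch, not the actual code. It assumes a KeyedProcessFunction over a stream
keyed by the value itself, and the names and the 24-hour cleanup timer are
made up.)

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Emits a key only the first time it is seen; a single cleanup timer per key
// is registered so the "seen" flag is eventually cleared again.
public class FirstSeenFilter extends KeyedProcessFunction<String, String, String> {

    private transient ValueState<Boolean> seen;

    @Override
    public void open(Configuration parameters) {
        seen = getRuntimeContext().getState(
                new ValueStateDescriptor<>("seen", Boolean.class));
    }

    @Override
    public void processElement(String key, Context ctx, Collector<String> out)
            throws Exception {
        if (seen.value() == null) {
            seen.update(true);
            // Register the cleanup timer only on the first occurrence of the
            // key, so repeated elements do not pile up extra timers.
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + 24 * 60 * 60 * 1000L);
            out.collect(key);
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out)
            throws Exception {
        // Forget the key so it can pass the filter again later.
        seen.clear();
    }
}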