Hi,

Measuring latency is tricky, and you have to be careful about what you measure. Aggregations such as window operators make things even more difficult because you need to decide which timestamp(s) to forward (the smallest? the largest? all of them?). Depending on the operation, the measurement code itself might even add to the overall latency. Also, the clocks of the nodes in your cluster might not be perfectly in sync.
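To make #2 from Hequn's reply concrete, here is a rough, untested sketch: stamp every record with the wall-clock time right after it leaves the Kafka source, keep only the record with the smallest stamp per window (so the sink measures the worst case for that window), and compute t2 - t1 at the end. The topic name, Kafka properties, and the key are placeholders, and it assumes Flink 1.5 with the Kafka 0.11 connector. (A config sketch for the built-in latency tracking, #1, is appended after the quoted thread.)

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class SourceToSinkLatencySketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");   // placeholder
        props.setProperty("group.id", "latency-sketch");            // placeholder

        DataStream<String> source = env.addSource(
                new FlinkKafkaConsumer011<>("transactions", new SimpleStringSchema(), props));

        // t1: attach the wall-clock time right after the record enters the job
        DataStream<Tuple2<String, Long>> stamped = source
                .map(v -> Tuple2.of(v, System.currentTimeMillis()))
                .returns(Types.TUPLE(Types.STRING, Types.LONG));

        stamped
                .keyBy(0)                                 // placeholder key: the payload itself
                .timeWindow(Time.seconds(10))
                // keep the record with the smallest t1, i.e. forward the oldest element of the window
                .reduce((a, b) -> a.f1 <= b.f1 ? a : b)
                // t2 - t1: latency in milliseconds from source to this point
                .map(t -> Tuple2.of(t.f0, System.currentTimeMillis() - t.f1))
                .returns(Types.TUPLE(Types.STRING, Types.LONG))
                .print();                                 // stand-in for the real sink

        env.execute("source-to-sink latency sketch");
    }
}

Whether you forward the smallest, the largest, or all timestamps through the window depends on what you want to measure; the reduce above keeps the oldest one, so the reported number is an upper bound for that window.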
Best,
Fabian

2018-06-26 4:00 GMT+02:00 antonio saldivar <ansal...@gmail.com>:

> Thank you very much
>
> I already did #2, but at the moment I print the output, since I am using a
> trigger alert to evaluate the window, it replaces the toString values with
> null or 0 and only prints the ones saved in my accumulator and the keyBy
> value.
>
> On Mon, Jun 25, 2018, 9:22 PM Hequn Cheng <chenghe...@gmail.com> wrote:
>
>> Hi antonio,
>>
>> I see two options to solve your problem.
>> 1. Enable the latency tracking[1]. But you have to pay attention to its
>> mechanism; for example, a) the sources only *periodically* emit a special
>> record, and b) the latency markers do not account for the time user
>> records spend in operators, as they bypass them.
>> 2. Add a time field to each of your records. Each time a record comes in
>> from the source, write down the time (t1), so that you can get the
>> latency at the sink (t2) as t2 - t1.
>>
>> Hope this helps.
>> Hequn
>>
>> [1] https://ci.apache.org/projects/flink/flink-docs-master/monitoring/metrics.html#latency-tracking
>>
>> On Tue, Jun 26, 2018 at 5:23 AM, antonio saldivar <ansal...@gmail.com>
>> wrote:
>>
>>> Hello
>>>
>>> I am trying to measure the latency of each transaction traveling across
>>> the system. As a DataSource I have a Kafka consumer, and I would like to
>>> measure the time it takes from the Source to the Sink. Does anyone have
>>> an example?
>>>
>>> Thank you
>>> Best Regards
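For completeness, a minimal sketch of enabling the built-in latency tracking from option #1 in Hequn's reply (assuming Flink 1.5; the interval value is only an example). The caveats quoted above still apply: the markers are emitted periodically and bypass user functions, so they do not measure per-record end-to-end latency.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Emit a latency marker from each source every 1000 ms (illustrative value).
// The resulting latency distributions show up in the job's operator metrics.
env.getConfig().setLatencyTrackingInterval(1000L);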