Re: owasp-dependency-check is flagging flink 1.13 for scala 2.12.7

2021-07-03 Thread Debraj Manna
Thanks again for replying. Can you please provide a bit more explanation about flink-hadoop-fs? It is coming in via flink-streaming; the relevant part of the dependency tree looks like the below. How can I use a different version of Hadoop in this case?
+- org.apache.flink:flink-streaming-java_2.12:jar:1.13.1:
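Since Flink expects the user to provide Hadoop rather than bundling it, one way to pull in a newer version under Maven is to declare it yourself. A minimal sketch, assuming a Maven build; the hadoop-client version shown is illustrative, not a recommendation from this thread:

    <!-- pom.xml: declare the Hadoop client you actually run against, so the
         compile-time version matches the (newer, patched) runtime version.
         With scope "provided" it is not packaged into the job jar; at runtime
         the cluster's Hadoop is picked up, e.g. via HADOOP_CLASSPATH. -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>3.2.2</version><!-- illustrative -->
      <scope>provided</scope>
    </dependency>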

Re: How to calculate how long an event stays in flink?

2021-07-03 Thread JING ZHANG
Hi Xiuming, > However, I am not sure if this option meets my need - is it possible to obtain only the whole time spent between the source and the sink, without the detailed time spent on each operator? In the framework, it is not possible to skip operators on the path. Just like `watermark`, `Latenc
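For the source-to-sink measurement without per-operator detail, a common workaround is to stamp each record as it enters the pipeline and compute the elapsed time in the sink. A minimal Java sketch; the Stamped wrapper, field names, and the stand-in source are hypothetical, not from this thread:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.SinkFunction;

    public class EndToEndLatencySketch {
        // Hypothetical wrapper carrying the ingestion timestamp with the payload.
        public static class Stamped {
            public String payload;
            public long ingestMillis;
            public Stamped() {}
            public Stamped(String payload, long ingestMillis) {
                this.payload = payload;
                this.ingestMillis = ingestMillis;
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            // Stamp records as they enter the pipeline (stand-in source).
            DataStream<Stamped> stamped = env
                .fromElements("a", "b", "c")
                .map(new MapFunction<String, Stamped>() {
                    @Override
                    public Stamped map(String s) {
                        return new Stamped(s, System.currentTimeMillis());
                    }
                });

            stamped
                // ... the real job's transformations would sit here ...
                .addSink(new SinkFunction<Stamped>() {
                    @Override
                    public void invoke(Stamped e, Context ctx) {
                        // Whole time between source and sink, no per-operator split.
                        long latencyMillis = System.currentTimeMillis() - e.ingestMillis;
                        // A real job would report this via a metric instead.
                        System.out.println("end-to-end latency ms: " + latencyMillis);
                    }
                });

            env.execute("end-to-end latency sketch");
        }
    }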

Re: Corrupted unaligned checkpoints in Flink 1.11.1

2021-07-03 Thread Alexander Filipchik
Bumping it up: is there any known way to catch it if it happens again? Any logs we should enable? On Thu, Jun 17 2021 at 7:52 AM, Alexander Filipchik wrote: > Did some more digging. > 1) is not an option as we are not doing any clean
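On the logging question, one low-risk option is to raise the log level for the checkpoint coordinator package. A sketch for the log4j2-style log4j.properties that Flink 1.11+ ships with; the package name is real, the logger id "checkpointing" is arbitrary:

    # log4j.properties (log4j2 syntax): surface checkpoint lifecycle detail,
    # including subsumption and state discard/cleanup activity.
    logger.checkpointing.name = org.apache.flink.runtime.checkpoint
    logger.checkpointing.level = DEBUG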

Re: owasp-dependency-check is flagging flink 1.13 for scala 2.12.7

2021-07-03 Thread Chesnay Schepler
The Kafka one is incorrect because the 1.13.1 connector relies on Kafka 2.4.1. Whether the hadoop-fs ones are relevant for you depends entirely on which Hadoop version you are using, because we expect the user to provide Hadoop (and you can use later and more secure versions if you wish). IOW
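For the incorrect Kafka hit specifically, owasp-dependency-check supports a suppression file for known false positives. A hedged sketch; the CVE id is a placeholder since the flagged id is not quoted in the thread, and the Scala-suffixed artifact id is assumed from the dependency tree above:

    <?xml version="1.0" encoding="UTF-8"?>
    <suppressions xmlns="https://jeremylong.github.io/DependencyCheck/dependency-suppression.1.3.xsd">
      <suppress>
        <notes>False positive: the 1.13.1 connector relies on Kafka 2.4.1</notes>
        <packageUrl regex="true">^pkg:maven/org\.apache\.flink/flink\-connector\-kafka_2\.12@.*$</packageUrl>
        <cve>CVE-XXXX-XXXXX</cve><!-- placeholder -->
      </suppress>
    </suppressions>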

Re: owasp-dependency-check is flagging flink 1.13 for scala 2.12.7

2021-07-03 Thread Debraj Manna
Thanks for replying. But I am also observing the following being flagged for *flink-hadoop-fs-1.13.1*:
- *CVE-2016-5001*
- *CVE-2017-3161*
- *CVE-2017-3162

Re: Memory Usage - Total Memory Usage on UI and Metric

2021-07-03 Thread bat man
Thanks Ken for the info. This is something that I have done when running Spark batch jobs. However, in this case I really want to understand if there is anything wrong with the job itself. Does the Flink Kafka consumer or some other piece need more memory than I am allocating? Hemant On Fri, Jul 2
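To help rule out allocation rather than the job, the TaskManager memory pools can be sized explicitly in flink-conf.yaml and compared against what the UI shows. A sketch with illustrative numbers, not values recommended in this thread:

    # flink-conf.yaml: explicit budgets make it easier to see which pool runs short.
    taskmanager.memory.process.size: 4096m
    # User code, including the Kafka consumer's fetch buffers, runs in task heap:
    taskmanager.memory.task.heap.size: 2048m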