Understanding watermarks

2020-01-13 Thread Cam Mach
never get watermarks, and we are confused as to what we are seeing and what we should expect. Cam Mach Software Engineer E-mail: cammac...@gmail.com Tel: 206 972 2768
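
The usual first checks for "no watermarks": the job must assign event timestamps and watermarks explicitly, and a single idle source partition will hold the watermark back indefinitely. A minimal sketch using the WatermarkStrategy API from later Flink releases than this thread; the Event type and its timestamp field are hypothetical:

```java
import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

// Hypothetical event type: assumes a POJO with a long epoch-millis field.
WatermarkStrategy<Event> strategy = WatermarkStrategy
        // Tolerate events arriving up to 5 seconds out of order.
        .<Event>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        // Tell Flink where the event timestamp lives.
        .withTimestampAssigner((event, recordTs) -> event.timestamp)
        // Without this, one idle source partition can stall the watermark forever.
        .withIdleness(Duration.ofMinutes(1));

stream.assignTimestampsAndWatermarks(strategy);
```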

Re: Why available task slots are not leveraged for pipeline?

2019-08-12 Thread Cam Mach
0, it generates 60 sub-tasks for each operator, and so it will be too much for one slot to execute at least 60 sub-tasks. I am wondering if there is a way we can set the number of generated sub-tasks differently from the parallelism? Cam Mach Software Engineer E-mail: cammac...@gmail.com Tel: 20
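
For the record, an operator's sub-task count is exactly its parallelism, so there is no separate knob; the lever is per-operator parallelism, which can differ from the job-wide default. A hedged sketch (the operator name is hypothetical):

```java
// Default parallelism for every operator in the job.
env.setParallelism(60);

stream
        .map(new HeavyMapper())
        // Override parallelism for just this operator:
        // 10 sub-tasks instead of the job-wide 60.
        .setParallelism(10);
```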

Re: Why Job Manager die/restarted when Task Manager die/restarted?

2019-08-12 Thread Cam Mach
Hi Zhu, Looks like it's expected; those are the cases that happened to our cluster. Thanks for your response, Zhu. Cam On Sun, Aug 11, 2019 at 10:53 PM Zhu Zhu wrote: > Another possibility is the JM is killed externally, e.g. K8s may kill > JM/TM if it exceeds the res

Why available task slots are not leveraged for pipeline?

2019-08-11 Thread Cam Mach
my slots, right? since I have 13 (tasks) x 5 = 65 sub-tasks? What configuration did I miss in order to leverage all of the available slots for my pipelines? Thanks, Cam
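
Worth noting the slot math here: with Flink's default slot sharing, one slot can hold one sub-task of every operator in the pipeline, so 13 operators at parallelism 5 need only 5 slots (the maximum operator parallelism), not 65. More slots are only used if operators are separated explicitly; a hedged sketch (the group name is hypothetical):

```java
stream
        .map(new HeavyMapper())
        // Put this operator in its own slot sharing group so its sub-tasks
        // no longer share slots with the rest of the pipeline.
        .slotSharingGroup("heavy");
```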

Why Job Manager die/restarted when Task Manager die/restarted?

2019-08-11 Thread Cam Mach
Hello Flink experts, We are running Flink under Kubernetes and see that the Job Manager dies/restarts whenever a Task Manager dies/restarts or they can't connect to each other. Are there any specific configurations/parameters we need to turn on to stop this? Or is this expected? Thanks, Cam
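
A TaskManager loss by itself should only fail and restart the job, not the JobManager; JM restarts usually come from outside, e.g. Kubernetes killing a pod that exceeds its resource limits, as the reply above notes. On the job side, a restart strategy governs recovery from TM loss; a hedged sketch using the 1.x API:

```java
import java.util.concurrent.TimeUnit;
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;

// Retry the job up to 3 times, waiting 10 seconds between attempts,
// instead of failing permanently when a TaskManager disappears.
env.setRestartStrategy(
        RestartStrategies.fixedDelayRestart(3, Time.of(10, TimeUnit.SECONDS)));
```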

Re: Capping RocksDb memory usage

2019-08-09 Thread Cam Mach
wondering what's stopping it? Thanks, Cam On Fri, Aug 9, 2019 at 12:21 AM Yu Li wrote: > Hi Cam, > > Which Flink version are you using? > > Actually I don't think any existing Flink release could make use of the > write buffer manager natively through some configuration mag
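
For context on what the write buffer manager mentioned above does: at the raw RocksDB level it puts one shared cap on all memtables, optionally charging that memory against the block cache so a single number bounds both. A hedged sketch against the RocksDB Java API, independent of any Flink wiring (the sizes are illustrative):

```java
import org.rocksdb.Cache;
import org.rocksdb.DBOptions;
import org.rocksdb.LRUCache;
import org.rocksdb.WriteBufferManager;

// Shared 256 MB block cache.
Cache cache = new LRUCache(256 * 1024 * 1024);

// Cap total memtable memory at 128 MB and charge it against the cache.
WriteBufferManager writeBufferManager =
        new WriteBufferManager(128 * 1024 * 1024, cache);

DBOptions dbOptions = new DBOptions()
        .setWriteBufferManager(writeBufferManager);
```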

Re: Capping RocksDb memory usage

2019-08-08 Thread Cam Mach
We are now just focusing on limiting memory usage from Flink and RocksDB, so Kubernetes won't kill it. Any recommendations or advice are greatly appreciated! Thanks, On Thu, Aug 8, 2019 at 6:57 AM Yun Tang wrote: > Hi Cam > > I think FLINK-7289 [1] might offer you some insights

Re: Capping RocksDb memory usage

2019-08-08 Thread Cam Mach
Thanks for your response, Biao. On Wed, Aug 7, 2019 at 11:41 PM Biao Liu wrote: > Hi Cam, > > AFAIK, that's not an easy thing. Actually it's more like a RocksDB issue. > There is a document explaining the memory usage of RocksDB [1]. It might be > helpful. > > Y

Re: Capping RocksDb memory usage

2019-08-07 Thread Cam Mach
Yes, that is correct. Cam Mach Software Engineer E-mail: cammac...@gmail.com Tel: 206 972 2768 On Wed, Aug 7, 2019 at 8:33 PM Biao Liu wrote: > Hi Cam, > > Do you mean you want to limit the memory usage of the RocksDB state backend? > > Thanks, > Biao /'bɪ.aʊ/

Capping RocksDb memory usage

2019-08-07 Thread Cam Mach
Hello everyone, What is the easiest and most efficient way to cap RocksDB's memory usage? Thanks, Cam
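
In the 1.9-era releases this thread dates from, the practical lever was a custom options factory on the RocksDB state backend, bounding write buffers and block cache per column family. A hedged sketch under that assumption (the sizes are illustrative; later releases added managed-memory settings that handle this automatically):

```java
import org.apache.flink.contrib.streaming.state.OptionsFactory;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

public class BoundedMemoryOptionsFactory implements OptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnFamilyOptions(ColumnFamilyOptions currentOptions) {
        return currentOptions
                // At most two 32 MB memtables per state/column family.
                .setWriteBufferSize(32 * 1024 * 1024)
                .setMaxWriteBufferNumber(2)
                // Bound the uncompressed block cache as well.
                .setTableFormatConfig(
                        new BlockBasedTableConfig().setBlockCacheSize(64 * 1024 * 1024));
    }
}
```

The factory would then be wired in via RocksDBStateBackend#setOptions(new BoundedMemoryOptionsFactory()). Note this bounds each column family, not the process as a whole, which is why the write buffer manager comes up in the replies above.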

Flink best configurations for Production

2019-07-06 Thread Cam Mach
Resources are not a constraint (since we're running Flink on AWS's Kubernetes). Appreciate it if you can help or give us some pointers. Thanks, Cam Mach
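
The thread is truncated, but the usual production baseline is worth sketching: periodic checkpointing with breathing room between checkpoints, and retained checkpoints so a redeployed job can be restored. A hedged example; the intervals are placeholders, not recommendations from the thread:

```java
import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Checkpoint every 60 s, with at least 30 s of breathing room in between.
env.enableCheckpointing(60_000);
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(30_000);

// Retain the latest checkpoint after cancellation so a redeployed job
// (e.g. on Kubernetes) can be restored from it.
env.getCheckpointConfig()
        .enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
```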

Re: Window on stream with timestamps ascending by key

2016-03-24 Thread cam
concept. How is that different from implementing the logic inside a FlatMap operator? Regards, CAM
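
On the window-vs-FlatMap question: a window is essentially managed per-key state plus an event-time timer that fires on the watermark, which is exactly what you would otherwise hand-roll inside a FlatMap or process function. A hedged sketch of the windowed form (the event type, its key field, and merge method are hypothetical):

```java
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

stream
        // One group per key; timestamps are ascending within each key.
        .keyBy(event -> event.key)
        // The assigner buckets elements into 1-minute event-time windows;
        // the watermark, not wall-clock time, decides when a window fires.
        .window(TumblingEventTimeWindows.of(Time.minutes(1)))
        // Incremental aggregation: state per window is one running value,
        // which the framework cleans up when the window closes.
        .reduce((a, b) -> a.merge(b));
```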