… never get watermarks, and we are confused as to what we are seeing and what we should expect.
Cam Mach
Software Engineer
E-mail: cammac...@gmail.com
Tel: 206 972 2768
… 0, it generates 60 sub-tasks for each operator, and so it is too much for one
slot to execute at least 60 sub-tasks. I am wondering whether there is a way to
set the number of generated sub-tasks to something different from the parallelism?
Cam Mach
Software Engineer
E-mail: cammac...@gmail.com
Tel: 20
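For what it's worth, the number of sub-tasks of an operator is simply that operator's own parallelism, so it can be lowered per operator instead of job-wide. Below is a minimal sketch with the DataStream API; the operators and numbers are invented for illustration, not the actual pipeline:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PerOperatorParallelismSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Job-wide default: every operator gets 60 sub-tasks unless overridden.
        env.setParallelism(60);

        env.fromElements("a", "b", "c")
            // This operator is overridden to run as only 4 sub-tasks.
            .map(String::toUpperCase).setParallelism(4)
            // The sink runs as a single sub-task.
            .print().setParallelism(1);

        env.execute("per-operator parallelism sketch");
    }
}

With this, only operators left at the default are split into 60 sub-tasks; the overridden ones produce exactly as many sub-tasks as their own setParallelism value.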
Hi Zhu,
Looks like it's expected. Those are the cases that happened to our cluster.
Thanks for your response, Zhu
Cam
On Sun, Aug 11, 2019 at 10:53 PM Zhu Zhu wrote:
> Another possibility is that the JM is killed externally, e.g. K8s may kill
> JM/TM if it exceeds the resource limit
… my slots, right? Since I have 13 (tasks) x 5 = 65 sub-tasks? What configuration
did I miss in order to leverage all of the available slots for my pipelines?
Thanks,
Cam
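For context, the number of slots a job occupies is governed by slot sharing rather than the total sub-task count: by default one sub-task of every operator can share a slot, so 13 operators at parallelism 5 fit into 5 slots rather than 65. Here is a minimal sketch, with invented operators, of how a slot-sharing group can force part of the pipeline into extra slots:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSharingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // With default slot sharing, everything below runs at parallelism 5
        // and the whole job fits into 5 slots.
        env.setParallelism(5);

        env.fromElements(1, 2, 3, 4, 5)
            .filter(x -> x % 2 == 1)
            // Operators placed in their own slot-sharing group no longer share
            // slots with the rest of the job, so each group needs as many slots
            // as its own maximum parallelism.
            .slotSharingGroup("isolated")
            .print();

        env.execute("slot sharing sketch");
    }
}

Whether isolating operators like this is desirable depends on the workload; raising taskmanager.numberOfTaskSlots or the job parallelism is usually the simpler way to occupy idle slots.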
Hello Flink experts,
We are running Flink under Kubernetes and see that the Job Manager dies/restarts
whenever a Task Manager dies/restarts or they cannot connect to each other. Are
there any specific configurations/parameters that we need to turn on to stop
this? Or is this expected?
Thanks,
Cam
… wondering what is stopping it?
Thanks,
Cam
On Fri, Aug 9, 2019 at 12:21 AM Yu Li wrote:
> Hi Cam,
>
> Which Flink version are you using?
>
> Actually I don't think any existing Flink release could make use of the
> write buffer manager natively through some configuration magic
… We are now just focusing on limiting memory usage from Flink and RocksDB, so
Kubernetes won't kill it.
Any recommendations or advice are greatly appreciated!
Thanks,
On Thu, Aug 8, 2019 at 6:57 AM Yun Tang wrote:
> Hi Cam
>
> I think FLINK-7289 [1] might offer you some insights
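For a rough picture of the direction that ticket points in: until a shared write buffer manager is wired in natively, RocksDB memory can only be bounded indirectly by tightening the per-column-family options. A minimal sketch against the 1.8-era OptionsFactory interface (class name and sizes are invented; treat it as a starting point, not a recommendation):

import org.apache.flink.contrib.streaming.state.OptionsFactory;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

public class BoundedMemoryOptionsFactory implements OptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        // DB-wide options are left untouched in this sketch.
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
        return currentOptions
            // Cap each memtable at 32 MB and keep at most two of them,
            // bounding the write buffer memory per column family.
            .setWriteBufferSize(32 * 1024 * 1024)
            .setMaxWriteBufferNumber(2)
            // Cap the block cache used for reads at 64 MB per column family.
            .setTableFormatConfig(
                new BlockBasedTableConfig().setBlockCacheSize(64 * 1024 * 1024));
    }
}

The factory would be registered via RocksDBStateBackend#setOptions. Note that these limits apply per column family, i.e. per registered state, so total memory still grows with the number of states and operators; a truly global cap is what the write buffer manager work tracked in FLINK-7289 is meant to provide.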
Thanks for your response, Biao.
On Wed, Aug 7, 2019 at 11:41 PM Biao Liu wrote:
> Hi Cam,
>
> AFAIK, that's not an easy thing. Actually it's more like a RocksDB issue.
> There is a document explaining the memory usage of RocksDB [1]. It might be
> helpful.
>
> Y
Yes, that is correct.
Cam Mach
Software Engineer
E-mail: cammac...@gmail.com
Tel: 206 972 2768
On Wed, Aug 7, 2019 at 8:33 PM Biao Liu wrote:
> Hi Cam,
>
> Do you mean you want to limit the memory usage of the RocksDB state backend?
>
> Thanks,
> Biao /'bɪ.aʊ/
>
>
Hello everyone,
What is the easiest and most efficient way to cap RocksDB's memory usage?
Thanks,
Cam
… resources is not a constraint (since we're running Flink on AWS's Kubernetes).
We'd appreciate it if you could help or give us some pointers.
Thanks,
Cam Mach
… concept. How is that different from implementing the logic inside a FlatMap
operator?
Regards,
CAM
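To make the contrast concrete, here is a minimal sketch of the built-in keyed event-time window path (the sample records, key field, and window size are invented, and timestamps are assumed globally ascending for simplicity). The window operator keeps per-key buffers in managed state, fires them when the watermark passes the window end, and cleans them up afterwards; a FlatMap implementation would have to re-create that bookkeeping by hand:

import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class KeyedWindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        // (key, event timestamp in ms, value) -- made-up sample records
        env.fromElements(
                Tuple3.of("sensor-1", 1_000L, 3),
                Tuple3.of("sensor-1", 2_000L, 5),
                Tuple3.of("sensor-2", 1_500L, 7))
            // Timestamps are ascending here, so the built-in extractor suffices.
            .assignTimestampsAndWatermarks(
                new AscendingTimestampExtractor<Tuple3<String, Long, Integer>>() {
                    @Override
                    public long extractAscendingTimestamp(Tuple3<String, Long, Integer> e) {
                        return e.f1;
                    }
                })
            // Group by the key field, then let the window operator buffer and
            // trigger per key instead of coding that logic inside a FlatMap.
            .keyBy(0)
            .window(TumblingEventTimeWindows.of(Time.seconds(5)))
            .sum(2)
            .print();

        env.execute("keyed event-time window sketch");
    }
}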