Re: Avoiding OutOfMemoryError for large batch-jobs

2021-04-26 Thread Thomas Fredriksen(External)
Thank you, this is very informative. We tried reducing the JdbcIO batch size from 1 to 1000, then to 100. In our runs we no longer see the explicit OOM error, but we are now seeing executor heartbeat timeouts, which, from what we understand, are also typically caused by OOM errors. However, the stag
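The batch size being tuned here is presumably the one exposed by JdbcIO's write transform. A minimal Java sketch of that knob follows; the driver, connection string, table and sample values are placeholders and do not appear in the thread.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.jdbc.JdbcIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.KV;

    public class JdbcBatchSizeSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(Create.of(KV.of("a", 1L), KV.of("b", 2L)))
         .apply("WriteToDb", JdbcIO.<KV<String, Long>>write()
             .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
                 "org.postgresql.Driver", "jdbc:postgresql://host:5432/db"))
             .withStatement("INSERT INTO results (key, value) VALUES (?, ?)")
             // Rows buffered per bundle before a flush to the database; this is the
             // value the thread reduces (e.g. down to 100) to shrink per-worker memory.
             .withBatchSize(100)
             .withPreparedStatementSetter((element, statement) -> {
               statement.setString(1, element.getKey());
               statement.setLong(2, element.getValue());
             }));

        p.run().waitUntilFinish();
      }
    }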

Re: Question on late data handling in Beam streaming mode

2021-04-26 Thread Tao Li
Thanks folks. This is really informative! From: Kenneth Knowles: Reuven's answer wil
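In Beam, what happens to late data is controlled by the window's allowed lateness and trigger: records arriving within the allowed lateness after the watermark passes the end of their window are still emitted in late panes, while anything later is dropped. A generic Java sketch of those knobs, with an illustrative window size, lateness and element type that are not taken from the thread:

    import org.apache.beam.sdk.transforms.windowing.AfterPane;
    import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
    import org.apache.beam.sdk.transforms.windowing.FixedWindows;
    import org.apache.beam.sdk.transforms.windowing.Window;
    import org.apache.beam.sdk.values.KV;
    import org.apache.beam.sdk.values.PCollection;
    import org.joda.time.Duration;

    // `events` stands for an unbounded, timestamped input; this helper only applies windowing.
    static PCollection<KV<String, Long>> windowWithLateness(PCollection<KV<String, Long>> events) {
      return events.apply(
          Window.<KV<String, Long>>into(FixedWindows.of(Duration.standardMinutes(5)))
              // Records up to 10 minutes behind the watermark are still accepted as late data;
              // anything arriving later than that is dropped by the runner.
              .withAllowedLateness(Duration.standardMinutes(10))
              // Fire once when the watermark passes the end of the window,
              // then once more for every late element that arrives.
              .triggering(AfterWatermark.pastEndOfWindow()
                  .withLateFirings(AfterPane.elementCountAtLeast(1)))
              .accumulatingFiredPanes());
    }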

Re: Avoiding OutOfMemoryError for large batch-jobs

2021-04-26 Thread Alexey Romanenko
> On 26 Apr 2021, at 13:34, Thomas Fredriksen(External) wrote: > The stack-trace for the OOM: 21/04/21 21:40:43 WARN TaskSetManager: Lost task 1.2 in stage 2.0 (TID 57, 10.139.64.6, executor 3): org.apache.beam.sdk.util.UserCodeException: java.lang.OutOfMemoryError: GC overhead

Re: Avoiding OutOfMemoryError for large batch-jobs

2021-04-26 Thread Thomas Fredriksen(External)
The stack-trace for the OOM: 21/04/21 21:40:43 WARN TaskSetManager: Lost task 1.2 in stage 2.0 (TID 57, 10.139.64.6, executor 3): org.apache.beam.sdk.util.UserCodeException: java.lang.OutOfMemoryError: GC overhead limit exceeded at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeEx
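For readers who hit the same trace: "GC overhead limit exceeded" means the executor JVM is spending nearly all of its time in garbage collection while reclaiming almost nothing, so the usual remedies are more executor memory or smaller per-bundle state (as with the batch size above). These Spark properties are normally passed to spark-submit; the sketch below shows roughly the equivalent when handing the SparkRunner a pre-built context. All values are placeholders and nothing here comes from the thread.

    import org.apache.beam.runners.spark.SparkContextOptions;
    import org.apache.beam.runners.spark.SparkRunner;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkMemorySketch {
      public static void main(String[] args) {
        // Master and deploy mode are expected to come from spark-submit.
        SparkConf conf = new SparkConf()
            .setAppName("beam-batch-job")
            // More heap per executor plus extra off-heap headroom (placeholder sizes).
            .set("spark.executor.memory", "8g")
            .set("spark.executor.memoryOverhead", "2g")
            // Longer heartbeat/network timeouts only mask executors stalled in GC,
            // but can help distinguish a slow executor from a dead one.
            .set("spark.executor.heartbeatInterval", "60s")
            .set("spark.network.timeout", "600s");

        SparkContextOptions options = PipelineOptionsFactory.as(SparkContextOptions.class);
        options.setRunner(SparkRunner.class);
        options.setUsesProvidedSparkContext(true);
        options.setProvidedSparkContext(new JavaSparkContext(conf));

        Pipeline pipeline = Pipeline.create(options);
        // ... build the pipeline, then:
        pipeline.run().waitUntilFinish();
      }
    }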

Re: Avoiding OutOfMemoryError for large batch-jobs

2021-04-26 Thread Alexey Romanenko
Hi Thomas, Could you share the stack trace of your OOM and, if possible, a code snippet of your pipeline? AFAIK, usually only “large” GroupByKey transforms, caused by “hot keys”, may lead to OOM with SparkRunner. — Alexey > On 26 Apr 2021, at 08:23, Thomas Fredriksen(External) wrote: >
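Since the pipeline itself is not shown in the thread, the following is only a generic illustration of the hot-key point: a plain GroupByKey has to buffer every value of a key on one worker, while a combining aggregation (here Sum.longsPerKey(), with assumed key and value types) reduces values locally before the shuffle.

    import org.apache.beam.sdk.transforms.Sum;
    import org.apache.beam.sdk.values.KV;
    import org.apache.beam.sdk.values.PCollection;

    // A plain GroupByKey would buffer every value of a hot key on a single worker:
    //   PCollection<KV<String, Iterable<Long>>> grouped = input.apply(GroupByKey.create());
    // A combining aggregation sums values locally before the shuffle, so a hot key
    // never needs all of its values in memory at once.
    static PCollection<KV<String, Long>> aggregate(PCollection<KV<String, Long>> input) {
      return input.apply(Sum.longsPerKey());
    }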