The single taskmanager / job that uses the StreamingFileSink crashed with a "GC overhead limit exceeded" error.
I've looked for advice on handling this error more broadly, but without luck.
Any suggestions or advice gratefully received.
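In case it helps with diagnosis, a stripped-down version of the sink setup looks roughly like the following (a sketch only: the bucket, prefix, element type and encoder are illustrative rather than our exact job):

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

    public class S3aSinkSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // The StreamingFileSink only finalises part files on checkpoints.
            env.enableCheckpointing(60_000);

            // Row-encoded sink writing newline-delimited strings to S3 via the s3a:// scheme.
            StreamingFileSink<String> sink = StreamingFileSink
                    .forRowFormat(new Path("s3a://example-bucket/example-prefix"),
                                  new SimpleStringEncoder<String>("UTF-8"))
                    .build();

            env.fromElements("a", "b", "c").addSink(sink);
            env.execute("streaming-file-sink-sketch");
        }
    }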
Best regards,
Mark Harris
Best regards,
Mark
From: Piotr Nowojski on behalf of Piotr Nowojski
Sent: 22 January 2020 13:29
To: Till Rohrmann
Cc: Mark Harris; flink-u...@apache.org; kkloudas
Subject: Re: GC overhead limit exceeded, memory full of DeleteOnExit hooks for S3a files
Hi,
This is probably a k
e a factor?
Best regards,
Mark
From: Piotr Nowojski
Sent: 27 January 2020 16:16
To: Cliff Resnick
Cc: David Magalhães; Mark Harris; Till Rohrmann; flink-u...@apache.org; kkloudas
Subject: Re: GC overhead limit exceeded, memory full of DeleteOnExit hooks for S3a files
something else may be taking up the taskmanager's memory which didn't make it into that heap dump. I plan to repeat
the analysis on a heap dump created by -XX:+HeapDumpOnOutOfMemoryError shortly.
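For anyone wanting to capture the same dump, one way to pass that flag to the taskmanagers is via env.java.opts in flink-conf.yaml; a sketch (the dump path is just an example and must already exist on the taskmanager hosts):

    env.java.opts: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/flink-heapdumps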
Best regards,
Mark
From: Piotr Nowojski
Sent: 30 January 2020
progress was
being made. Increasing the memory available to the TM seems to have fixed the
problem.
I think the DeleteOnExit problem will mean it needs to be restarted every few
weeks, but that's acceptable for now.
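For anyone else hitting this, the change amounts to raising the taskmanager memory in flink-conf.yaml; a sketch (the 4g figure is purely illustrative, and on Flink 1.10+ the key is taskmanager.memory.process.size rather than taskmanager.heap.size):

    taskmanager.heap.size: 4096m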
Thanks again,
Mark
From: Mark Harris
Hi Kostas,
Sorry, stupid question: How do I set that for a StreamingFileSink?
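Is it something that gets chained onto the sink builder before build() is called? For example, roughly like this (a sketch; the values are illustrative and the exact builder methods may differ between Flink versions):

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
    import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;

    StreamingFileSink<String> sink = StreamingFileSink
            .forRowFormat(new Path("s3a://example-bucket/example-prefix"),
                          new SimpleStringEncoder<String>("UTF-8"))
            // Builder-level options: hourly buckets and a one-minute bucket check interval.
            .withBucketAssigner(new DateTimeBucketAssigner<>("yyyy-MM-dd--HH"))
            .withBucketCheckInterval(60_000L)
            .build();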
Best regards,
Mark
From: Kostas Kloudas
Sent: 03 February 2020 14:58
To: Mark Harris
Cc: Piotr Nowojski; Cliff Resnick; David Magalhães; Till Rohrmann; flink-u...@apache.org
lkFormat?
Best regards,
Mark
From: Kostas Kloudas
Sent: 03 February 2020 15:39
To: Mark Harris
Cc: Piotr Nowojski; Cliff Resnick; David Magalhães; Till Rohrmann; flink-u...@apache.org
Subject: Re: GC overhead limit exceeded, memory full of DeleteOnExit hooks for S3a files
, or work around it, would be gratefully received.
Best regards,
Mark Harris
Flink 1.3.2 and 1.6.1?
Best regards,
Mark
On Thu, 4 Oct 2018 at 14:03, Aljoscha Krettek wrote:
> Hi,
>
> can you check whether AlertEvent actually has a field called "SCHEMA$"?
> You can do that via
> javap path/to/AlertEvent.class
>
> Best,
> Aljoscha
>
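For reference, a class generated by the Avro compiler as a SpecificRecord should show the schema field in the javap output as something like the following (a sketch of the relevant line only; the rest of the javap output is omitted):

    public static final org.apache.avro.Schema SCHEMA$;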
heir "Exception" tab in the jobmanager.
Is there something that we need to fix in our setup? Are there any
implications around missing metrics etc?
Best regards,
Mark Harris
> would help in solving this issue.
>
> Best Regards,
> Dom.
>
> On Tue, 23 Oct 2018 at 13:21, Mark Harris wrote:
>
>> Hi,
>> We regularly see the following two exceptions in a number of jobs shortly
>> after they have been resumed during our flink cluster