Hello everyone,

I have set up the Flink StateFun runtime on Kubernetes according to this
tutorial: https://github.com/apache/flink-statefun-playground/tree/main/deployments/k8s.
I developed 12 custom StateFun functions in Java and deployed them in the
same way as shown in the tutorial. There are a...
Best,
Zhanghao Chen
From: John Smith
Sent: Thursday, May 23, 2024 22:40
To: Zhanghao Chen
Cc: Biao Geng; user
Subject: Re: Would Java 11 cause Getting OutOfMemoryError: Direct buffer memory?
Based on these two settings...

taskmanager.memory.flink.size: 16384m

>> ...overwrites the JVM's default
>> setting, regardless of the version of the JVM.
>>
>> Best,
>> Zhanghao Chen
>> --
>> From: John Smith
>> Sent: Wednesday, May 22, 2024 22:56
>> To: Biao Geng
>> Cc: user
> There is a similar error in this issue
> <https://ververica.zendesk.com/hc/en-us/articles/4413642980498-Direct-buffer-OutOfMemoryError-when-using-Kafka-Connector-in-Flink>.
>
> Best,
> Biao Geng
>
> John Smith wrote on Thu, May 16, 2024 at 09:01:
I deployed a new cluster, same version as my old cluster (1.14.4); the only
difference is that it uses Java 11, and it seems that after a week of usage
the below exception happens.
The task manager is...
32 GB total

And I have ONLY the following memory settings:

taskmanager.memory.flink.size: 16384m
taskmanager.memo...
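A sketch of the off-heap options that usually matter for this error, with
option names from the Flink 1.14 memory configuration docs; the values are
illustrative, not a recommendation:

taskmanager.memory.flink.size: 16384m
# Direct memory available to user code and connectors (default: 0 bytes)
taskmanager.memory.task.off-heap.size: 512m
# Direct memory reserved for Flink's own runtime (default: 128m)
taskmanager.memory.framework.off-heap.size: 256m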
Hi there,

I am running a local test with:
* source = env.from_collection
* sink = datastream.execute_and_collect
with a map function between, and two very small data points in the
collection.

I'm able to generate an OutOfMemoryError, and due to the nature of this
test using a simple source and sink, plus not having large data size
requirements, I suspect this is due to a bug.

I'm running v1.13.2.
Hi Dan,

Usually broadcast state needs more network buffers. The network buffers used
to exchange data records between tasks are requested from a portion of direct
memory [1], so I think it is possible to get the "Direct buffer memory" OOM
errors in this scenario. Maybe you can try to increase
taskmanager.m...
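The option name above is cut off; presumably the network memory settings are
meant. A sketch of those options as they would appear in flink-conf.yaml
(values illustrative only):

# Fraction of total Flink memory used for network buffers (default: 0.1)
taskmanager.memory.network.fraction: 0.15
taskmanager.memory.network.min: 64mb
taskmanager.memory.network.max: 1gb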
Hi. My team recently added broadcast state to our Flink jobs. We've
started hitting this OOM "Direct buffer memory" error. Is this a common
problem with broadcast state? Or is it likely a different problem?
Thanks! - Dan
Date: Wednesday, 12 January 2022 at 7:43 PM
To: dev
Cc: commun...@flink.apache.org, user@flink.apache.org, Hang Ruan
<ruanhang1...@gmail.com>, Shrinath Shenoy K (sshenoyk), Jayaprakash
Kuravatti (jkuravat), Krishna Singitam (ksingita), Nabhonil Sinha (nasinha)
...apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/joins/

On Wed, 12 Jan 2022 at 11:06, Ronak Beejawat (rbeejawa) wrote:

> Hi Team,
>
> I was trying to implement a Flink SQL API join with 2 tables, and it is
> throwing the error OutOfMemoryError: Java heap space. PFB screenshot for flink...
...end up with an (almost) Cartesian product?

Regards,
Roman
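For anyone landing here from the archives, a minimal sketch of the kind of
rewrite Roman is hinting at, with made-up table names and self-contained
datagen sources: an equi-join key plus a time bound turns an unbounded
regular join into an interval join whose state Flink can expire.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class JoinSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Two small datagen tables standing in for the real inputs.
        tEnv.executeSql(
            "CREATE TABLE a (id INT, ts AS PROCTIME()) "
          + "WITH ('connector' = 'datagen', 'rows-per-second' = '5')");
        tEnv.executeSql(
            "CREATE TABLE b (id INT, ts AS PROCTIME()) "
          + "WITH ('connector' = 'datagen', 'rows-per-second' = '5')");

        // A regular join whose condition matches many rows on both sides
        // degenerates toward a Cartesian product and buffers unbounded state.
        // An equi-join key plus a time bound makes this an interval join,
        // so Flink can expire old entries instead of keeping them forever.
        Table joined = tEnv.sqlQuery(
            "SELECT a.id FROM a JOIN b ON a.id = b.id "
          + "AND a.ts BETWEEN b.ts - INTERVAL '1' MINUTE AND b.ts");

        joined.execute().print();
    }
}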
-- Original message --
> From: "Maciek Próchniak"
> Sent: Friday, April 9, 2021, 3:24 AM
> To: "太平洋" <495635...@qq.com>; "Arvid Heise"; "Yangze Guo"
> Cc: "user"; "guowei.mgw"; "renqschn"
> Subject: Re: Re: period batch job lead to OutOfMemoryError: Metaspace problem
>
> Hi,
>
> Did you put the clickhouse JDBC driver...
Table t = bsTableEnv.sqlQuery(query);
DataStream...

https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/memory/mem_trouble/#outofmemoryerror-metaspace

> > Best,
> > Yangze Guo
--
From: "Arvid Heise"
Sent: Thursday, April 8, 2021, 2:33 PM
To: "Yangze Guo"
Cc: "太平洋" <495635...@qq.com>; "user"; "guowei.mgw"; "renqschn"
Subject: Re: period batch job lead to OutOfM...
>         ... = new FlinkKafkaProducer<>(...);
>         exStream.map(...).addSink(producer);
>
>         env.execute("Prediction Program");
>       } catch (Exception e) {
>         e.printStackTrace();
>       }
>       i++;
>       Thread.sleep(window * 1000);
>     }
>   }
> }
>
> -- Original message --
https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/memory/mem_trouble/#outofmemoryerror-metaspace

> > Best,
> > Yangze Guo
> >
> > On Tue, Apr 6, 2021 at 4:22 PM 太平洋 <495635...@qq.com> wrote:
> From: "Yangze Guo"
> Sent: Tuesday, April 6, 2021, 6:35 PM
> To: "太平洋" <495635...@qq.com>
> Cc: "user"; "guowei.mgw"
> Subject: Re: period batch job lead to OutOfMemoryError: Metaspace problem
>
> > I have tried this method, but the problem still exists.
> How much memory do you configure for it?
>
> > ...is 21 instances of "...
--
> From: "Yangze Guo"
> Sent: Tuesday, April 6, 2021, 4:32 PM
> To: "太平洋" <495635...@qq.com>
> Cc: "user"
> Subject: Re: period batch job lead to OutOfMemoryError: Metaspace problem
>
> I think you can try to increase the JVM metaspace option for
> TaskManagers through taskmanager.memory.jvm-metaspace.size. [1]
>
> [1] https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/memory/mem_trouble/#outofmemoryerror-metaspace
>
> Best,
> Yangze Guo

On Tue, Apr 6, 2021 at 4:22 PM 太平洋 <495635...@qq.com> wrote:
batch job:
read data from s3 by sql, then by some operators, and write data to
clickhouse and kafka. after some time, the task-manager quits with
OutOfMemoryError: Metaspace.

env:
flink version: 1.12.2
task-manager slot count: 5
deployment: standalone kubernetes session
dependencies: ...
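For concreteness, Yangze's suggestion as a flink-conf.yaml entry (the value
is illustrative; the default in this Flink version is 256m):

taskmanager.memory.jvm-metaspace.size: 512m

Note that in a session cluster each submitted job loads user classes through
a fresh classloader, so repeated submissions can grow metaspace until the
underlying leak (often a classloader pinned by something like a JDBC driver
registered from the user jar) is fixed; raising the limit only buys headroom.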
...cover this overhead memory, or set one slot for each task
manager.

Best,
Zhijiang
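As a concrete sketch of that last suggestion, assuming a standalone
flink-conf.yaml (this trades parallelism per TaskManager for isolation):

taskmanager.numberOfTaskSlots: 1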
> --
> From: Akshay Mendole
> Sent: Friday, November 23, 2018, 02:54
> To: trohrmann
> Cc: zhijiang; user; Shreesha Madogaran
Subject: Re: OutOfMemoryError while doing join operation in flink

Hi,
Thanks for your reply. I tried running a simple "group by" on just one
dataset where a few keys occur repeatedly (on the order of milli...
>>> ...to some extent by sharing only one
>>> serializer for all subpartitions [1]; that means we only have one bytes
>>> array overhead at most. This issue is covered in release-1.7.
>>> Currently the best option may be to reduce your record size if possible,
>>> or you can increase the heap size of the task manager container.
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-9913
>>
>> Best,
>> Zhijiang
>>
>> --
>> From: Akshay Mendole
>> Sent: Thursday, November 22, 2018, 13:43
>> To: user
>> Subject: OutOfMemoryError while doing join operation in flink
Hi,
We are converting one of our pig pipelines to flink using apache beam.
The pig pipeline reads two different data sets (R1 & R2) from hdfs,
enriches them, joins them and dumps back to hdfs. The data set R1 is
skewed. In a sense, it has a few keys with a lot of records. When we
converted the pig...
...it; maybe Stefan knows more details, I'll ping him for you.

Thanks, vino.

Edward Rojas wrote on Fri, Sep 7, 2018 at 1:22 AM:
Hello all,

We are running Flink 1.5.3 on Kubernetes with RocksDB as statebackend.
When performing some load testing we got an OutOfMemoryError: native memory
exhausted, causing the job to fail and be restarted.

After the Taskmanager is restarted, the job is recovered from a Checkpoint,
but it...
I used to run Flink SQL in streaming mode with more than 70 SQLs in version
1.4. With so many SQLs loaded, akka.framesize has to be set to 200 MB to
submit the job.

When I am trying to run the job with Flink 1.6.0, the HTTP-based job
submission works perfectly, but an OutOfMemoryError is thrown when tasks are
being deployed.

java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:3236)
        at java.io.ByteArrayOutputStream.grow...
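For reference, the setting described above as it would appear in
flink-conf.yaml (200 MB expressed in bytes):

akka.framesize: 209715200b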
Thanks Stephan, I had a MapFunction using Unirest and that was the origin
of the leak.
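In case it helps others who land on this thread: the usual shape of the fix
is to tie any thread-owning client to the function's lifecycle via
open()/close() in a rich function, rather than letting tasks start pools
that are never stopped. A rough sketch; the ExecutorService stands in for
whatever threads the client library owns, and for the old Unirest 1.x API
the teardown call was the static Unirest.shutdown():

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class EnrichFunction extends RichMapFunction<String, String> {

    // transient: created per task in open(), never shipped with the job graph
    private transient ExecutorService pool;

    @Override
    public void open(Configuration parameters) {
        // Create the thread-owning resource once per task, not per record.
        pool = Executors.newFixedThreadPool(4);
    }

    @Override
    public String map(String value) throws Exception {
        return pool.submit(() -> value.toUpperCase()).get(); // placeholder "enrichment"
    }

    @Override
    public void close() {
        // Without this, every (re)start of the task leaks the pool's threads
        // until the JVM hits "unable to create new native thread".
        if (pool != null) {
            pool.shutdown();
        }
    }
}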
On Tue, Aug 2, 2016 at 7:36 AM, Stephan Ewen wrote:
> My guess would be that you have a thread leak in the user code.
> More memory will not solve the problem, only push it a bit further away.
>
>
> On Mon, Aug 1, 2016 at 9:15 PM, Paulo Cezar wrote:
Hi folks,
I'm trying to run a DataSet program but after around 200k records are
processed a "java.lang.OutOfMemoryError: unable to create new native
thread" stops me.
I'm deploying Flink (via bin/yarn-session.sh) on a YARN cluster with
10 nodes (each with 8 cores) and starting 10 task managers,
Are you facing these issues with the batch or streaming programs?
– Ufuk
On Wed, Mar 16, 2016 at 4:30 PM, Till Rohrmann wrote:
> If the problem is that your JVMs stall too long, then you can also increase
> the akka.ask.timeout configuration value in flink-conf.yaml. That will
> also increase...
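For reference, the timeout Till mentions, as a flink-conf.yaml entry (the
value is illustrative; the default is 10 s):

akka.ask.timeout: 100 s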
Hi Ravinder,
the log of the TM you've sent is the log of the only TM which has not been
disassociated from the JM. Can it be that you simply stopped the cluster
which results in the disassociation events?
Normally, Flink should kill all processes. If you have some processes
lingering around, then
Hi Till,
Log of JobManager
09:55:31,574 WARN org.apache.hadoop.util.NativeCodeLoader
- Unable to load native-hadoop library for your platform... using
builtin-java classes where applicable
09:55:31,742 INFO org.apache.flink.runtime.jobmanager.JobManager
-
Hi Ravinder,

this should not be the relevant log extract. The log says that the TM is
started on port 49653, and the JM log says that the TM on port 4... is
lost. Would you mind sharing the complete JM and TM logs with us?

Cheers,
Till
On Tue, Mar 15, 2016 at 10:54 AM, Ravinder Kaur wrote:
>
Hello Ufuk,
Yes, the same WordCount program is being run.
Kind Regards,
Ravinder Kaur
On Tue, Mar 15, 2016 at 10:45 AM, Ufuk Celebi wrote:
> What do you mean with iteration in this context? Are you repeatedly
> running the same WordCount program for streaming and batch
> respectively?
>
> – Ufuk
Hi Till,
Following is the log file of one of the taskmanagers
09:55:37,071 INFO org.apache.flink.runtime.util.LeaderRetrievalUtils
- Trying to select the network interface and address to use by
connecting to the leading JobManager.
09:55:37,072 INFO org.apache.flink.runtime.util.LeaderRetr
What do you mean with iteration in this context? Are you repeatedly
running the same WordCount program for streaming and batch
respectively?
– Ufuk
On Tue, Mar 15, 2016 at 10:22 AM, Till Rohrmann wrote:
> Hi Ravinder,
>
> could you tell us what's written in the taskmanager log of the failing
> t
Hi Ravinder,

could you tell us what's written in the taskmanager log of the failing
taskmanager? There should be some kind of failure explaining why the
taskmanager stopped working.

Moreover, given that you have 64 GB of main memory, you could easily give
50 GB as heap memory to each taskmanager.

Cheers,
Till
Hello All,

I'm running a simple word count example using the quickstart package from
Flink (0.10.1), on an input dataset of 500 MB. This dataset is a set of
randomly generated words of length 8.

Cluster Configuration:
Number of machines: 7
Total cores: 25
Memory on each: 64 GB

I'm interested...
Great to hear :)

On Thu, Oct 1, 2015 at 11:21 AM, Robert Schmidtke wrote:
I pulled the current master branch and rebuilt Flink completely anyway.
Works like a charm.
On Thu, Oct 1, 2015 at 11:11 AM, Maximilian Michels wrote:
By the way, you might have to use the "-U" flag to force Maven to
update its dependencies: mvn -U clean install -DskipTests
On Thu, Oct 1, 2015 at 10:19 AM, Robert Schmidtke wrote:
Sweet! I'll pull it straight away. Thanks!
On Thu, Oct 1, 2015 at 10:18 AM, Maximilian Michels wrote:
Hi Robert,
Just a quick update: The issue has been resolved in the latest Maven
0.10-SNAPSHOT dependency.
Cheers,
Max
On Wed, Sep 30, 2015 at 3:19 PM, Robert Schmidtke wrote:
Hi Max,
thanks for your quick reply. I found the relevant code and commented it out
for testing, seems to be working. Happily waiting for the fix. Thanks again.
Robert
On Wed, Sep 30, 2015 at 1:42 PM, Maximilian Michels wrote:
Hi Robert,

This is a regression on the current master due to changes in the way
Flink calculates the memory and sets the maximum direct memory size.
We introduced these changes when we merged support for off-heap
memory. This is not a problem in the way Flink deals with managed
memory, just -XX:MaxDirectMemorySize...
Hi everyone,
I'm constantly running into OutOfMemoryErrors and for the life of me I
cannot figure out what's wrong. Let me describe my setup. I'm running the
current master branch of Flink on YARN (Hadoop 2.7.0). My job is an
unfinished implementation of TPC-H Q2 (
https://github.com/robert-schmid
...RpcClient$Connection.run(RpcClient.java:727)
16:57:39,969 WARN  org.apache.hadoop.ipc.RpcClient
- IPC Client (767445418) connection to grips1/130.73.20.14:16020
from hduser: unexpected exception
java.lang.OutOfMemoryError: Java heap space

and then it just closes the zookeeper...

Do you have a suggestion how to avoid this OutOfMemoryError?

Best regards,
Lydia