Re: OutOfMemoryError while doing join operation in flink

2018-11-28 Thread Fabian Hueske
…set one slot for each task manager. Best, Zhijiang. From: Akshay Mendole / Sent: Friday, 23 November 2018, 02:54 / To: trohrmann / Cc: zhijiang; user / …
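
For reference, "one slot for each task manager" is a configuration change rather than a code change. A minimal sketch, assuming a Flink 1.7-era setup; the heap value is a placeholder, and on a real cluster these keys belong in flink-conf.yaml rather than in code:

    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.configuration.Configuration;

    public class OneSlotPerTaskManager {
        public static void main(String[] args) {
            // flink-conf.yaml equivalent (Flink 1.7-era keys):
            //   taskmanager.numberOfTaskSlots: 1    <- one slot, so a single task gets the whole TM heap
            //   taskmanager.heap.size: 4096m        <- placeholder value
            Configuration conf = new Configuration();
            conf.setInteger("taskmanager.numberOfTaskSlots", 1);

            // Local environment that picks up the configuration, just to show where it plugs in.
            ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(conf);
            System.out.println("Default parallelism: " + env.getParallelism());
        }
    }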

Re: OutOfMemoryError while doing join operation in flink

2018-11-27 Thread Akshay Mendole
…cover this overhead memory, or set one slot for each task manager. Best, Zhijiang. From: Akshay Mendole / Sent: Friday, 23 November 2018, 02:54 / To: trohrmann / Cc: zhijiang; user; Shreesha Madogaran / …

Re: OutOfMemoryError while doing join operation in flink

2018-11-23 Thread Ken Krugler
…the best option may be to reduce your record size if possible, or you can increase the heap size of the task manager container. [1] https://issues.apache.org/jira/browse/FLINK-9913 Best, Zhijiang
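
A hedged illustration of the "reduce your record size" suggestion: project the enriched records down to only the fields the join actually needs before they are shuffled. The record shape and field names below are assumptions, not taken from the original pipeline:

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.tuple.Tuple2;

    public class TrimBeforeShuffle {

        // Placeholder for the enriched record; the real pipeline's fields are not known.
        public static class EnrichedRecord {
            public String joinKey;
            public String neededField;
            public String largePayload;   // the bulk we do not want to ship through the shuffle
        }

        // Keep only the join key and the field the downstream step needs, so the records
        // that fill network buffers and sort/join memory stay small.
        public static DataSet<Tuple2<String, String>> trim(DataSet<EnrichedRecord> records) {
            return records
                    .map(r -> Tuple2.of(r.joinKey, r.neededField))
                    .returns(Types.TUPLE(Types.STRING, Types.STRING));
        }
    }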

Re: OutOfMemoryError while doing join operation in flink

2018-11-22 Thread zhijiang
…Sent: Friday, 23 November 2018, 02:54 / To: trohrmann / Cc: zhijiang; user; Shreesha Madogaran / Subject: Re: OutOfMemoryError while doing join operation in flink. Hi, Thanks for your reply. I tried running a simple "group by" on just one dataset where a few keys occur repeatedly (in the order of milli…
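
A small sketch of the kind of "group by" being described, using the Flink DataSet API with placeholder data; the actual job from the thread is not shown. A combinable aggregation such as groupBy(...).sum(...) pre-aggregates per key on the sending side, so a hot key contributes one partial result per sender instead of one record per occurrence:

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;

    public class SkewedGroupBy {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Placeholder input: a few hot keys repeated many times.
            DataSet<Tuple2<String, Long>> pairs = env
                    .fromElements("hot", "hot", "hot", "cold")
                    .map(k -> Tuple2.of(k, 1L))
                    .returns(Types.TUPLE(Types.STRING, Types.LONG));

            // groupBy + sum is combinable: partial sums are built before the shuffle,
            // which keeps the volume sent for hot keys small.
            DataSet<Tuple2<String, Long>> counts = pairs.groupBy(0).sum(1);

            counts.print();   // print() triggers execution in the DataSet API
        }
    }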

Re: OutOfMemoryError while doing join operation in flink

2018-11-22 Thread Akshay Mendole
…issue to some extent by sharing only one serializer for all subpartitions [1]; that means we have at most one byte-array overhead. This issue is covered in release-1.7. Currently the best option may be to reduce your record size if possible, or you…

Re: OutOfMemoryError while doing join operation in flink

2018-11-22 Thread Till Rohrmann
…[1] https://issues.apache.org/jira/browse/FLINK-9913 Best, Zhijiang. From: Akshay Mendole / Sent: Thursday, 22 November 2018, 13:43 / To: user / Subject: OutOfMemoryError while doing join operation in flink…

Re: OutOfMemoryError while doing join operation in flink

2018-11-22 Thread Akshay Mendole
…From: Akshay Mendole / Sent: Thursday, 22 November 2018, 13:43 / To: user / Subject: OutOfMemoryError while doing join operation in flink. Hi, We are converting one of our Pig pipelines to Flink using Apache Beam. The Pig pipeline reads two different data sets…

Re: OutOfMemoryError while doing join operation in flink

2018-11-22 Thread zhijiang
…Best, Zhijiang. From: Akshay Mendole / Sent: Thursday, 22 November 2018, 13:43 / To: user / Subject: OutOfMemoryError while doing join operation in flink. Hi, We are converting one of our Pig pipelines to Flink using Apache Beam. The Pig pipeline reads two…

OutOfMemoryError while doing join operation in flink

2018-11-21 Thread Akshay Mendole
Hi, We are converting one of our Pig pipelines to Flink using Apache Beam. The Pig pipeline reads two different data sets (R1 & R2) from HDFS, enriches them, joins them, and dumps the result back to HDFS. The data set R1 is skewed: it has a few keys with a lot of records. When we converted the Pig…
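
A minimal Flink DataSet sketch of the pipeline shape described above (read two HDFS data sets, key them, join, write back). This is not the original Beam code from the thread; the paths, the tab-separated parsing, and the job name are placeholders:

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;

    public class SkewedJoinSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Placeholder HDFS paths; parsing assumes "key<TAB>payload" lines.
            DataSet<Tuple2<String, String>> r1 = env.readTextFile("hdfs:///path/to/R1")
                    .map(line -> Tuple2.of(line.split("\t", 2)[0], line))
                    .returns(Types.TUPLE(Types.STRING, Types.STRING));
            DataSet<Tuple2<String, String>> r2 = env.readTextFile("hdfs:///path/to/R2")
                    .map(line -> Tuple2.of(line.split("\t", 2)[0], line))
                    .returns(Types.TUPLE(Types.STRING, Types.STRING));

            // Join on the key (tuple field 0). With R1 skewed, the parallel instances that
            // receive the hot keys hold most of the data, which is where the memory
            // pressure discussed in this thread shows up.
            DataSet<Tuple2<Tuple2<String, String>, Tuple2<String, String>>> joined =
                    r1.join(r2).where(0).equalTo(0);

            joined.writeAsText("hdfs:///path/to/output");
            env.execute("R1-R2 join sketch");
        }
    }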