Re: HBase on 4 machine cluster - OutOfMemoryError

2015-07-19 Thread Stephan Ewen
Okay. If you are using very big values, it often helps to tell Flink to reserve less memory for its internal processing. Can you try setting the memory fraction lower, e.g., to 0.5? Have a look at the option "taskmanager.memory.fraction" ( https://ci.apache.org/projects/flink/flink-docs-releas
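
For illustration, a minimal sketch of that change, assuming it is made cluster-wide in conf/flink-conf.yaml; lowering the fraction leaves more of the JVM heap free for user code and large records, and the TaskManagers have to be restarted for it to take effect:

    # conf/flink-conf.yaml
    # Fraction of the free heap that Flink reserves for its managed memory.
    # Lowering it from the default leaves more heap for user code and big records.
    taskmanager.memory.fraction: 0.5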

Re: HBase on 4 machine cluster - OutOfMemoryError

2015-07-18 Thread Lydia Ickler
Hi, yes, it is in one row. Each row represents a patient that has the values of 20,000 different genes stored in one column family and one health-status value in a second column family. > On 18.07.2015 at 15:38, Stephan Ewen wrote: > > This error is in the HBase RPC Service. Apparently the R
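
For context, a hypothetical sketch of how one such patient row might be written with the plain HBase client API of that era; the table name, row key, and the column-family names "genes" and "status" are illustrative assumptions, not taken from the original mail:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "patients");           // assumed table name

    // One row per patient: ~20,000 gene values in CF "genes", one value in CF "status".
    Put put = new Put(Bytes.toBytes("patient-0001"));      // assumed row key
    for (int g = 0; g < 20000; g++) {
        double geneValue = 0.0;                             // placeholder expression value
        put.add(Bytes.toBytes("genes"), Bytes.toBytes("gene-" + g), Bytes.toBytes(geneValue));
    }
    put.add(Bytes.toBytes("status"), Bytes.toBytes("health"), Bytes.toBytes("healthy"));
    table.put(put);
    table.close();

A row built this way carries roughly 20,000 cells, which is why a single Result for it can turn into a very large RPC message.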

Re: HBase on 4 machine cluster - OutOfMemoryError

2015-07-18 Thread Stephan Ewen
This error is in the HBase RPC Service. Apparently the RPC message is very large. Is the data that you request in one row? On 18.07.2015 at 00:50, "Lydia Ickler" wrote: > Hi all, > > I am trying to read a data set from HBase within a cluster application. > The data is about 90MB big. > > When I ru
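
One common way to keep per-RPC responses small when rows are very wide is to chunk them on the scan; a hedged sketch with the HBase client API (the batch and caching values are illustrative, and "genes" is an assumed column-family name):

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("genes"));  // assumed column family
    scan.setBatch(1000);                     // at most 1,000 columns per Result, so wide rows are split
    scan.setCaching(10);                     // at most 10 Results per RPC round trip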

HBase on 4 machine cluster - OutOfMemoryError

2015-07-17 Thread Lydia Ickler
Hi all, I am trying to read a data set from HBase within a cluster application. The data is about 90 MB in size. When I run the program on a cluster of 4 machines (8 GB RAM each) I get the following error on the head node: 16:57:41,572 INFO org.apache.flink.api.common.io.LocatableInputSplitAs
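
For reference, a minimal sketch of such a read, assuming the flink-hbase addon's TableInputFormat as it looked around the 0.9.x line; the table name, column family, and tuple mapping are illustrative assumptions, not the original program:

    import org.apache.flink.addons.hbase.TableInputFormat;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    DataSet<Tuple2<String, String>> patients = env.createInput(
        new TableInputFormat<Tuple2<String, String>>() {
            @Override
            protected Scan getScanner() {
                Scan scan = new Scan();
                scan.addFamily(Bytes.toBytes("genes"));   // assumed column family
                return scan;
            }
            @Override
            protected String getTableName() {
                return "patients";                        // assumed table name
            }
            @Override
            protected Tuple2<String, String> mapResultToTuple(Result r) {
                // row key plus the first cell of the row, just to show the mapping hook
                return new Tuple2<>(Bytes.toString(r.getRow()), Bytes.toString(r.value()));
            }
        });

    patients.print();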