Hi all,

I am trying to read a data set from HBase within a cluster application. 
The data set is about 90 MB in size.
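
For context, the read is set up roughly like this (a minimal sketch against org.apache.flink.addons.hbase.TableInputFormat; the table name, column family/qualifier, and tuple type below are placeholders, not my actual code):

```java
import org.apache.flink.addons.hbase.TableInputFormat;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseReadJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // createInput(...) with a TableInputFormat subclass, as in the
        // DataSource shown in the log. Names here are placeholders.
        DataSet<Tuple2<String, String>> data = env.createInput(
            new TableInputFormat<Tuple2<String, String>>() {
                @Override
                protected Scan getScanner() {
                    Scan scan = new Scan();
                    scan.addFamily(Bytes.toBytes("cf")); // placeholder column family
                    return scan;
                }

                @Override
                protected String getTableName() {
                    return "my-table"; // placeholder table name
                }

                @Override
                protected Tuple2<String, String> mapResultToTuple(Result r) {
                    String key = Bytes.toString(r.getRow());
                    String value = Bytes.toString(
                        r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q")));
                    return new Tuple2<>(key, value);
                }
            });

        data.print();
    }
}
```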

When I run the program on a cluster consisting of 4 machines (8 GB RAM each), I get 
the following error on the head node:

16:57:41,572 INFO  org.apache.flink.api.common.io.LocatableInputSplitAssigner   
 - Assigning remote split to host grips5
17:17:26,127 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       
 - DataSource (at createInput(ExecutionEnvironment.java:502) 
(org.apache.flink.addons.hbase.HBaseR$
17:17:26,128 INFO  org.apache.flink.runtime.jobmanager.JobManager               
 - Status of job b768ff76167fa3ea3e4cb3cc3481ba80 (Labeled - ML) changed to 
FAILING.

And on the machine grips5:
16:57:23,769 INFO  org.apache.flink.addons.hbase.TableInputFormat               
 - opening split [1|[grips1:16020]|LUAD+5781|LUAD+7539]
16:57:33,734 WARN  org.apache.hadoop.ipc.RpcClient                              
 - IPC Client (767445418) connection to grips1/130.73.20.14:16020 from hduser: 
unexpected exceptio$
java.lang.OutOfMemoryError: Java heap space
        at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1117)
        at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:727)
16:57:39,969 WARN  org.apache.hadoop.ipc.RpcClient                              
 - IPC Client (767445418) connection to grips1/130.73.20.14:16020 from hduser: 
unexpected exceptio$
java.lang.OutOfMemoryError: Java heap space

and then the ZooKeeper connection is just closed…

Do you have a suggestion on how to avoid this OutOfMemoryError?
Best regards,
Lydia


