Pig UDF doesn't seem related.

The stack trace shows that the Pig client JVM ran out of memory while
computing input splits. You can increase the client heap by setting
PIG_HEAPSIZE to a value larger than the default (1000 MB); see the
"bin/pig" script.
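For example, something like this should work (a minimal sketch; the exact
value of 4096 is just an illustration, and the script name is hypothetical):

```shell
# PIG_HEAPSIZE (in MB) is read by bin/pig and used for the client JVM's
# maximum heap. The default is 1000; export a larger value before launching:
export PIG_HEAPSIZE=4096
# then run your script as usual, e.g.:  pig myscript.pig
```

Note that this only affects the client-side JVM where the splits are
computed; it is separate from the heap settings of the Hadoop task JVMs.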

On Wed, Apr 30, 2014 at 5:42 AM, Patcharee Thongtra <
[email protected]> wrote:

> Hi,
>
> How can I increase memory size used by Pig UDF? I got OutOfMemoryError
> exception which was thrown before Pig submitted jobs to Hadoop, see error
> log.
>
> 426405 [JobControl] ERROR org.apache.pig.backend.hadoop23.PigJobControl
>  - Error while trying to run jobs.
> java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>     at org.apache.pig.backend.hadoop23.PigJobControl.submit(
> PigJobControl.java:130)
>     at org.apache.pig.backend.hadoop23.PigJobControl.run(
> PigJobControl.java:191)
>     at java.lang.Thread.run(Thread.java:662)
>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.
> MapReduceLauncher$1.run(MapReduceLauncher.java:270)
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.pig.backend.hadoop23.PigJobControl.submit(
> PigJobControl.java:128)
>     ... 3 more
> Caused by: java.lang.OutOfMemoryError: Java heap space
>     at ucar.nc2.iosp.IospHelper.makePrimitiveArray(IospHelper.java:669)
>     at ucar.nc2.iosp.IospHelper.readDataFill(IospHelper.java:74)
>     at ucar.nc2.iosp.netcdf3.N3raf.readData(N3raf.java:62)
>     at ucar.nc2.iosp.netcdf3.N3iosp.readData(N3iosp.java:496)
>     at ucar.nc2.NetcdfFile.readData(NetcdfFile.java:1894)
>     at ucar.nc2.Variable.reallyRead(Variable.java:856)
>     at ucar.nc2.Variable._read(Variable.java:828)
>     at ucar.nc2.Variable.read(Variable.java:706)
>     at no.uni.computing.pig.io.input.WRFInputFormat.genFileSplits(
> WRFInputFormat.java:106)
>     at no.uni.computing.pig.io.input.WRFInputFormat.getSplits(
> WRFInputFormat.java:57)
>     at org.apache.pig.backend.hadoop.executionengine.
> mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
>     at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(
> JobSubmitter.java:491)
>     at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(
> JobSubmitter.java:508)
>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(
> JobSubmitter.java:392)
>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1491)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
>     at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.
> submit(ControlledJob.java:335)
>     ... 8 more
>
> Patcharee
>
>
