The job is running out of heap memory, most likely because a user function needs a large amount of it (possibly the parquet-thrift sink?).
You can try to work around it by reducing the amount of managed memory, so that more heap space is left available for user code. A rough example of the relevant configuration is at the bottom of this mail.

On Thu, May 12, 2016 at 6:55 PM, Flavio Pompermaier <pomperma...@okkam.it> wrote:
> Hi to all,
> running a job that writes parquet-thrift files I had this exception (in a Task Manager):
>
> io.netty.channel.nio.NioEventLoop - Unexpected exception in the selector loop.
> java.lang.OutOfMemoryError: Java heap space
> 2016-05-12 18:49:11,302 WARN  org.jboss.netty.channel.socket.nio.AbstractNioSelector - Unexpected exception in the selector loop.
> java.lang.OutOfMemoryError: Java heap space
> 2016-05-12 18:49:11,302 ERROR org.apache.flink.runtime.io.disk.iomanager.IOManager - The handler of the request-complete-callback threw an exception: Java heap space
> java.lang.OutOfMemoryError: Java heap space
> 2016-05-12 18:49:11,303 ERROR org.apache.flink.runtime.io.disk.iomanager.IOManager - I/O reading thread encountered an error: segment has been freed
> java.lang.IllegalStateException: segment has been freed
>     at org.apache.flink.core.memory.HeapMemorySegment.wrap(HeapMemorySegment.java:85)
>     at org.apache.flink.runtime.io.disk.iomanager.SegmentReadRequest.read(AsynchronousFileIOChannel.java:310)
>     at org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$ReaderThread.run(IOManagerAsync.java:396)
> 2016-05-12 18:49:11,303 ERROR org.apache.flink.runtime.io.disk.iomanager.IOManager - I/O reading thread encountered an error: segment has been freed
>
> Any idea of what could be the cause?
>
> Best,
> Flavio
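
For reference, here is a minimal sketch of what I mean by "reducing managed memory", assuming a standalone setup configured via flink-conf.yaml; the concrete values below are only illustrative and depend on your job and TaskManager size:

    # flink-conf.yaml (TaskManager memory settings; values are examples only)
    taskmanager.heap.mb: 4096           # total heap given to each TaskManager
    taskmanager.memory.fraction: 0.5    # fraction of free heap reserved as managed memory
                                        # (lower than the 0.7 default leaves more heap for user code)

Alternatively, you can set taskmanager.memory.size to a fixed amount of managed memory instead of a fraction, or move managed memory off the heap entirely with taskmanager.memory.off-heap: true, if that fits your setup better.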