Hi Yun,

Indeed, that was it: the parallelism was set lower than the partition index my custom partitioner was computing.
Thanks,
Nicolas

On Tue, Aug 6, 2019, at 4:47 AM, Yun Gao wrote:
> Hi Nicolas,
>
> Are you using a custom partitioner? If so, you might need to check whether
> Partitioner#partition has returned a value that is greater than or equal to
> the parallelism of the downstream tasks. The expected return value should be
> in the interval [0, the parallelism of the downstream task).
>
> Best,
> Yun
>
>> ------------------------------------------------------------------
>> From: Nicolas Lalevée <nicolas.lale...@hibnet.org>
>> Send Time: 2019 Aug. 5 (Mon.) 22:58
>> To: user <user@flink.apache.org>
>> Subject: An ArrayIndexOutOfBoundsException after a few messages with Flink 1.8.1
>>
>> Hi,
>>
>> I have got a weird error after a few messages. I first saw this error on a
>> deployed Flink 1.7.1 cluster. Trying to figure it out, I am now trying with a
>> local Flink 1.8.1, and I still get the ArrayIndexOutOfBoundsException. I
>> don't have a precise scenario to reproduce it, but it happens often.
>> Any idea what could be going wrong there?
>>
>> The full stack trace:
>>
>> Exception in thread "main" java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>>     at com.mycompany.myproject.controljob.ControlTopology.main(ControlTopology.java:61)
>> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>>     at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>>     at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:638)
>>     at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:123)
>>     at com.mycompany.myproject.controljob.ControlTopology.run(ControlTopology.java:137)
>>     at com.mycompany.myproject.controljob.ControlTopology.main(ControlTopology.java:53)
>> Caused by: java.lang.RuntimeException: Index 2 out of bounds for length 2
>>     at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:110)
>>     at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
>>     at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
>>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
>>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
>>     at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
>>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
>>     at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
>>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
>>     at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>>     at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
>>     at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
>>     at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.emitRecord(KafkaFetcher.java:185)
>>     at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:150)
>>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:712)
>>     at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:93)
>>     at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:57)
>>     at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:97)
>>     at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
>>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>>     at java.base/java.lang.Thread.run(Thread.java:834)
>> Caused by: java.lang.ArrayIndexOutOfBoundsException: Index 2 out of bounds for length 2
>>     at org.apache.flink.runtime.io.network.api.writer.RecordWriter.getBufferBuilder(RecordWriter.java:254)
>>     at org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:177)
>>     at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:162)
>>     at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:128)
>>     at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
>>     ... 28 more
>>
>> Nicolas
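[Editor's note: a minimal sketch of the contract Yun describes. In Flink this logic would live in a class implementing org.apache.flink.api.common.functions.Partitioner, whose partition(key, numPartitions) method must return a value in [0, numPartitions); the class name and String key type below are hypothetical, and the class is shown standalone (without the Flink dependency) to illustrate only the bounds arithmetic.]

```java
// Hypothetical partition logic mirroring the signature of
// Flink's Partitioner#partition(key, numPartitions).
public class ControlKeyPartitioner {

    // Must return a channel index in [0, numPartitions).
    public int partition(String key, int numPartitions) {
        // Math.floorMod keeps the result non-negative even when
        // hashCode() is negative, and the modulo keeps it strictly
        // below numPartitions, i.e. below the downstream parallelism.
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        ControlKeyPartitioner p = new ControlKeyPartitioner();
        // With a downstream parallelism of 2 (the "length 2" in the
        // stack trace), every key must map to channel 0 or 1.
        for (String key : new String[] {"a", "control-1", "control-2"}) {
            System.out.println(key + " -> channel " + p.partition(key, 2));
        }
    }
}
```

If the partitioner instead computed the index from a fixed partition count (say 3) while the downstream operator ran with parallelism 2, RecordWriter would index past its channel array and fail exactly as in the trace above.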