[ https://issues.apache.org/jira/browse/FLINK-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14721931#comment-14721931 ]
ASF GitHub Bot commented on FLINK-2545:
---------------------------------------

Github user ChengXiangLi commented on the pull request:

    https://github.com/apache/flink/pull/1067#issuecomment-136243437

    Nice job, @greghogan, you pointed out the root cause and the solution. I added the logic to skip the latest buckets, as @StephanEwen suggested, and added a related unit test for this issue.


> NegativeArraySizeException while creating hash table bloom filters
> ------------------------------------------------------------------
>
>                 Key: FLINK-2545
>                 URL: https://issues.apache.org/jira/browse/FLINK-2545
>             Project: Flink
>          Issue Type: Bug
>          Components: Distributed Runtime
>    Affects Versions: master
>            Reporter: Greg Hogan
>            Assignee: Chengxiang Li
>
> The following exception occurred a second time when I immediately re-ran my application, though after recompiling and restarting Flink the subsequent execution ran without error.
>
> java.lang.Exception: The data preparation for task '...' , caused an error: null
>     at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:465)
>     at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:354)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NegativeArraySizeException
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.buildBloomFilterForBucket(MutableHashTable.java:1160)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.buildBloomFilterForBucketsInPartition(MutableHashTable.java:1143)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.spillPartition(MutableHashTable.java:1117)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.insertBucketEntry(MutableHashTable.java:946)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.insertIntoTable(MutableHashTable.java:868)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.buildInitialTable(MutableHashTable.java:692)
>     at org.apache.flink.runtime.operators.hash.MutableHashTable.open(MutableHashTable.java:455)
>     at org.apache.flink.runtime.operators.hash.ReusingBuildSecondHashMatchIterator.open(ReusingBuildSecondHashMatchIterator.java:93)
>     at org.apache.flink.runtime.operators.JoinDriver.prepare(JoinDriver.java:195)
>     at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:459)
>     ... 3 more

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
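For context, the sketch below is a hypothetical, simplified illustration and not the actual MutableHashTable code. It shows how sizing a bloom filter's bit array from a bucket's recorded element count can throw a NegativeArraySizeException when that count is read while the bucket is still being modified, and how skipping the latest, in-flight bucket (the approach discussed in the comment above) avoids the bad allocation. The class name BloomFilterSketch, the bitsPerElement parameter, and the sample counts are made up for illustration.

```java
// Hypothetical, simplified illustration -- NOT Flink's MutableHashTable code.
import java.util.BitSet;

public class BloomFilterSketch {

    // Allocates a bit array sized from the bucket's recorded element count.
    // In the real hash table the count lives in the bucket header; here it is
    // just an int parameter.
    static BitSet buildBloomFilterForBucket(int elementCountInHeader, int bitsPerElement) {
        // If the count is stale or partially updated (e.g. the bucket was in the
        // middle of an insert when spilling started), this size can come out negative.
        int numBits = elementCountInHeader * bitsPerElement;
        return new BitSet(numBits); // throws NegativeArraySizeException if numBits < 0
    }

    public static void main(String[] args) {
        // Hypothetical per-bucket counts; the last bucket stands in for the one
        // currently being inserted into when the spill is triggered, so its count
        // is inconsistent.
        int[] bucketCounts = {12, 7, -3};
        int latestBucket = bucketCounts.length - 1;

        for (int i = 0; i < bucketCounts.length; i++) {
            if (i == latestBucket) {
                // The fix discussed above: skip the latest (in-flight) bucket
                // instead of building a bloom filter from its inconsistent count.
                continue;
            }
            BitSet filter = buildBloomFilterForBucket(bucketCounts[i], 8);
            System.out.println("bucket " + i + ": allocated " + filter.size() + " bits");
        }
    }
}
```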