I'm curious about why this was done:
*** 4452,4462 ****
public final boolean removeAll(Collection<?> c) {
! Objects.requireNonNull(c);
boolean modified = false;
--- 4495,4505 ----
public final boolean removeAll(Collection<?> c) {
! if (c == null) throw new NullPointerException();
boolean modified = false;
*** 4464,4474 ****
public final boolean retainAll(Collection<?> c) {
! Objects.requireNonNull(c);
boolean modified = false;
--- 4507,4517 ----
public final boolean retainAll(Collection<?> c) {
! if (c == null) throw new NullPointerException();
boolean modified = false;
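
For reference, the two null-check styles shown in the diff behave the same way for this use: both throw a message-less NullPointerException when the argument is null, and both are no-ops otherwise (Objects.requireNonNull additionally returns its argument, which this code does not use). A minimal sketch demonstrating the equivalence, with hypothetical helper names:

```java
import java.util.Objects;

public class NullCheckDemo {
    // Style kept in the old code: delegate to Objects.requireNonNull.
    static void checkWithObjects(Object c) {
        Objects.requireNonNull(c);
    }

    // Style used in the new code: explicit check and throw.
    static void checkExplicit(Object c) {
        if (c == null) throw new NullPointerException();
    }

    public static void main(String[] args) {
        for (Object arg : new Object[] { null, "non-null" }) {
            boolean threwA = false, threwB = false;
            try { checkWithObjects(arg); } catch (NullPointerException e) { threwA = true; }
            try { checkExplicit(arg);    } catch (NullPointerException e) { threwB = true; }
            // Both styles agree: throw for null, pass for non-null.
            assert threwA == threwB;
            System.out.println((arg == null ? "null" : "non-null") + " -> threw=" + threwA);
        }
    }
}
```

So the change is behavioral a no-op; the question of why the explicit form was preferred here is exactly what the email above asks.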
-Brent
On 12/2/13 8:29 AM, Paul Sandoz wrote:
Hi,
http://cr.openjdk.java.net/~psandoz/tl/JDK-8028564-concurrent-resize/webrev/
This patch is contributed by Doug Lea and fixes two issues found in
ConcurrentHashMap:
1) A problem with concurrent resizes; and
2) The skipping of elements when traversing through bins that are trees of
entries.
Both of these issues can result in elements "disappearing" from the map, either
because an update operation such as put failed, or because a contains operation failed to
find the entry in the map.
Issue 1) was causing stream tests to fail intermittently on systems with many
cores (24 to 32); the CHM-based JDK test ToArray was also failing
intermittently, though less frequently.
After some investigation the cause was distilled down to the use of
ConcurrentHashMap in the F/J task used by Stream.forEachOrdered.
Issue 2) was serendipitously reported on concurrency-interest just yesterday
:-)
Both issues have test cases associated with them that have been tuned to
reproduce on systems with few cores, i.e. my MacBook!
Paul.