On Wed, 24 Jan 2024 00:26:09 GMT, Joshua Cao <d...@openjdk.org> wrote:

> This change mirrors what we did for ConcurrentHashMap in 
> https://github.com/openjdk/jdk/pull/17116. When we add all entries from one 
> map to another, we should resize the destination map up front to accommodate 
> the sum of both maps' sizes.
> 
> I used the command below to run the benchmarks. I set a high heap to reduce 
> garbage collection noise.
> 
> java -Xms25G -jar benchmarks.jar -p size=100000 -p addSize=100000 -gc true org.openjdk.bench.java.util.HashMapBench
> 
> 
> Before change
> 
> 
> Benchmark            (addSize)        (mapType)  (size)  Mode  Cnt   Score   Error  Units
> HashMapBench.putAll     100000         HASH_MAP  100000  avgt    4  22.927 ± 3.170  ms/op
> HashMapBench.putAll     100000  LINKED_HASH_MAP  100000  avgt    4  25.198 ± 2.189  ms/op
> 
> 
> After change
> 
> 
> Benchmark            (addSize)        (mapType)  (size)  Mode  Cnt   Score   Error  Units
> HashMapBench.putAll     100000         HASH_MAP  100000  avgt    4  16.780 ± 0.526  ms/op
> HashMapBench.putAll     100000  LINKED_HASH_MAP  100000  avgt    4  19.721 ± 0.349  ms/op
> 
> 
> We see average time improvements of about 26% for HashMap and 20% for 
> LinkedHashMap.
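
To illustrate the presizing idea from the description above in user code (this is a sketch of the concept, not the JDK-internal patch; HashMap.newHashMap is the JDK 19+ factory that sizes the backing table for a given number of mappings):

    import java.util.HashMap;
    import java.util.Map;

    public class PresizeSketch {
        // Allocate the destination large enough for both maps up front, so
        // putAll never has to grow the table mid-copy. The patch applies
        // the same idea inside HashMap's own putAll path.
        static <K, V> Map<K, V> mergedPresized(Map<K, V> a, Map<K, V> b) {
            Map<K, V> out = HashMap.newHashMap(a.size() + b.size());
            out.putAll(a);
            out.putAll(b);
            return out;
        }
    }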

> I don't understand the first part about "_the case where many keys exist in 
> _both_ maps_". The benchmark and the results presented in the PR are for a 
> hash map with 100000 elements into which we insert (i.e. `putAll()`) another 
> 100000 elements. Or am I missing something?

Sorry, @simonis. I meant the situation where many of the keys are present in 
both maps. In that case the presizing triggers a large resize even though only 
a few entries are actually added.
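
For concreteness, a minimal sketch of that worst case (the names and sizes here are mine, not from the PR):

    import java.util.HashMap;
    import java.util.Map;

    public class OverlapSketch {
        public static void main(String[] args) {
            int n = 100_000;
            Map<Integer, Integer> target = new HashMap<>();
            Map<Integer, Integer> source = new HashMap<>();
            for (int i = 0; i < n; i++) {
                target.put(i, i);   // keys 0..n-1
                source.put(i, -i);  // the same keys, different values
            }
            // Presizing for target.size() + source.size() prepares room for
            // 2n mappings, but every key already exists in target, so putAll
            // only overwrites values and the final size is still n.
            target.putAll(source);
            System.out.println(target.size()); // prints 100000, not 200000
        }
    }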

-------------

PR Comment: https://git.openjdk.org/jdk/pull/17544#issuecomment-1908799138
