Hello Ilya,

Can you add more info about why and how LT prints the log message twice
for this test case?

From my point of view, making the clear() method throw
UnsupportedOperationException is not the right fix for the flaky test
issues. A brief search through CLHM led me to the thought that we simply
forgot to drop the LongAdder size counter while iterating over the
HashEntry array. We increment and decrement this counter on put/remove
operations, so why not in the clear() method? Am I right? Replacing
LongAdder with AtomicLong sounds good to me too, as suggested by Ilya
Lantukh. But I may be mistaken here.
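
To illustrate the idea (a simplified sketch only, not the actual
org.jsr166 code; class and field names here are made up):

import java.util.concurrent.atomic.AtomicLong;

class Segment<K, V> {
    private final AtomicLong size = new AtomicLong(); // AtomicLong instead of LongAdder

    void put(K key, V val) {
        // ... link the new entry into the HashEntry array ...
        size.incrementAndGet(); // counter is maintained on put
    }

    void remove(K key) {
        // ... unlink the entry ...
        size.decrementAndGet(); // and on remove
    }

    void clear() {
        // ... iterate over the HashEntry array, unlinking all entries ...
        size.set(0); // the reset that seems to be missing today
    }
}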

More generally, I think this is a good occasion to start thinking about
how to get rid of CLHM. E.g. we could consider this implementation [1].

[1] https://github.com/ben-manes/concurrentlinkedhashmap
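
For reference, [1] is created through a builder, roughly like this (a
sketch based on its README; the capacity value is arbitrary):

import java.util.concurrent.ConcurrentMap;
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;

ConcurrentMap<String, Object> cache =
    new ConcurrentLinkedHashMap.Builder<String, Object>()
        .maximumWeightedCapacity(1000) // bounded, evicts when over capacity
        .build();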

On Tue, Jul 24, 2018 at 16:45, Stanislav Lukyanov <stanlukya...@gmail.com> wrote:

> It seems that we currently use the CLHM primarily as a FIFO cache.
> I see two ways around that.
>
> First, we could implement such an LRU/FIFO cache on top of another,
> properly supported data structure from j.u.c.
> ConcurrentSkipListMap seems OK. I have a draft implementation of
> LruEvictionPolicy based on it that passes functional tests,
> but I haven’t benchmarked it yet.
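>
> A rough shape of that idea, for illustration (a sketch only, not the
> actual draft; identifiers are made up):
>
> import java.util.concurrent.ConcurrentSkipListMap;
> import java.util.concurrent.atomic.AtomicLong;
>
> class FifoCache<V> {
>     private final ConcurrentSkipListMap<Long, V> queue = new ConcurrentSkipListMap<>();
>     private final AtomicLong seq = new AtomicLong();
>     private final int maxSize;
>
>     FifoCache(int maxSize) { this.maxSize = maxSize; }
>
>     void add(V val) {
>         queue.put(seq.incrementAndGet(), val); // monotonic key = FIFO order
>         while (queue.size() > maxSize)         // note: CSLM.size() is not O(1)
>             queue.pollFirstEntry();            // evict the oldest entry
>     }
> }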
>
> Second, Guava has a Cache API with a lot of configuration options that we
> could use (license is Apache, should be OK).
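>
> E.g. (sketch; the eviction settings are just examples):
>
> import com.google.common.cache.Cache;
> import com.google.common.cache.CacheBuilder;
>
> Cache<String, Object> cache = CacheBuilder.newBuilder()
>     .maximumSize(1000) // bounded size with LRU-like eviction
>     .build();
>
> cache.put("k", "v");
> Object v = cache.getIfPresent("k");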
>
> Stan
>
> From: Nikolay Izhikov
> Sent: July 24, 2018 16:27
> To: dev@ignite.apache.org
> Subject: Re: ConcurrentLinkedHashMap works incorrectly after clear()
>
> Hello, Ilya.
>
> Maybe we need to proceed with ticket [1] "Get rid of
> org.jsr166.ConcurrentLinkedHashMap"?
>
> Especially if this class is broken and buggy.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-7516
>
> On Tue, 24/07/2018 at 16:20 +0300, Ilya Lantukh wrote:
> > Thanks for revealing this issue!
> >
> > I don't understand why we should disallow calling clear().
> >
> > One way it could be re-implemented is (see the sketch after this list):
> > 1. acquire write locks on all segments;
> > 2. clear them;
> > 3. reset size to 0;
> > 4. release locks.
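> >
> > Roughly (sketch; assuming per-segment ReentrantReadWriteLocks and an
> > AtomicLong size counter; names are illustrative):
> >
> > void clear() {
> >     for (Segment<K, V> seg : segments)
> >         seg.writeLock().lock();        // 1. acquire write locks on all segments
> >     try {
> >         for (Segment<K, V> seg : segments)
> >             seg.clearEntries();        // 2. clear them
> >         size.set(0);                   // 3. reset size to 0
> >     }
> >     finally {
> >         for (Segment<K, V> seg : segments)
> >             seg.writeLock().unlock();  // 4. release locks
> >     }
> > }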
> >
> > Another approach is to calculate inside
> > ConcurrentLinkedHashMap.Segment.clear() how many entries you actually
> > deleted and then call size.addAndGet(...).
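> >
> > I.e. something like this inside Segment.clear() (sketch, illustrative
> > names):
> >
> > long cleared = 0;
> > for (int i = 0; i < table.length; i++)
> >     cleared += unlinkAllInBucket(i); // count what was actually removed
> >
> > size.addAndGet(-cleared); // addAndGet() requires AtomicLong, not LongAdder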
> >
> > In both cases you'll have to replace LongAdder with AtomicLong.
> >
> > On Tue, Jul 24, 2018 at 4:03 PM, Ilya Kasnacheev
> > <ilya.kasnach...@gmail.com> wrote:
> >
> > > Hello igniters!
> > >
> > > So I was working on a fix for
> > > https://issues.apache.org/jira/browse/IGNITE-9056
> > > The reason for the test flakiness turned out to be that our
> > > ConcurrentLinkedHashMap (and its tautological cousin
> > > GridBoundedConcurrentLinkedHashMap) is broken :(
> > >
> > > When you do clear(), its size counter is not updated. So sizex() will
> > > return the old size after clear, and if maxCnt is set, after several
> > > clear()s it will immediately evict entries as soon as they are
> > > inserted, keeping the map size at 0.
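> > >
> > > E.g. a hypothetical repro of the effect (assuming maxCnt = 1):
> > >
> > > map.put(1, "a");
> > > map.clear();     // entries are gone, but the size counter still says 1
> > > map.sizex();     // returns 1 instead of 0
> > > map.put(2, "b"); // counter goes to 2 > maxCnt -> new entry is evicted
> > > map.get(2);      // null: the map is stuck at effective size 0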
> > >
> > > This is scary since indexing internals make intense use of
> > > ConcurrentLinkedHashMaps.
> > >
> > > My suggestion for this fix is to avoid ever calling clear(), making it
> > > throw UnsupportedOperationException and recreating/replacing the map
> > > instead of clear()ing it. Unless somebody is going to stand up and fix
> > > ConcurrentLinkedHashMap.clear() properly. Frankly speaking, I'm afraid
> > > of touching this code in any non-trivial way.
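> > >
> > > I.e. instead of calling map.clear(), swap the whole map (sketch;
> > > newBoundedMap() is a hypothetical factory for a fresh instance):
> > >
> > > private final AtomicReference<Map<K, V>> mapRef =
> > >     new AtomicReference<>(newBoundedMap());
> > >
> > > void reset() {
> > >     mapRef.set(newBoundedMap()); // readers pick up the fresh map via mapRef.get()
> > > }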
> > >
> > > --
> > > Ilya Kasnacheev
> > >
> >
> >
> >
>
--
Maxim Muzafarov