I got almost the same results, except when using a lot of keys, where a
single lock becomes an issue, and compared to a plain map, which is still
about 4 times faster under a medium load of pure gets (50 threads). The
issue is that we ignore whether the underlying memory cache is thread safe,
which means we could skip the CompositeCache synchronization when there is
no auxiliary cache and get close to plain-map performance, no?
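
Here is a rough sketch of what I mean (class and field names are made up,
not the actual CompositeCache/MemoryCache API): when the region has no
auxiliary cache configured, reads could go straight to a ConcurrentMap-backed
memory cache without taking the lock, and the synchronized path is kept only
when a miss may trigger an aux lookup:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch only, not the real JCS classes.
public class FastReadCompositeCache<K, V> {

    // ConcurrentMap-backed memory store: reads are lock-free.
    private final ConcurrentMap<K, V> memory = new ConcurrentHashMap<>();

    // Whether any auxiliary (disk/lateral/remote) caches are configured.
    private final boolean hasAuxiliaries;

    public FastReadCompositeCache(boolean hasAuxiliaries) {
        this.hasAuxiliaries = hasAuxiliaries;
    }

    public V get(K key) {
        if (!hasAuxiliaries) {
            // Nothing to fall back to on a miss, so no need to coordinate
            // with an aux retrieval: plain concurrent read.
            return memory.get(key);
        }
        synchronized (this) {
            // Keep the current behaviour when auxiliaries are present:
            // a miss triggers a (possibly slow) aux lookup that must not
            // race with concurrent updates of the same key.
            V value = memory.get(key);
            if (value == null) {
                value = getFromAuxiliaries(key);
                if (value != null) {
                    memory.put(key, value);
                }
            }
            return value;
        }
    }

    private V getFromAuxiliaries(K key) {
        // Placeholder for the disk/lateral/remote lookup.
        return null;
    }
}

Just an idea of the shape of it; the aux-present branch would of course need
the same per-key care Thomas describes below.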



On 3 Sept 2017 19:53, "Thomas Vandahl" <t...@apache.org> wrote:

> On 02.09.17 10:41, Romain Manni-Bucau wrote:
> > Ok, got the confirmation for the reflection fix.
> >
> > Now we lock in CompositeCache.get. I wonder if we could have a lock-free
> > MemoryCache implementation, at least for the read side of things. It
> > sounds doable using ConcurrentMap-like algorithms, but it may require
> > more time than I have at the moment to validate it :(. In other words:
> > the perf issue we can hit now is that reads don't scale with the number
> > of threads, since all the MemoryCache implementations we provide OOTB
> > are synchronized.
>
> I did several experiments with different locking mechanisms and found no
> real improvement over the solution as it is now. You actually need to
> lock on the key to make sure any write operation on that particular key
> has finished before the result is returned. IOW, the effort you need to
> put into managing this type of key locking is bigger than the impact of
> the synchronization. At least that is what I found out.
>
> Bye, Thomas.
