[ https://issues.apache.org/jira/browse/LUCENE-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Michael McCandless updated LUCENE-5310:
---------------------------------------
Attachment: LUCENE-5310.patch
Arrrgh, you're right Simon. I added a TestSerialMergeScheduler to
show this...
New patch, with another iteration, and a number of cleanups to CMS.
E.g., its merge method is no longer sync'd, and its MergeThread class
now runs only one merge. I use a semaphore to restrict the number of
running merges.
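Just to illustrate the semaphore approach (a hedged sketch of the idea
only, not the CMS code in the patch; BoundedMergeRunner and runMerge are
made-up names):
{code:java}
// Illustrative only: a tiny runner that caps concurrent merges with a
// Semaphore, where each thread runs exactly one merge.  Not the CMS patch;
// BoundedMergeRunner and runMerge are hypothetical names.
import java.util.concurrent.Semaphore;

public class BoundedMergeRunner {

  private final Semaphore slots;

  public BoundedMergeRunner(int maxMergeCount) {
    // maxMergeCount is required up front, mirroring the change described below.
    this.slots = new Semaphore(maxMergeCount);
  }

  public void runMerge(final Runnable merge) throws InterruptedException {
    slots.acquire();                 // wait until a merge slot frees up
    Thread t = new Thread("Lucene Merge Thread") {
      @Override
      public void run() {
        try {
          merge.run();               // one merge per thread
        } finally {
          slots.release();           // free the slot when the merge finishes
        }
      }
    };
    t.start();
  }
}
{code}
The point is just that acquire/release bounds how many merges can be in
flight at once, independent of how many threads call into the scheduler.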
I also think CMS can deadlock (stall all merges) if you ever try to
decrease maxMergeCount while too many merges are already running,
because the stall logic would get to a point where no thread would
ever be allowed to unstall. So I changed it (and MergeScheduler) to
require the maxMergeCount up front.
An app that wants to do its own throttling can just set a high
maxMergeCount on CMS, and then throttle itself when
IW.getRunningMergeCount() is too high...
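Something along these lines (a sketch only: getRunningMergeCount() is the
accessor referred to above and is assumed to exist only with this patch
applied; the 20/4/6 numbers are arbitrary, and the setters are the stock
4.x API, which the patch may change):
{code:java}
// Sketch of app-side throttling: give CMS a high maxMergeCount so it never
// stalls, then back off in the app when too many merges are running.
// getRunningMergeCount() is assumed from this patch; everything else is
// stock 4.x API.
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class AppThrottledIndexing {
  public static void main(String[] args) throws IOException, InterruptedException {
    ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
    cms.setMaxMergesAndThreads(20, 4);   // high maxMergeCount: CMS itself won't stall

    IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_45,
        new StandardAnalyzer(Version.LUCENE_45));
    iwc.setMergeScheduler(cms);

    Directory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, iwc);
    try {
      for (int i = 0; i < 1000; i++) {
        // The app throttles itself while too many merges are in flight:
        while (writer.getRunningMergeCount() > 6) {
          Thread.sleep(100);
        }
        writer.addDocument(new Document());
      }
    } finally {
      writer.close();
    }
  }
}
{code}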
> Merge Threads unnecessarily block on SerialMergeScheduler
> ---------------------------------------------------------
>
> Key: LUCENE-5310
> URL: https://issues.apache.org/jira/browse/LUCENE-5310
> Project: Lucene - Core
> Issue Type: Improvement
> Components: core/index
> Affects Versions: 4.5, 5.0
> Reporter: Simon Willnauer
> Priority: Minor
> Fix For: 4.9, 5.0
>
> Attachments: LUCENE-5310.patch, LUCENE-5310.patch, LUCENE-5310.patch,
> LUCENE-5310.patch, LUCENE-5310.patch, LUCENE-5310.patch
>
>
> I have been working on a high-level merge multiplexer that shares threads
> across different IW instances, and I came across the fact that
> SerialMergeScheduler actually blocks incoming threads if a merge is going on.
> Yet this blocks threads unnecessarily, since we pull the merges in a loop
> anyway. Should we use a tryLock operation instead of syncing the entire
> method?
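For reference, a rough sketch of that tryLock idea (not the attached
patch; it assumes the 4.5-era MergeScheduler API and that the class sits
in org.apache.lucene.index so it can reach the package-private
getNextMerge() hook, like the built-in schedulers do):
{code:java}
// Rough sketch of the tryLock idea, not the attached patch.  Assumes the
// 4.5-era MergeScheduler API and that this class lives in
// org.apache.lucene.index so it can call the package-private getNextMerge().
package org.apache.lucene.index;

import java.io.IOException;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSerialMergeScheduler extends MergeScheduler {

  private final ReentrantLock lock = new ReentrantLock();

  @Override
  public void merge(IndexWriter writer) throws IOException {
    if (!lock.tryLock()) {
      // Another thread is already draining the merge queue; since that
      // thread pulls merges in a loop anyway, return instead of blocking.
      return;
    }
    try {
      MergePolicy.OneMerge merge;
      while ((merge = writer.getNextMerge()) != null) {
        writer.merge(merge);
      }
    } finally {
      lock.unlock();
    }
  }

  @Override
  public void close() {}
}
{code}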
--
This message was sent by Atlassian JIRA
(v6.2#6252)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]