[ 
https://issues.apache.org/jira/browse/SOLR-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5681:
-------------------------------

    Attachment: SOLR-5681-2.patch

bq. This patch still has the new createCollection method in 
CollectionAdminRequest. Please remove that.
Done.

{code}
private static final boolean DEBUG = false;
{code}
Removed

bq. Should we rename processedZkTasks to runningZkTasks? 
Done.

bq. Let's document the purpose of each of the sets/maps we've introduced such 
as completedTasks, processedZkTasks, runningTasks, collectionWip as a code 
comment.
Done.

bq. I think we should use use the return value of 
collectionWip.add(collectionName) as a fail-safe and throw an exception if it 
ever returns false.
There's no uncaught exception or exit point where this might not be reset. It 
would only fail to be reset if the Overseer itself goes down, but then nothing 
stops a new Overseer from picking up a task that was not completely processed 
by the older Overseer, so I don't think we need to check for that. Also, the 
only thread that adds to/checks the set is the main thread.
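For illustration, here's what the suggested fail-safe would look like as a 
one-line check on the return value of {{Set.add}} (class and method names here 
are hypothetical, not from the patch):

```java
import java.util.HashSet;
import java.util.Set;

public class CollectionWipCheck {
    // Stand-in for the Overseer's collection work-in-progress set.
    private final Set<String> collectionWip = new HashSet<>();

    // Set.add returns false if the collection is already present,
    // i.e. the one-task-per-collection invariant was violated.
    public void markWorkInProgress(String collectionName) {
        if (!collectionWip.add(collectionName)) {
            throw new IllegalStateException(
                "A task is already running for collection: " + collectionName);
        }
    }

    public void markDone(String collectionName) {
        collectionWip.remove(collectionName);
    }
}
```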

bq. we should have debug level logging on items in our various data structures
Done.

bq. We can improve MultiThreadedOCPTest.testTaskExclusivity by sending a shard 
split for shard1_0 as the third collection action
Firing async calls doesn't guarantee the order of task execution. Sending a 
SPLIT for shard1, followed by one for shard1_0, might lead to a failed test if 
the split for shard1_0 gets picked up before the split of shard1 completes.
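For what it's worth, here's a minimal standalone sketch of why submission 
order doesn't determine completion order with a thread pool (the labels are 
just strings, not real OCP tasks, and the sleep forces the outcome for 
illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncOrder {
    // Submits two tasks in order; the first is slow, so the second
    // completes first: completion order != submission order.
    public static List<String> run() {
        List<String> completed = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            completed.add("SPLIT shard1");
        });
        pool.submit(() -> completed.add("SPLIT shard1_0"));
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException ignored) {}
        return completed;
    }
}
```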

bq. There are still formatting problems in Overseer.Stats.success, error, time 
methods
Weird, as I don't see any in my IDE. Hopefully this patch gets it right.

I'm now working on making maxParallelThreads configurable via cluster props.

> Make the OverseerCollectionProcessor multi-threaded
> ---------------------------------------------------
>
>                 Key: SOLR-5681
>                 URL: https://issues.apache.org/jira/browse/SOLR-5681
>             Project: Solr
>          Issue Type: Improvement
>          Components: SolrCloud
>            Reporter: Anshum Gupta
>            Assignee: Anshum Gupta
>         Attachments: SOLR-5681-2.patch, SOLR-5681-2.patch, SOLR-5681-2.patch, 
> SOLR-5681-2.patch, SOLR-5681-2.patch, SOLR-5681-2.patch, SOLR-5681-2.patch, 
> SOLR-5681-2.patch, SOLR-5681-2.patch, SOLR-5681-2.patch, SOLR-5681-2.patch, 
> SOLR-5681-2.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
> SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
> SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
> SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
> SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch
>
>
> Right now, the OverseerCollectionProcessor is single-threaded, i.e. 
> submitting anything long-running blocks the processing of other mutually 
> exclusive tasks.
> When OCP tasks become optionally async (SOLR-5477), it'd be good to have 
> truly non-blocking behavior by multi-threading the OCP itself.
> For example, a ShardSplit call on Collection1 would block the thread and 
> thereby delay processing of a create-collection task (which would stay 
> queued in zk) even though the two tasks are mutually exclusive.
> Here are a few of the challenges:
> * Mutual exclusivity: Only let mutually exclusive tasks run in parallel. An 
> easy way to handle that is to only let 1 task per collection run at a time.
> * ZK Distributed Queue to feed tasks: The OCP consumes tasks from a queue. 
> The task from the workQueue is only removed on completion so that in case of 
> a failure, the new Overseer can re-consume the same task and retry. A queue 
> is not the right data structure for looking ahead, i.e. getting the 2nd task 
> from the queue while the 1st one is still in progress. Also, deleting tasks 
> which are not at the head of a queue is not really an 'intuitive' operation.
> Proposed solutions for task management:
> * Task funnel and peekAfter(): The parent thread is responsible for getting 
> and passing the request to a new thread (or one from the pool). The parent 
> method uses a peekAfter(last element) instead of a peek(). The peekAfter 
> returns the task after the 'last element'. Maintain this request information 
> and use it for deleting/cleaning up the workQueue.
> * Another (almost duplicate) queue: While offering tasks to workQueue, also 
> offer them to a new queue (call it volatileWorkQueue?). The difference is, as 
> soon as a task from this is picked up for processing by the thread, it's 
> removed from the queue. At the end, the cleanup is done from the workQueue.
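As an in-memory sketch of the proposed peekAfter(lastElement) contract (the 
real implementation would operate on the ZK distributed queue; names here are 
illustrative):

```java
import java.util.List;

public class PeekAfterSketch {
    // Returns the element immediately after 'last' in the queue, or null
    // if 'last' is absent or is the tail. This mirrors the proposed
    // peekAfter(lastElement): look ahead without removing anything.
    public static String peekAfter(List<String> queue, String last) {
        int i = queue.indexOf(last);
        return (i >= 0 && i + 1 < queue.size()) ? queue.get(i + 1) : null;
    }
}
```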



--
This message was sent by Atlassian JIRA
(v6.2#6252)
