[ https://issues.apache.org/jira/browse/LUCENE-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12916001#action_12916001 ]
Jason Rutherglen commented on LUCENE-2575:
------------------------------------------
I guess another possible solution is to do away with interleaved slices
altogether and simply allocate byte[]s per term, chaining them together. Then
we would not need to worry about concurrency when slicing. This would
certainly make debugging easier; however, it would add 8 bytes (for the object
pointer) per term, somewhat negating the parallel-array cutover. Perhaps that's
just a price we'd be willing to pay. We'd also probably still need a unique
posting-upto array per reader.
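
To make the idea concrete, here is a minimal sketch of the per-term chained
byte[] alternative. The names (TermByteChain, BLOCK_SIZE, writeByte, upto) are
hypothetical and nothing here is Lucene's actual code; it only illustrates that
each term owns its own chain of blocks, so terms never interleave writes in a
shared buffer, at the cost of the extra per-term object reference noted above.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only -- not Lucene's actual classes. Each term owns its
// own chain of byte[] blocks, so no two terms ever interleave writes inside the
// same array; the chain itself is the extra per-term object pointer.
final class TermByteChain {
  private static final int BLOCK_SIZE = 128;

  private final List<byte[]> blocks = new ArrayList<>();
  private byte[] tail;
  private int tailUpto; // next write position inside the tail block

  TermByteChain() {
    tail = new byte[BLOCK_SIZE];
    blocks.add(tail);
  }

  /** Appends one byte, chaining a fresh block onto the list when the tail is full. */
  void writeByte(byte b) {
    if (tailUpto == tail.length) {
      tail = new byte[BLOCK_SIZE];
      blocks.add(tail);
      tailUpto = 0;
    }
    tail[tailUpto++] = b;
  }

  /** Total bytes written; a per-reader "posting upto" could snapshot this value. */
  int upto() {
    return (blocks.size() - 1) * BLOCK_SIZE + tailUpto;
  }
}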
> Concurrent byte and int block implementations
> ---------------------------------------------
>
> Key: LUCENE-2575
> URL: https://issues.apache.org/jira/browse/LUCENE-2575
> Project: Lucene - Java
> Issue Type: Improvement
> Components: Index
> Affects Versions: Realtime Branch
> Reporter: Jason Rutherglen
> Fix For: Realtime Branch
>
> Attachments: LUCENE-2575.patch, LUCENE-2575.patch, LUCENE-2575.patch,
> LUCENE-2575.patch
>
>
> The current *BlockPool implementations aren't quite concurrent.
> We really need something that has a locking flush method, where
> flush is called at the end of adding a document. Once flushed,
> the newly written data would be available to all other reading
> threads (i.e., postings, etc.). I'm not sure I understand the slices
> concept; it seems like it'd be easier to implement a seekable,
> random access file-like API. One would seek to a given position,
> then read or write from there. The underlying management of byte
> arrays could then be hidden?
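
As a rough illustration of the seekable, flush-based API floated in the
description above, here is a minimal sketch assuming a single writer thread and
any number of readers. The class and method names (ConcurrentBytePool,
writeByte, flush, readByte) are hypothetical, not an existing Lucene API, and
the "locking flush" is reduced to a volatile publish for brevity: flush()
publishes the current end of the written data, and readers never read past the
last published position.

import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch only -- not an existing Lucene class. A single writer
// appends bytes; flush() (called at the end of adding a document) publishes the
// current write position so reader threads may read up to, but not past, it.
final class ConcurrentBytePool {
  private static final int BLOCK_SIZE = 32768;

  // CopyOnWriteArrayList so readers can safely call get() while the writer
  // appends new blocks; block contents read by readers are written before flush().
  private final CopyOnWriteArrayList<byte[]> blocks = new CopyOnWriteArrayList<>();
  private int writeUpto;            // absolute write position (writer thread only)
  private volatile int flushedUpto; // last position made visible to readers

  /** Writer thread only: append one byte at the current position. */
  void writeByte(byte b) {
    int block = writeUpto / BLOCK_SIZE;
    if (block == blocks.size()) {
      blocks.add(new byte[BLOCK_SIZE]);
    }
    blocks.get(block)[writeUpto % BLOCK_SIZE] = b;
    writeUpto++;
  }

  /** Called at the end of adding a document; makes all prior writes visible. */
  void flush() {
    flushedUpto = writeUpto;  // volatile write publishes everything written so far
  }

  /** Reader threads: "seek" by passing an absolute position, then read a byte. */
  byte readByte(int pos) {
    if (pos >= flushedUpto) {   // volatile read pairs with the write in flush()
      throw new IllegalArgumentException("position " + pos + " has not been flushed");
    }
    return blocks.get(pos / BLOCK_SIZE)[pos % BLOCK_SIZE];
  }
}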