[
https://issues.apache.org/jira/browse/LUCENE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Robert Muir updated LUCENE-4364:
--------------------------------
Attachment: LUCENE-4364.patch
Updated patch: I added explicit CFS versions of all the mmap tests for better
coverage (so we don't rely on the random tests or other tests).
Also fixed the checks in seek to properly catch negative positions. Previously,
if you seeked on a slice that had offset > 0 (the start offset into the first
mapped buffer) and passed a negative pos whose magnitude was no more than that
offset, we wouldn't catch the negative access; instead we positioned ourselves
at a negative getFilePointer, pointing at a previous file's bytes.
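To illustrate, here is a minimal sketch of that seek check (class and field
names are hypothetical, not the actual MMapIndexInput code):

// Hypothetical sketch of the bug and the fix; names don't match the real code.
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;

class SlicedInput {
  private final ByteBuffer buffer; // duplicate of the master mapping
  private final long offset;       // start of this slice within the mapping
  private final long length;       // length of this slice

  SlicedInput(ByteBuffer master, long offset, long length) {
    this.buffer = master.duplicate();
    this.offset = offset;
    this.length = length;
  }

  // Buggy check: only the upper bound is tested. A negative pos whose
  // magnitude is <= offset still yields a non-negative buffer position,
  // silently reading bytes that belong to the previous file in the CFS.
  void seekBuggy(long pos) throws IOException {
    if (pos > length) throw new EOFException("seek past EOF: " + pos);
    buffer.position((int) (offset + pos));
  }

  // Fixed check: reject negative positions explicitly.
  void seek(long pos) throws IOException {
    if (pos < 0 || pos > length) throw new IOException("pos out of bounds: " + pos);
    buffer.position((int) (offset + pos));
  }

  long getFilePointer() {
    return buffer.position() - offset; // negative after seekBuggy(-n) with n <= offset
  }
}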
I feel good about this patch. I'd like to commit to trunk soon if there are no
objections.
> MMapDirectory makes too many maps for CFS
> -----------------------------------------
>
> Key: LUCENE-4364
> URL: https://issues.apache.org/jira/browse/LUCENE-4364
> Project: Lucene - Core
> Issue Type: Bug
> Reporter: Robert Muir
> Attachments: LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch,
> LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch, LUCENE-4364.patch,
> LUCENE-4364.patch
>
>
> While looking at LUCENE-4123, i thought about this:
> I don't like how mmap creates a separate mapping for each CFS slice; to me
> this is way too many mappings.
> Instead I think its slicer should map the .CFS file once, and then when asked
> for an offset+length slice of that, it should hand out .duplicate()d buffers
> of that single master mapping.
> Then when you close the .CFS, it closes that one mapping.
> This is probably too scary for 4.0 and we should take our time, but I think
> we should do it.
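For illustration, a rough sketch of that single-mapping idea (class and method
names here are hypothetical, not the actual MMapDirectory code):

// Hypothetical sketch, not the actual Lucene API: map the .cfs file once,
// then hand out duplicate()d views of that single master mapping for each
// compound-file slice instead of calling map() once per slice.
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

class CompoundFileMapping implements AutoCloseable {
  private final RandomAccessFile raf;
  private final MappedByteBuffer master; // the one mapping for the whole .cfs

  CompoundFileMapping(String path) throws IOException {
    raf = new RandomAccessFile(path, "r");
    master = raf.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, raf.length());
  }

  // Each slice shares the master mapping; duplicate() copies only the
  // position/limit bookkeeping, not the mapped memory.
  ByteBuffer slice(long offset, long length) {
    ByteBuffer dup = master.duplicate();
    dup.position((int) offset);
    dup.limit((int) (offset + length));
    return dup.slice();
  }

  // Closing the compound file releases the single mapping (actual unmapping
  // is left to the GC or an explicit cleaner, as in MMapDirectory).
  @Override
  public void close() throws IOException {
    raf.close();
  }
}

The point is that all slices share one mapped region, so closing the compound
file tears down a single mapping rather than one per slice.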
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]