[
https://issues.apache.org/jira/browse/LUCENE-3892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431951#comment-13431951
]
Adrien Grand commented on LUCENE-3892:
--------------------------------------
bq. Curiously it seems even faster than w/ acceptableOverheadRatio=0.2! But it
makes it clear we should do a hard cutover.
While working on LUCENE-4098 I had been running some tests with the bulk
version of PackedInts.get (which uses the same decoding methods as
BlockPacked), and it seemed that the bottleneck was memory bandwidth rather
than CPU, at least for large arrays. If you look at the last graph of
http://people.apache.org/~jpountz/packed_ints3.html, throughput seems to
depend more on the memory efficiency of the chosen implementation than on the
way it stores its data. Maybe we are experiencing a similar phenomenon here...
Unless I am missing something, the only difference between BlockPacked and
Block is that BlockPacked decodes directly from the byte[] whereas Block uses
ByteBuffer.asLongBuffer to translate the bytes into longs and then decodes
from those longs... Interesting that this extra step has so much overhead...
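To make that difference concrete, here is a small self-contained sketch of the
two decode paths, using plain java.nio rather than the actual Block/BlockPacked
decoders, for the simple case of values packed 16 bits each (big-endian). The
class and method names are made up for illustration.
{code:java}
import java.nio.ByteBuffer;
import java.nio.LongBuffer;

public class DecodePaths {

  // "Block"-style: view the bytes as longs via ByteBuffer.asLongBuffer,
  // then split every long into four 16-bit values.
  // Assumes values.length is a multiple of 4.
  static void decodeViaLongBuffer(byte[] bytes, int[] values) {
    final LongBuffer longs = ByteBuffer.wrap(bytes).asLongBuffer();
    for (int i = 0; i < values.length; i += 4) {
      final long block = longs.get(i >>> 2);
      values[i]     = (int) (block >>> 48) & 0xFFFF;
      values[i + 1] = (int) (block >>> 32) & 0xFFFF;
      values[i + 2] = (int) (block >>> 16) & 0xFFFF;
      values[i + 3] = (int) block & 0xFFFF;
    }
  }

  // "BlockPacked"-style: shift the values straight out of the byte[].
  static void decodeFromBytes(byte[] bytes, int[] values) {
    for (int i = 0, b = 0; i < values.length; ++i, b += 2) {
      values[i] = ((bytes[b] & 0xFF) << 8) | (bytes[b + 1] & 0xFF);
    }
  }
}
{code}
Both methods produce the same int[] from the same byte[]; the only difference
is whether the intermediate LongBuffer view is involved.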
> Add a useful intblock postings format (eg, FOR, PFOR, PFORDelta,
> Simple9/16/64, etc.)
> -------------------------------------------------------------------------------------
>
> Key: LUCENE-3892
> URL: https://issues.apache.org/jira/browse/LUCENE-3892
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Michael McCandless
> Labels: gsoc2012, lucene-gsoc-12
> Fix For: 4.1
>
> Attachments: LUCENE-3892-BlockTermScorer.patch,
> LUCENE-3892-blockFor&hardcode(base).patch,
> LUCENE-3892-blockFor&packedecoder(comp).patch,
> LUCENE-3892-blockFor-with-packedints-decoder.patch,
> LUCENE-3892-blockFor-with-packedints-decoder.patch,
> LUCENE-3892-blockFor-with-packedints.patch, LUCENE-3892-blockpfor.patch,
> LUCENE-3892-bulkVInt.patch, LUCENE-3892-direct-IntBuffer.patch,
> LUCENE-3892-for&pfor-with-javadoc.patch, LUCENE-3892-handle_open_files.patch,
> LUCENE-3892-non-specialized.patch,
> LUCENE-3892-pfor-compress-iterate-numbits.patch,
> LUCENE-3892-pfor-compress-slow-estimate.patch, LUCENE-3892_for_byte[].patch,
> LUCENE-3892_for_int[].patch, LUCENE-3892_for_unfold_method.patch,
> LUCENE-3892_pfor_unfold_method.patch, LUCENE-3892_pulsing_support.patch,
> LUCENE-3892_settings.patch, LUCENE-3892_settings.patch
>
>
> On the flex branch we explored a number of possible intblock
> encodings, but for whatever reason never brought them to completion.
> There are still a number of open issues with patches in various
> states.
> Initial results (based on a prototype) were excellent (see
> http://blog.mikemccandless.com/2010/08/lucene-performance-with-pfordelta-codec.html).
> I think this would make a good GSoC project.