[
https://issues.apache.org/jira/browse/LUCENE-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982596#comment-14982596
]
David Smiley commented on LUCENE-6863:
--------------------------------------
Did you consider a hash lookup instead of a binary search, as was done in
LUCENE-5688? I just read the comments there and it seems promising for very
sparse data.
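To make sure we're talking about the same thing, here is a minimal sketch (not
the LUCENE-5688 patch, and the class and field names are made up) contrasting a
binary search over the docs that actually have a value with a hash lookup:
{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Illustration only: two ways to answer "what is the value for docID?"
// when only a small subset of documents carries a value.
class SparseValuesSketch {
  private final int[] docsWithValue;   // sorted docIDs that have a value
  private final long[] values;         // values[i] belongs to docsWithValue[i]
  private final Map<Integer, Long> docToValue = new HashMap<>();

  SparseValuesSketch(int[] docsWithValue, long[] values) {
    this.docsWithValue = docsWithValue;
    this.values = values;
    for (int i = 0; i < docsWithValue.length; i++) {
      docToValue.put(docsWithValue[i], values[i]);
    }
  }

  /** O(log n) per lookup, where n is the number of docs that have a value. */
  long getByBinarySearch(int docID, long missingValue) {
    int idx = Arrays.binarySearch(docsWithValue, docID);
    return idx >= 0 ? values[idx] : missingValue;
  }

  /** Expected O(1) per lookup, at the cost of a larger in-memory structure. */
  long getByHash(int docID, long missingValue) {
    Long v = docToValue.get(docID);
    return v != null ? v : missingValue;
  }
}
{code}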
Regarding the performance trade-off in your table -- I find it hard to evaluate
whether it's worth it. Does {{+214%}} mean the whole query (search & top-10 doc
retrieval) took over twice as long? Or is this measurement isolated to just the
sort part somehow? How fast was this query anyway? If we're making a 3ms query
take 9ms, that wouldn't bother me as much as a 300ms query taking 900ms. Of
course it depends on the amount of data.
> Store sparse doc values more efficiently
> ----------------------------------------
>
> Key: LUCENE-6863
> URL: https://issues.apache.org/jira/browse/LUCENE-6863
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Adrien Grand
> Assignee: Adrien Grand
> Attachments: LUCENE-6863.patch, LUCENE-6863.patch
>
>
> For both NUMERIC fields and ordinals of SORTED fields, we store data in a
> dense way. As a consequence, if you have only 1000 documents out of 1B that
> have a value, and 8 bits are required to store those 1000 numbers, we will
> not require 1KB of storage, but 1GB.
> I suspect this mostly happens in abuse cases, but still it's a pity that we
> explode storage requirements. We could try to detect sparsity and compress
> accordingly.
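For scale, the arithmetic in the description works out roughly as follows (a
back-of-the-envelope sketch, not Lucene code; the inputs are the 1B docs, 1000
values and 8 bits per value quoted above):
{code:java}
public class SparseStorageMath {
  public static void main(String[] args) {
    long maxDoc = 1_000_000_000L; // documents in the segment
    long docsWithValue = 1_000;   // documents that actually have a value
    long bitsPerValue = 8;        // bits needed per stored number

    // Dense encoding reserves a slot for every document.
    long denseBytes = maxDoc * bitsPerValue / 8;
    // A sparse encoding would only store the documents that have a value
    // (plus some index of which docIDs those are, ignored here).
    long sparseBytes = docsWithValue * bitsPerValue / 8;

    System.out.println("dense:  ~" + denseBytes + " bytes (~1 GB)");
    System.out.println("sparse: ~" + sparseBytes + " bytes (~1 KB)");
  }
}
{code}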