Hi folks. I've been experimenting with our new scalar quantization
support - yay, thanks for adding it! I'm finding that when I index a
large number of large vectors, enabling quantization (vs simply
indexing the full-width floats) requires more heap - I keep getting
OOMs and have to increase heap size. I took a heap dump, and not
surprisingly I found some big arrays of floats and bytes, and the
first one I traced was referenced by vector writers involved in a
merge (Lucene99FlatVectorsWriter.FieldsWriter.vectors). Is this
expected? I wonder if there is an opportunity to move some of this
off-heap? I can imagine that when we requantize we need to scan all
the vectors to determine the new quantization settings. Maybe we
could do two passes: merge the float vectors while recalculating the
quantization settings, and then re-scan to do the actual quantization
- roughly like the sketch below?
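(Toy code, not Lucene internals - TwoPassQuantizer and its methods
are names I made up, and a real scalar quantizer would presumably
derive its range from quantiles/confidence intervals rather than the
raw min/max I use here. The point is just that each pass streams the
vectors, so neither pass needs them all buffered on heap at once.)

import java.util.Arrays;
import java.util.List;

public class TwoPassQuantizer {

  // Pass 1: one streaming scan over the merged vectors to find the
  // global min/max. Only O(1) state is kept; the Iterable could be
  // backed by float vectors already written to disk by the merge.
  static float[] computeRange(Iterable<float[]> vectors) {
    float min = Float.POSITIVE_INFINITY;
    float max = Float.NEGATIVE_INFINITY;
    for (float[] v : vectors) {
      for (float x : v) {
        if (x < min) min = x;
        if (x > max) max = x;
      }
    }
    return new float[] {min, max};
  }

  // Pass 2: re-scan and quantize one vector at a time to byte codes,
  // so only the current vector is on heap.
  static byte[] quantize(float[] v, float min, float max) {
    float scale = max > min ? 255f / (max - min) : 0f;
    byte[] out = new byte[v.length];
    for (int i = 0; i < v.length; i++) {
      int code = Math.round((v[i] - min) * scale);
      out[i] = (byte) Math.max(0, Math.min(255, code)); // clamp to [0, 255]
    }
    return out;
  }

  public static void main(String[] args) {
    List<float[]> vectors =
        List.of(new float[] {0.1f, -0.4f, 2.0f}, new float[] {1.5f, 0.0f, -1.0f});
    float[] range = computeRange(vectors); // pass 1
    for (float[] v : vectors) {
      System.out.println(Arrays.toString(quantize(v, range[0], range[1]))); // pass 2
    }
  }
}

Since pass 1 keeps only constant state per field, the merged float
vectors could stay off-heap between the two passes (e.g. written to
disk during the merge and re-read for quantization).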
