Unfortunately, the FST-based suggesters currently must be heap
resident.  In theory this is fixable, e.g. if we could memory-map the
FST and then access it via a DirectByteBuffer ... maybe open a Jira
issue to explore this possibility?

You could also try AnalyzingInfixSuggester; it uses a "normal" Lucene
index (though it does load things up into in-memory DocValues fields
by default).  And of course it differs from the other suggesters in
that it does infix rather than "pure prefix" matching.  You can see it
running at http://jirasearch.mikemccandless.com ... try typing "fst",
for example.
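
To make the idea concrete, here is a minimal, untested sketch of a
disk-backed AnalyzingInfixSuggester.  It assumes the Lucene 4.4 API
(constructor and method signatures may differ in other releases), and
the index path and the "dict" InputIterator are placeholders you would
supply yourself:

```java
// Minimal sketch (untested): a disk-backed AnalyzingInfixSuggester,
// assuming the Lucene 4.4 API -- signatures may differ in other releases.
import java.io.File;
import java.util.List;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.search.suggest.Lookup.LookupResult;
import org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester;
import org.apache.lucene.util.Version;

public class InfixDemo {
  public static void main(String[] args) throws Exception {
    StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_44);

    // The suggester's index lives on disk, so it does not have to keep a
    // large FST on the heap (DocValues are still loaded into RAM by default).
    AnalyzingInfixSuggester suggester = new AnalyzingInfixSuggester(
        Version.LUCENE_44, new File("/tmp/suggest-index"), analyzer);

    // "dict" is a placeholder: any InputIterator over your terms, e.g. a
    // FileDictionary or a DocumentDictionary over an existing index.
    // suggester.build(dict);

    // Infix (not pure-prefix) matching, top 5 hits:
    List<LookupResult> hits = suggester.lookup("fst", false, 5);
    for (LookupResult hit : hits) {
      System.out.println(hit.key + " (" + hit.value + ")");
    }
    suggester.close();
  }
}
```

Whether the heap savings are enough for older Android devices would
need measuring, since the DocValues fields are still RAM resident.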



Mike McCandless

http://blog.mikemccandless.com


On Wed, Aug 7, 2013 at 9:32 AM, Anna Björk Nikulásdóttir
<anna.b....@gmx.de> wrote:
> Hi,
>
> I am using Lucene 4.3 on Android for term auto-suggestions (>500,000
> terms). I am using both FuzzySuggester and AnalyzingSuggester, each for
> their specific strengths. Everything works great, but my app consumes
> 69 MB of RAM, most of it dedicated to the suggester classes. That is too
> much for many older devices, and Android imposes RAM limits on them.
> As I understand it, these suggester classes consume RAM because they use
> in-memory automata. Is it possible - similar to Lucene indexes - to keep
> these automata on "disk" rather than in memory, or is there an
> alternative approach with similarly good results that reads most of its
> data from disk/flash?
>
> regards,
>
> Anna.
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>
