Hi,

thanks for your reply. In several other implementations I’ve seen this pattern
of calling input.incrementToken() in a while loop within the filter’s
incrementToken method. Is this approach recommended, or are there hidden traps
(e.g. memory consumption, dependency on filter ordering, and so on)?
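For what it’s worth, here is a minimal, self-contained sketch of the pattern I mean. It uses simplified stand-in interfaces rather than the real org.apache.lucene.analysis classes (the class and method names ListTokenStream, LastNDedupFilter, and current() are my own inventions for illustration), but the consume-loop shape is the same: the filter pulls from its input until it finds a token it wants to emit, while keeping a bounded window of the last n tokens as context. The window itself, not the while loop, is where the extra memory goes.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.Iterator;

// Hypothetical stand-ins for Lucene's TokenStream/TokenFilter, for
// illustration only; the real contract lives in org.apache.lucene.analysis.
interface TokenStream {
    // Advances to the next token, returning false at end of stream.
    boolean incrementToken();
    String current();
}

// A trivial producer backed by a list of strings.
class ListTokenStream implements TokenStream {
    private final Iterator<String> it;
    private String current;
    ListTokenStream(Iterable<String> tokens) { this.it = tokens.iterator(); }
    public boolean incrementToken() {
        if (!it.hasNext()) return false;
        current = it.next();
        return true;
    }
    public String current() { return current; }
}

// A filter keeping a sliding window of the last n emitted tokens, using the
// while(input.incrementToken()) pattern: it drops any token still present in
// the window and keeps pulling until it finds one to emit.
class LastNDedupFilter implements TokenStream {
    private final TokenStream input;
    private final int n;
    private final Deque<String> window = new ArrayDeque<>();
    private String current;

    LastNDedupFilter(TokenStream input, int n) { this.input = input; this.n = n; }

    public boolean incrementToken() {
        while (input.incrementToken()) {      // consume upstream tokens
            String tok = input.current();
            if (!window.contains(tok)) {      // not a recent duplicate
                if (window.size() == n) window.removeFirst();
                window.addLast(tok);
                current = tok;
                return true;                  // emit this token
            }
            // recent duplicate: loop again and pull the next token
        }
        return false;                         // upstream exhausted
    }

    public String current() { return current; }
}

public class Demo {
    public static void main(String[] args) {
        TokenStream ts = new LastNDedupFilter(
            new ListTokenStream(Arrays.asList("a", "b", "a", "c", "a")), 2);
        StringBuilder out = new StringBuilder();
        while (ts.incrementToken()) out.append(ts.current()).append(' ');
        System.out.println(out.toString().trim());  // prints "a b c a"
    }
}
```

Only one token is in flight per call, so the loop itself adds no memory pressure beyond the window. Whether it interacts badly with filter ordering is exactly what I’m asking about.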


Best,
Edoardo

> On 21 Apr 2017, at 17:32, Ahmet Arslan <iori...@yahoo.com.INVALID> wrote:
> 
> Hi,
> LimitTokenCountFilter is used to index the first n tokens. Maybe it can inspire
> you.
> 
> Ahmet
> On Friday, April 21, 2017, 6:20:11 PM GMT+3, Edoardo Causarano 
> <edoardo.causar...@gmail.com> wrote:
> Hi all.
> 
> I’m relatively new to Lucene, so I have a couple of questions about writing
> custom filters.
> 
> The way I understand it, one would extend 
> org.apache.lucene.analysis.TokenFilter and override #incrementToken to 
> examine the current token provided by a stream token producer.
> 
> I’d like to write some logic that considers the last n seen tokens, so I need
> to access this context as the filter chain is scanning the stream.
> 
> Can anyone point to an example of such a construct? 
> 
> Also, how would I access and update this context keeping multithreading in
> mind? Actually, what is the threading model of a TokenStream? Can anyone point
> out a good summary of it?
> 
> TIA
> 
> 
> Best,
> Edoardo
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org

