Yes, with the lookup API. It returns the token with <b> </b> appended, which is why it has to interact with the API. But how do I iterate over my real index? In Scala:
def infixSuggest() {
  val sourceindex = new File("/tmp/lucene/1374960475771")
  val reader = DirectoryReader.open(FSDirectory.open(sourceindex))
  val searcher = new IndexSearcher(reader)
  val suggester = new AnalyzingInfixSuggester(Version.LUCENE_44,
    new File("/tmp/lucene/1374619766134"), analyzer, analyzer, 3)
  val indReader = searcher.getIndexReader
  val maxid = indReader.maxDoc()
  for (id <- 0 until maxid) {
    val doc = searcher.doc(id)
    val keys = Array[TermFreqPayload](
      new TermFreqPayload(doc.get("contents"), 3, new BytesRef(doc.get("id"))))
    val iterator = new TermFreqPayloadArrayIterator(keys)
    // build() recreates the suggester index each call, so only this one document survives
    suggester.build(iterator)
  }
}

I figured out that the AnalyzingInfixSuggester has to have its own index and read from the source index (the one I want to get words from). It works, but each pass through the loop starts from a clean index, i.e. only the last document ends up tokenized and indexed. Mike said that it has no incremental support, so how do I pass my whole source index to it?
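For what it's worth, one workaround I am considering (just a sketch, not tested, using the same classes, field names, and index paths as the snippet above) is to collect one TermFreqPayload per document first and then call build() a single time over the whole collection, instead of once per document:

def infixSuggestAll() {
  val sourceindex = new File("/tmp/lucene/1374960475771")
  val reader = DirectoryReader.open(FSDirectory.open(sourceindex))
  val searcher = new IndexSearcher(reader)
  val suggester = new AnalyzingInfixSuggester(Version.LUCENE_44,
    new File("/tmp/lucene/1374619766134"), analyzer, analyzer, 3)

  // Gather one entry per document from the source index
  val keys = scala.collection.mutable.ArrayBuffer[TermFreqPayload]()
  for (id <- 0 until reader.maxDoc()) {
    val doc = searcher.doc(id)
    keys += new TermFreqPayload(doc.get("contents"), 3, new BytesRef(doc.get("id")))
  }

  // Build the suggester once over all entries, so nothing gets thrown away
  suggester.build(new TermFreqPayloadArrayIterator(keys.toArray))
}

Is that the intended way to do it, or is there a way to hand the suggester the source index directly?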