Forget what I said! I managed to answer a question that wasn't asked! :)

If you have the term vectors stored, it is fairly quick to re-assemble a token stream for the document using a TermVectorMapper. Otherwise it will be really slow.
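Roughly like this (untested sketch against the 2.3-dev term vector API; TokenStreamRebuildingMapper is just a name I made up, not an existing Lucene class, and it only works if the field's term vector was indexed with both positions and offsets):

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.index.TermVectorMapper;
import org.apache.lucene.index.TermVectorOffsetInfo;

/**
 * Collects terms, offsets and positions from a stored term vector so they
 * can be replayed as a TokenStream in the original position order.
 */
public class TokenStreamRebuildingMapper extends TermVectorMapper {

  /** a token paired with its absolute position in the original stream */
  private static class PositionedToken {
    final int position;
    final Token token;
    PositionedToken(int position, Token token) {
      this.position = position;
      this.token = token;
    }
  }

  private final List<PositionedToken> tokens = new ArrayList<PositionedToken>();

  public void setExpectations(String field, int numTerms,
                              boolean storeOffsets, boolean storePositions) {
    // no per-field setup needed for this sketch
  }

  public void map(String term, int frequency,
                  TermVectorOffsetInfo[] offsets, int[] positions) {
    // one Token per occurrence of the term
    for (int i = 0; i < frequency; i++) {
      Token token = new Token(term,
          offsets[i].getStartOffset(), offsets[i].getEndOffset());
      tokens.add(new PositionedToken(positions[i], token));
    }
  }

  /** the collected tokens, sorted back into position order */
  public TokenStream tokenStream() {
    Collections.sort(tokens, new Comparator<PositionedToken>() {
      public int compare(PositionedToken a, PositionedToken b) {
        return a.position - b.position;
      }
    });
    return new TokenStream() {
      private int index = 0;
      private int lastPosition = -1;
      public Token next() throws IOException {
        if (index == tokens.size()) {
          return null;
        }
        PositionedToken next = tokens.get(index++);
        // restore gaps between positions as position increments
        next.token.setPositionIncrement(next.position - lastPosition);
        lastPosition = next.position;
        return next.token;
      }
    };
  }
}

Feeding it from an IndexReader and straight into the TokenStream-based Field constructor would then look something like this ("reader", "doc", docNumber and the "body" field name are placeholders):

TokenStreamRebuildingMapper mapper = new TokenStreamRebuildingMapper();
reader.getTermFreqVector(docNumber, "body", mapper);
doc.add(new Field("body", mapper.tokenStream()));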


--
karl

On 22 Jan 2008, at 08:04, Karl Wettin wrote:


On 21 Jan 2008, at 16:37, Ard Schrijvers wrote:

Is there a way to reuse a Lucene document that was indexed and analyzed
before, when only a single Field has changed?

I don't think you can reuse document instances like that. You could, however, pre-tokenize the fields that will stay the same and reuse the tokens across all documents (fields), perhaps using a CachingTokenFilter.

http://lucene.zones.apache.org:8080/hudson/job/Lucene-Nightly/javadoc/org/apache/lucene/document/Field.html#Field(java.lang.String,%20org.apache.lucene.analysis.TokenStream)
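Something along these lines is what I have in mind (untested sketch against the 2.3-dev API; the class name, the "body"/"id" field names and StandardAnalyzer are just placeholders for whatever you use):

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CachingTokenFilter;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class ReusedFieldExample {

  /** indexes two documents that share the same pre-analyzed "body" field */
  public static void addBoth(IndexWriter writer, String unchangedText,
                             String firstId, String secondId) throws IOException {
    Analyzer analyzer = new StandardAnalyzer();

    // analyze the unchanging field once; the filter caches the tokens
    // the first time the writer consumes the stream
    CachingTokenFilter cachedBody = new CachingTokenFilter(
        analyzer.tokenStream("body", new StringReader(unchangedText)));

    Document first = new Document();
    first.add(new Field("body", cachedBody));
    first.add(new Field("id", firstId, Field.Store.YES, Field.Index.UN_TOKENIZED));
    writer.addDocument(first);

    // rewind the cache and replay the same tokens for the next document,
    // so only the field that actually changed needs to be analyzed again
    cachedBody.reset();
    Document second = new Document();
    second.add(new Field("body", cachedBody));
    second.add(new Field("id", secondId, Field.Store.YES, Field.Index.UN_TOKENIZED));
    writer.addDocument(second);
  }
}

Whether this actually saves much depends on how expensive your analysis is compared to everything else addDocument does; you still pay for writing the postings of every document.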


--
karl

