Here is another approach:

StandardAnalyzer analyzer = new StandardAnalyzer();
StringReader reader = new StringReader("text to index...");
TokenStream stream = analyzer.tokenStream("content", reader);


Then use the Field constructor that accepts a token stream:

    Field(String name, TokenStream tokenStream)
    <http://lucene.zones.apache.org:8080/hudson/job/Lucene-Nightly/javadoc/org/apache/lucene/document/Field.html#Field%28java.lang.String,%20org.apache.lucene.analysis.TokenStream%29>

to add a field to a Document, and then add the Document to the index.
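To make the wrap-and-cache idea concrete without pulling in Lucene itself, here is a self-contained sketch of the pattern: a filter delegates to the real token stream but remembers every term it hands out, so after the consumer (IndexWriter, in Lucene) has drained the stream, the caller can still inspect the terms. The TokenStream interface, WhitespaceStream, and CachingFilter below are simplified stand-ins, not the real Lucene classes (Lucene's own version of this is CachingTokenFilter).

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for Lucene's TokenStream (hypothetical interface).
interface TokenStream {
    String next(); // next term, or null when the stream is exhausted
}

// Trivial whitespace tokenizer standing in for an analyzer's stream.
class WhitespaceStream implements TokenStream {
    private final String[] terms;
    private int pos = 0;
    WhitespaceStream(String text) { this.terms = text.trim().split("\\s+"); }
    public String next() { return pos < terms.length ? terms[pos++] : null; }
}

// Delegating filter in the spirit of CachingTokenFilter: passes terms
// through unchanged, but keeps a copy of everything it returns.
class CachingFilter implements TokenStream {
    private final TokenStream input;
    private final List<String> cache = new ArrayList<>();
    CachingFilter(TokenStream input) { this.input = input; }
    public String next() {
        String term = input.next();
        if (term != null) cache.add(term);
        return term;
    }
    List<String> cachedTerms() { return cache; }
}

public class Demo {
    public static void main(String[] args) {
        CachingFilter stream =
                new CachingFilter(new WhitespaceStream("text to index"));
        // The indexer would consume the stream here...
        while (stream.next() != null) { }
        // ...and afterwards the terms are still available to the caller.
        System.out.println(stream.cachedTerms());
    }
}
```

With the real API, you would hand the wrapped stream to the Field(String, TokenStream) constructor shown above instead of draining it yourself; the indexer consumes it during addDocument, and the cached terms remain available afterwards.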





On Jan 7, 2008 7:31 AM, Doron Cohen <[EMAIL PROTECTED]> wrote:

> Or, very similar, wrap the 'real' analyzer A with your analyzer that
> delegates to A but also keeps the returned tokens, possibly by
> using a CachingTokenFilter.
>
> On Jan 7, 2008 7:11 AM, Daniel Noll <[EMAIL PROTECTED]> wrote:
>
> > On Monday 07 January 2008 11:35:59 chris.b wrote:
> > > is it possible to add a document to an index and, while doing so, get
> > the
> > > terms in that document? If so, how would one do this? :x
> >
> > My first thought would be: when adding fields to the document, use the
> > Field
> > constructors which accept a TokenStream and pass in a CachingTokenStream
> > wrapped around the real token stream.
> >
> > Daniel
> >
>
