Re: standardTokenizer - how to terminate at End of Stream

2005-09-21 Thread Beady Geraghty
Thank you for your response. That was my original goal. On 9/21/05, Chris Hostetter <[EMAIL PROTECTED]> wrote:
> : Since I used the StandardAnalyzer when I originally created the index,
> : I therefore use the StandardTokenizer to tokenize the input stream.
> : Is there a better way to do what I …

Re: standardTokenizer - how to terminate at End of Stream

2005-09-21 Thread Chris Hostetter
: Since I used the StandardAnalyzer when I originally created the index,
: I therefore use the StandardTokenizer to tokenize the input stream.
: Is there a better way to do what I try to do?
: From your comment below, it appears that I should just use next() instead

if your goal is to recreate …
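The loop being suggested can be sketched as follows. This is a minimal, self-contained stand-in, not the real Lucene classes: a toy whitespace tokenizer plays the role of StandardTokenizer (which actually wraps a java.io.Reader), and the point illustrated is the TokenStream-style contract of next() — it returns the next token and signals end of stream with null, so no explicit EOF check is needed.

```java
import java.util.ArrayList;
import java.util.List;

// Toy whitespace tokenizer standing in for Lucene's StandardTokenizer;
// what matters here is the next() contract: next token, or null when
// the stream is exhausted.
class SimpleTokenStream {
    private final String[] parts;
    private int pos = 0;

    SimpleTokenStream(String text) {
        parts = text.trim().isEmpty() ? new String[0] : text.trim().split("\\s+");
    }

    // Mirrors the TokenStream.next() convention: null == end of stream.
    String next() {
        return pos < parts.length ? parts[pos++] : null;
    }
}

public class TokenLoop {
    // The idiomatic consume-until-null loop; no EOF constant needed.
    static List<String> tokens(String text) {
        SimpleTokenStream ts = new SimpleTokenStream(text);
        List<String> out = new ArrayList<>();
        String t;
        while ((t = ts.next()) != null) {
            out.add(t);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(tokens("how to terminate at end of stream"));
        // prints [how, to, terminate, at, end, of, stream]
    }
}
```

With the real StandardTokenizer the loop body is the same shape; only the construction (from a Reader) and the token type differ.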

Re: standardTokenizer - how to terminate at End of Stream

2005-09-21 Thread Beady Geraghty
Thank you for the response. I was trying to do something really simple: I want to extract the context for terms and phrases from files that satisfy some (many) queries. I *know* that file test.txt is a hit (because I queried the index, and it tells me that test.txt satisfies the query). Then, I o…
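The stated goal — pulling the surrounding context for each occurrence of a term out of a file already known to be a hit — can be sketched like this. Everything here is hypothetical illustration: plain whitespace splitting stands in for StandardTokenizer, and the method name, window size, and matching rule are made up for the example.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: tokenize a hit file's text and collect a window of
// surrounding tokens ("context") around each occurrence of a query term.
public class ContextExtractor {
    static List<String> contexts(String text, String term, int window) {
        String[] toks = text.trim().isEmpty() ? new String[0] : text.trim().split("\\s+");
        List<String> out = new ArrayList<>();
        for (int i = 0; i < toks.length; i++) {
            if (toks[i].equalsIgnoreCase(term)) {
                // Clamp the window to the token array's bounds.
                int lo = Math.max(0, i - window);
                int hi = Math.min(toks.length, i + window + 1);
                out.add(String.join(" ", Arrays.copyOfRange(toks, lo, hi)));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String text = "the quick brown fox jumps over the lazy dog";
        System.out.println(contexts(text, "fox", 2));
        // prints [quick brown fox jumps over]
    }
}
```

In a real Lucene setup the tokens would come from the same analyzer used at index time, so the contexts line up with what actually matched the query.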

Re: standardTokenizer - how to terminate at End of Stream

2005-09-21 Thread Erik Hatcher
Could you elaborate on what you're trying to do, please? Using StandardTokenizer in this low-level fashion is practically unheard of, so I think knowing what you're attempting to do will help us help you :)

Erik

On Sep 21, 2005, at 12:17 PM, Beady Geraghty wrote:
> I see some definiti…

Re: standardTokenizer - how to terminate at End of Stream

2005-09-21 Thread Beady Geraghty
I see some definitions in StandardTokenizerConstants.java. Perhaps these are the values for t.kind. Perhaps I was confused between the usage of getNextToken() and next() in the standard analyzer. When should one use getNextToken() instead of next()? I am just starting to use Lucene, so ple…