Thank you for your response.
That was my original goal.
On 9/21/05, Chris Hostetter <[EMAIL PROTECTED]> wrote:
>
> : Since I used the StandardAnalyzer when I originally created the index,
> : I therefore use the StandardTokenizer to tokenize the input stream.
> : Is there a better way to do what I try to do?
> : From your comment below, it appears that I should just use next() instead.
>
> if your goal is to recreate ...
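
For the record, here is roughly what I have now with next(). It is only an
untested sketch, and the "contents" field name is just a placeholder from my
setup (this is against the 1.4-style API):

import java.io.FileReader;
import java.io.Reader;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class TokenizeWithNext {
    public static void main(String[] args) throws Exception {
        // Same analyzer that was used to build the index, so the tokens
        // produced here line up with what was indexed.
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Reader reader = new FileReader("test.txt");
        TokenStream stream = analyzer.tokenStream("contents", reader);

        // next() hands back org.apache.lucene.analysis.Token objects and
        // returns null once the stream is exhausted.
        for (Token t = stream.next(); t != null; t = stream.next()) {
            System.out.println(t.termText() + " [" + t.startOffset()
                    + "-" + t.endOffset() + "] " + t.type());
        }
        stream.close();
        reader.close();
    }
}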
Thank you for the response.
I was trying to do something really simple - I want to extract the context
for terms and phrases from files that satisfy some (many) queries.
I *know* that file test.txt is a hit (because I queried the index, and
it tells me that test.txt satisfies the query). Then, I open test.txt and
tokenize it to pull out the context around the matching terms.
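
Concretely, something along these lines is what I am after. This is an
untested sketch; the index path, the "contents" and "path" field names, and
the query terms are made up for illustration, and it assumes the path was
stored in the index:

import java.io.FileReader;
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class ContextExtractor {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        IndexSearcher searcher = new IndexSearcher("/path/to/index");

        // Terms whose surrounding context we want (placeholder query).
        Query query = new QueryParser("contents", analyzer).parse("apache lucene");
        Set wanted = new HashSet();
        wanted.add("apache");
        wanted.add("lucene");

        Hits hits = searcher.search(query);
        for (int i = 0; i < hits.length(); i++) {
            Document doc = hits.doc(i);
            String path = doc.get("path");  // assumes the path was stored at index time

            // Re-tokenize the original file with the same analyzer used for
            // indexing and report the offsets where the query terms occur.
            TokenStream stream = analyzer.tokenStream("contents", new FileReader(path));
            for (Token t = stream.next(); t != null; t = stream.next()) {
                if (wanted.contains(t.termText())) {
                    System.out.println(path + ": '" + t.termText()
                            + "' at offsets " + t.startOffset() + "-" + t.endOffset());
                }
            }
            stream.close();
        }
        searcher.close();
    }
}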
Could you elaborate on what you're trying to do, please?
Using StandardTokenizer in this low-level fashion is practically
unheard of, so I think knowing what you're attempting to do will help
us help you :)
Erik
On Sep 21, 2005, at 12:17 PM, Beady Geraghty wrote:
I see some definitions in StandardTokenizerConstants.java
Perhaps these are the values for t.kind.
Perhaps I was confused between the usage of
getNextToken() and next() in the StandardTokenizer.
When should one use getNextToken() instead of next()?
I am just starting to use Lucene, so please ...
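
To make the question concrete, this is the sort of thing I was experimenting
with. It is an untested sketch against the 1.4-style generated classes, and
"The quick brown fox" is just sample text:

import java.io.StringReader;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizerConstants;

public class TwoWaysToTokenize {
    public static void main(String[] args) throws Exception {
        String text = "The quick brown fox";

        // JavaCC-level API: getNextToken() returns
        // org.apache.lucene.analysis.standard.Token, and its 'kind' field
        // holds the constants defined in StandardTokenizerConstants.
        StandardTokenizer low = new StandardTokenizer(new StringReader(text));
        org.apache.lucene.analysis.standard.Token jj = low.getNextToken();
        while (jj.kind != StandardTokenizerConstants.EOF) {
            System.out.println(jj.image + "  kind=" + jj.kind);
            jj = low.getNextToken();
        }

        // TokenStream API: next() returns org.apache.lucene.analysis.Token
        // (null at end of stream), which is what the rest of Lucene expects.
        StandardTokenizer high = new StandardTokenizer(new StringReader(text));
        for (org.apache.lucene.analysis.Token t = high.next(); t != null; t = high.next()) {
            System.out.println(t.termText() + "  type=" + t.type());
        }
        high.close();
    }
}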