Index files should not be disappearing unless you're using the form
of opening an IndexWriter that creates a new index. We'd need to see
the code you use to open the IW to provide more help.
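For reference, a rough sketch of how the open mode is usually chosen when
opening the writer (Lucene 3.1 style; the path and analyzer below are only
placeholders):

  import java.io.File;
  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.index.IndexWriterConfig;
  import org.apache.lucene.store.Directory;
  import org.apache.lucene.store.FSDirectory;
  import org.apache.lucene.util.Version;

  Directory dir = FSDirectory.open(new File("/path/to/index"));  // placeholder path
  IndexWriterConfig cfg = new IndexWriterConfig(Version.LUCENE_31,
      new StandardAnalyzer(Version.LUCENE_31));
  // OpenMode.CREATE throws away any existing index on open;
  // APPEND and CREATE_OR_APPEND leave the existing files alone.
  cfg.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
  IndexWriter writer = new IndexWriter(dir, cfg);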
If all you're doing is looking at the index directory, segments will disappear
as they are merged, so that's expected.
You can ignore the warning.
But you haven't told us a thing about *how* the failure occurs or
what gets reported. What exactly are you doing? What exactly
fails (i.e., do you just not find files? Get a stack trace? Get a
"class not found error"?)
We really cannot help at all without more information.
Hi All,
I have the following questions about the Lucene IndexWriter. I am using
version 3.1.0.
While indexing documents,
1. When is a good time to commit changes (indexWriter.commit; see the sketch
below), or should I just close the writer after indexing is done so that the
commit happens automatically?
2. When is the good time to m
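For illustration, a rough sketch of the two commit options from question 1
(Lucene 3.1 style; "dir", "analyzer" and the document loop are placeholders):

  import org.apache.lucene.document.Document;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.index.IndexWriterConfig;
  import org.apache.lucene.util.Version;

  IndexWriter writer = new IndexWriter(dir,
      new IndexWriterConfig(Version.LUCENE_31, analyzer));
  try {
    for (Document doc : docs) {
      writer.addDocument(doc);
      // Option 1: call writer.commit() at checkpoints (e.g. every N docs)
      // so readers can see the changes and a crash loses less work.
    }
  } finally {
    // Option 2: just close; close() commits any pending changes itself.
    writer.close();
  }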
Hi Mike and Simon,
Thanks again for your help, but I've created my own solution by writing a
custom span query.
Now, I can perform searches where some of the terms that I supply in the query
can be missing from the result.
This way it allows for slop on the query side AND on the result side.
Hi,
I am responsible for moving a Teragram application to Lucene. I have
identified the following issues, so I would like verification that what the
existing rules do is not possible in Lucene, or that there is a work-around.
1) Teragram uses a Polish Notation for its rules, i.e.
Note: DIST_2 is a pro
Hello,
there is an index with a lot of docs; 2 of them are:
doc1:
1.Field=id ITSVopfOLB=ITS---f0-- Value= 192
2.Field=name ITSVopfOLB=ITS0-- Value= queen
doc2:
1.Field=id ITSVopfOLB=ITS---f0-- Value= 701492
2.Field=name ITSVopfOLB=ITS0-- V
On Fri, Jul 15, 2011 at 4:45 PM, Uwe Schindler wrote:
> Hi,
>
>> The crappy thing is that to actually detect if there are any tokens in the
>> field
>> you need to make a TokenStream which can be used to read the first token
>> and then rewind again. I'm not sure if there is such a thing in Lucene.
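One option along those lines is CachingTokenFilter, which buffers the tokens
it pulls from the wrapped stream so they can be replayed; a rough sketch (the
analyzer, field name and text below are just placeholders):

  import java.io.StringReader;
  import org.apache.lucene.analysis.CachingTokenFilter;
  import org.apache.lucene.analysis.TokenStream;

  TokenStream ts = analyzer.tokenStream("body", new StringReader(text));
  CachingTokenFilter cached = new CachingTokenFilter(ts);
  boolean hasTokens = cached.incrementToken();  // peek: true if the field produced at least one token
  cached.reset();                               // rewind; later consumers replay the cached tokens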