I've got some follow-up from the user.
> Is it possible the disk filled up? Though I'd expect an IOE during the
> write or close in that case.
>
> In this case nothing should be lost in the index: the merge simply
> refused to commit itself, since it detected something went wrong. But
> I believe we al
http://www.lucidimagination.com/blog/2011/09/12/learn-lucene/ - pasted below too
Hi everyone... I'm not usually much on advertising/hyping events where I speak
and teach, but I'm really interested in drumming up a solid attendance for our
Lucene training that I'll be teaching at Lucene EuroCon i
Hi,
here is the code:
writer.commit(); // make sure nothing is buffered
mgr.printIndexState("Expunging deletes using " + writer.getConfig().getMergePolicy());
setDirectLogger(); // redirect infoStream toward log4j
writer.expungeDeletes();
Hmm... are you using IndexReader.numDeletedDocs to check?
Did you commit from the writer and then reopen the IndexReader before
calling .numDeletedDocs? Else the reader won't see the change.
Mike McCandless
http://blog.mikemccandless.com
On Sat, Sep 10, 2011 at 11:58 PM, wrote:
> Hi, even wi
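Mike's point above -- commit from the writer, then reopen the reader before checking numDeletedDocs -- can be illustrated with a small toy model. Note this is not the Lucene API; the classes below are invented purely to show why a reader opened before the commit keeps reporting the old state:

```java
import java.util.HashSet;
import java.util.Set;

// Toy stand-in for a writer/index: deletes are buffered until commit().
class ToyIndex {
    private final Set<Integer> committedDeletes = new HashSet<>();
    private final Set<Integer> bufferedDeletes = new HashSet<>();

    void deleteDocument(int docId) { bufferedDeletes.add(docId); }

    void commit() {
        committedDeletes.addAll(bufferedDeletes);
        bufferedDeletes.clear();
    }

    // A "reader" is a snapshot of the committed state at open time.
    Set<Integer> openReader() { return new HashSet<>(committedDeletes); }
}

public class ReopenDemo {
    public static void main(String[] args) {
        ToyIndex index = new ToyIndex();
        Set<Integer> staleReader = index.openReader(); // opened too early

        index.deleteDocument(7);
        index.commit();

        // The reader opened before the commit still sees zero deletions...
        System.out.println("stale numDeletedDocs = " + staleReader.size());   // 0
        // ...only a reader opened after the commit sees the change.
        System.out.println("fresh numDeletedDocs = " + index.openReader().size()); // 1
    }
}
```

The same point-in-time behavior is why, with a real IndexReader, you must reopen it after the writer commits.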
I committed a jdoc fix.
Mike McCandless
http://blog.mikemccandless.com
On Mon, Sep 12, 2011 at 10:00 AM, Mark Miller wrote:
>
> On Sep 9, 2011, at 3:35 PM, Robert Muir wrote:
>
>> On Fri, Sep 9, 2011 at 3:07 PM, Uwe Schindler wrote:
>>> Hi,
>>>
>>> This is still some kind of bug, because expun
Hi all,
That's right, hold on to your hats, we're holding another London Search
Social on the 18th Oct.
http://www.meetup.com/london-search-social/events/33218292/
Venue is still TBD, but highly likely to be a quiet(ish) central London pub.
There's usually a healthy mix of experience and backgro
One way to do this is to create an Analyzer and Tokenizer that are
used on both index and search side. In the tokenStream method, you
return a new normalizing tokenizer; in the Tokenizer, you override the
normalize method to ignore apostrophes.
--dho
2011/9/12 SBS :
> In our situation we need it
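The essential point in the advice above is that the exact same normalization must run at index time and at query time. Setting the Lucene Analyzer/Tokenizer machinery aside, the principle can be sketched in plain Java (the normalize helper below is invented for illustration, not a Lucene method):

```java
public class ApostropheNormalizer {
    // Strip apostrophes so that "O'Brien" and "OBrien" become the same token.
    // Handles both the ASCII apostrophe and the curly (U+2019) variant.
    static String normalize(String token) {
        return token.replace("'", "").replace("\u2019", "");
    }

    public static void main(String[] args) {
        String indexed = normalize("O'Brien"); // what the indexing side stores
        String queried = normalize("OBrien");  // what the search side looks up
        System.out.println(indexed.equals(queried)); // true: both are "OBrien"
    }
}
```

If the normalization ran on only one side, the stored token and the query token would differ and the match would silently fail.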
On Sep 9, 2011, at 3:35 PM, Robert Muir wrote:
> On Fri, Sep 9, 2011 at 3:07 PM, Uwe Schindler wrote:
>> Hi,
>>
>> This is still some kind of bug, because expungeDeletes is documented to
>> remove all deletes. Maybe we need to modify MergePolicy?
>>
>
> we should correct the javadocs for exp
Hello, everyone!
Could anyone please explain how to get offsets for hits? I.e. I have a big text
file and want to find some string in it. As a result of this operation, I need
an array of offsets (in characters) from the beginning of the file for each
occurrence of the string.
As an example,
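Within Lucene this kind of question is usually answered with term vectors that store offsets, or with the highlighter package. For a single text already in memory, though, a plain scan also yields the character offsets asked about above (a minimal sketch, not Lucene code):

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetFinder {
    // Collect the character offset of every occurrence of needle in haystack.
    static List<Integer> findOffsets(String haystack, String needle) {
        List<Integer> offsets = new ArrayList<>();
        int from = 0;
        while ((from = haystack.indexOf(needle, from)) != -1) {
            offsets.add(from);
            from += 1; // advance by one so overlapping occurrences are found too
        }
        return offsets;
    }

    public static void main(String[] args) {
        System.out.println(findOffsets("to be or not to be", "to")); // [0, 13]
    }
}
```

Note these are offsets in Java chars; for files read from disk they only equal byte offsets when the encoding is single-byte.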
Thanks Michael,
at least I would say it would be good to state this clearly in the Lucene
API.
More specifically, both WeightedSpanTermExtractor#getWeightedSpanTerms
and #getWeightedSpanTermsWithScores should reflect this in their Javadoc,
because both call the private method extract(Query query,
M