Michael McCandless wrote:
To work around this, on catching an OOME from any of IndexWriter's
methods, you should 1) forcibly remove the write lock
(the IndexWriter.unlock static method).
IndexWriter.unlock(...) is 2.4 only.
On earlier versions, use the following instead:
directory.makeLock(IndexWriter.WRITE_LOCK_NAME)
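The "forcibly remove the write lock" step can be sketched without the Lucene jars; here a small stand-in lock class mimics the obtain/release shape of Lucene's Lock (the real calls would be IndexWriter.unlock(dir) on 2.4, or dir.makeLock(IndexWriter.WRITE_LOCK_NAME).release() on earlier versions; ForceUnlockSketch and SimpleLock are illustrative names only):

```java
// Sketch: clearing a stale write lock left behind after an OOME so a
// new writer can obtain it. SimpleLock stands in for Lucene's Lock.
import java.util.concurrent.atomic.AtomicBoolean;

public class ForceUnlockSketch {
    static class SimpleLock {
        private final AtomicBoolean held = new AtomicBoolean(false);

        boolean obtain() { return held.compareAndSet(false, true); }

        void release() { held.set(false); }
    }

    public static void main(String[] args) {
        SimpleLock writeLock = new SimpleLock();
        writeLock.obtain();       // the writer that hit OOME left this held
        writeLock.release();      // forcibly clear it, as in the advice above
        System.out.println(writeLock.obtain()); // a fresh writer can now lock
    }
}
```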
your problem is that you use one Document for all source files.
i think you have 2 solutions:
1. create a new Document in the first step of the loop (like the code below)
[code]
for (int i = 0; i < sourcefiles.size(); i++) {
    for (int j = 0; j < sourcefiles.elementAt(i).getNumberOfRevisions(
Was there something that you changed, or some disk issues?
The code seemed fine. I could have asked which version of Lucene you
are using, but that seems beside the point for this issue, as you say
it is the same indexer running on the same source and machine that is
now taking longer.
Hi Gurus,
We are using Lucene for creating indexes on some database columns, and
suddenly my index creation time seems to have increased considerably.
Here is the code snippet we are using; we are wondering how the
index creation time has increased so suddenly. Any pointers, please?
Date start = n
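For comparing runs like this, a minimal timing harness helps isolate whether the slowdown is in the indexing loop itself; this is a generic sketch (the Runnable body is a placeholder, not the poster's actual indexing code):

```java
// Minimal timing harness for measuring an index-build run.
public class IndexTiming {
    static long timeMillis(Runnable work) {
        long start = System.nanoTime();      // monotonic, unlike new Date()
        work.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            // stand-in for the addDocument() loop over the result set
            for (int i = 0; i < 100_000; i++) { Math.sqrt(i); }
        });
        System.out.println("indexing took " + elapsed + " ms");
    }
}
```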
Yeah, I saw the change to flush(). Trying to work out the correct
strategy for our IndexWriter handling now. We probably should not be
using autocommit for our writers.
It was brought up by others that the OutOfMemoryError handling
requirements are a fairly strong part of the contract now - bu
Sorry I forgot to follow up with the issue, but yup that's the one.
I did also fix IW to disallow flush after it has seen an OOME.
Mike
Jed Wesley-Smith wrote:
Michael,
https://issues.apache.org/jira/browse/LUCENE-1429
Thanks mate. I'll try and work out the client handling policy of the
IndexWriter calls. I see that flush now aborts the transaction as well...
cheers,
jed.
Michael McCandless wrote:
Woops, you're right: this is a bug. I'll open an
Also, can I change maxMergeDocs?
Thanks.
Tom
-Original Message-
From: Mark Miller [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 28, 2008 1:39 PM
To: java-user@lucene.apache.org
Subject: Re: Change the merge factor for an existing index?
Just change it. Merges will start obeying the new
hmm yes, that's true. omg. but in the meantime i solved the problem by making a
new database query for the messages using the ids i extract with the search.
thx a lot for the help.
----- Original Message -----
From: "Daniel Noll" <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Tuesday, 2
Sebastian Müller wrote:
Hi Erick,
thank you for your fast reply. the problem with the ID and the wrong
results is solved. that was kind of a noob failure of mine ;)
but the message is still null. but i think somehow i can fix that ;)
You had passed Field.Store.NO for the message, so it isn't particul
----- Original Message -----
From: "Erick Erickson" <[EMAIL PROTECTED]>
I think your root problem is that you're using the same Document
over and over to add to the index. Your inner loop should be
something like:
for (int j = 0; j < sourcefiles.elementAt(i).getNumberOfRevisions(); j++) {
    Document doc = new Document();
    doc.add(new Field("id",
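Erick's point can be illustrated without Lucene at all: here a plain java.util List stands in for Document (whose add() likewise appends a field), and the class and method names are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// Why a fresh Document per addDocument() matters: reusing one mutable
// object means every indexed entry shares (and accumulates) its fields.
public class FreshDocDemo {
    static List<List<String>> reusedIndex() {
        List<List<String>> index = new ArrayList<>();
        List<String> shared = new ArrayList<>();   // one "Document" reused
        for (int j = 0; j < 3; j++) {
            shared.add("id=rev-" + j);             // add() appends a field
            index.add(shared);                     // same object added each time
        }
        return index;   // every entry is the same list with all 3 fields
    }

    static List<List<String>> freshIndex() {
        List<List<String>> index = new ArrayList<>();
        for (int j = 0; j < 3; j++) {
            List<String> doc = new ArrayList<>();  // new "Document" per revision
            doc.add("id=rev-" + j);
            index.add(doc);
        }
        return index;   // each entry holds exactly its own field
    }

    public static void main(String[] args) {
        System.out.println(reusedIndex()); // three identical accumulated entries
        System.out.println(freshIndex());  // rev-0, rev-1, rev-2 kept distinct
    }
}
```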
hi folks,
i have great trouble while using lucene to implement search functionality to my
application:
this way i index:
[code]
public void indexData() throws CorruptIndexException,
LockObtainFailedException, IOException {
Analyzer analyzer = new StandardAnalyzer();
Just change it. Merges will start obeying the new merge factor
seamlessly.
- Mark
On Oct 27, 2008, at 1:07 PM, Tom Saulpaugh <[EMAIL PROTECTED]>
wrote:
Hello,
We are currently using lucene v2.1 and we are planning to upgrade to
lucene v2.4.
Can we change the merge factor for an existi
Woops, you're right: this is a bug. I'll open an issue, fold in your
nice test case & fix it. Thanks Jed!
On hitting OOM, IndexWriter marks that its internal state (buffered
documents, deletions) may be corrupt, and so it rolls back to the last
commit instead of flushing a new segment.
To worka
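The client-side contract described here (after an OOME, do not flush or commit; roll back instead) can be sketched with a stub writer; StubWriter and indexSafely below are illustrative stand-ins, not the Lucene IndexWriter API:

```java
// Sketch of OOME handling: after an OutOfMemoryError the writer's
// buffered state may be corrupt, so the caller must not commit/flush
// and should roll back to the last commit.
public class OomeHandlingSketch {
    static class StubWriter {
        boolean hitOome = false;
        boolean rolledBack = false;

        void addDocument(Object doc) {
            hitOome = true;                 // simulate running out of memory
            throw new OutOfMemoryError("simulated");
        }

        void rollback() { rolledBack = true; }

        void commit() {                     // mirrors "disallow flush after OOME"
            if (hitOome) throw new IllegalStateException("cannot flush after OOME");
        }
    }

    static boolean indexSafely(StubWriter w, Object doc) {
        try {
            w.addDocument(doc);
            w.commit();
            return true;
        } catch (OutOfMemoryError e) {
            w.rollback();   // discard buffered docs/deletes back to last commit
            return false;
        }
    }

    public static void main(String[] args) {
        StubWriter w = new StubWriter();
        System.out.println(indexSafely(w, "doc")); // false: OOME was simulated
        System.out.println(w.rolledBack);          // true: we rolled back
    }
}
```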