Also, keep in mind that optimization is a very disk-intensive process (and
therefore slow). It completely rewrites the index, so it should only be done
when you are not expecting the index to change for a while.
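For reference, the call in question is just IndexWriter.optimize(). A minimal
sketch, assuming a Lucene 2.3-style API and a hypothetical index path:

    import java.io.IOException;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class OptimizeIndex {
        public static void main(String[] args) throws IOException {
            // Open the existing index ('false' means do not create a new one).
            IndexWriter writer =
                new IndexWriter("/path/to/index", new StandardAnalyzer(), false);
            // Rewrites the whole index down to a single segment; heavy on disk I/O.
            writer.optimize();
            writer.close();
        }
    }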
...upgrade if possible. 1.6.0_06 is out, and I was wondering if anyone had
tested it with Lucene 2.3.2?
Thanks,
Stu Hood
Architecture Software Developer
Mailtrust, a Division of Rackspace
Solr does not do distributed indexing, but the development version _does_ do
distributed search, in addition to replication. Currently, you can manually
shard your data across a set of Solr instances, and then query them all by
adding a 'shards=localhost:8080/solr_1,localhost:8080/solr_2' parameter.
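For example, a search fanned out across both of those instances might look
like this (host names and query are illustrative):

    http://localhost:8080/solr_1/select?q=title:lucene&shards=localhost:8080/solr_1,localhost:8080/solr_2

The instance that receives the request queries each shard, merges the
results, and returns a single response.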
Hey Mike,
Thank you very much for looking into this issue!
I originally switched to the SerialMergeScheduler to try to work around this
bug: http://lucene.markmail.org/message/awkkunr7j24nh4qj . I switched back to
the ConcurrentMergeScheduler yesterday (since I would rather fail quickly due
t
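For context, the scheduler is a one-line setting on the writer. A minimal
sketch, assuming Lucene 2.3's API and a hypothetical index path:

    import java.io.IOException;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.ConcurrentMergeScheduler;
    import org.apache.lucene.index.IndexWriter;

    public class MergeSchedulerDemo {
        public static void main(String[] args) throws IOException {
            IndexWriter writer =
                new IndexWriter("/path/to/index", new StandardAnalyzer(), false);
            // ConcurrentMergeScheduler (the 2.3 default) runs merges in
            // background threads; SerialMergeScheduler would run them
            // inline in the thread that triggers them.
            writer.setMergeScheduler(new ConcurrentMergeScheduler());
            writer.close();
        }
    }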
erges: _mk:C70616 _ml:C88 _mp:C5
_mo:C9905 [total 1 pending]
IW 0 [main]: now commit transaction
IW 0 [main]: checkpoint: wrote segments file "segments_n"
IFD [main]: now checkpoint "segments_n" [11 segments ; isCommit = true]
IFD [main]: deleteCommits: now remove commit
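Log lines like the 'IW' and 'IFD' entries above come from the writer's
infoStream; assuming an already-open 2.3 IndexWriter named 'writer', they are
enabled with:

    writer.setInfoStream(System.out); // emits the merge/checkpoint/commit diagnostics quoted above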
Hey Mike,
Thanks for your input... the 'IndexWriter.close' call was actually in a
'finally' block around the merge code, without a 'catch', which I realized may
have been hiding the exception (I didn't realize close would block if an
exception had occurred).
I've moved the close out of the finally block; the code around the close now
looks like:
rget directory...");
this.targetDirectory.close();
System.out.println("...done.");
"""
... and the output clearly shows that we get stuck in close.
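For anyone following along, the exception-hiding shape described earlier was
roughly the following. This is a reconstruction, not the actual code;
'targetDirectory' and 'readers' stand in for the real fields:

    IndexWriter writer =
        new IndexWriter(targetDirectory, new StandardAnalyzer(), true);
    try {
        writer.addIndexes(readers); // the merge step that can throw
    } finally {
        // With no catch, any exception from addIndexes only propagates
        // after this runs; if close() blocks, the original failure is
        // never seen.
        writer.close();
    }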
---