addIndexesNoOptimize is only for shards.
But this [pending patch/contribution] is similar to what you're seeking, I think:
https://issues.apache.org/jira/browse/LUCENE-1879
It does not actually merge the indexes, but rather keeps two parallel
indexes in sync so you can use ParallelReader to search them.
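For reference, a minimal sketch of the ParallelReader approach, assuming Lucene 2.4-era APIs; the paths, class name and variable names are illustrative, and both indexes must hold the same documents in the same docID order:

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.ParallelReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class ParallelSearchSketch {
    public static IndexSearcher open(String slowPath, String fastPath) throws Exception {
        Directory slowDir = FSDirectory.getDirectory(slowPath);
        Directory fastDir = FSDirectory.getDirectory(fastPath);

        ParallelReader parallel = new ParallelReader();
        parallel.add(IndexReader.open(slowDir));  // slowly changing fields
        parallel.add(IndexReader.open(fastDir));  // frequently updated fields

        // Searches see the union of the fields from both indexes.
        return new IndexSearcher(parallel);
    }
}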
: I am working with CLucene. Kindly tell me which forum to use to find a solution.
A web search for "clucene forum" turns up this as the top result...
http://sourceforge.net/mailarchive/forum.php?forum_name=clucene-developers
-Hoss
This is an issue found by FindBugs.
In the file FSDirectory, the method void touchFile() should return the boolean
result of the setLastModified() method call.
public abstract class FSDirectory extends Directory {
    @Override
    public void touchFile(String name) {
        ensureOpen();
        File file = new File(directory, name);
        // FindBugs: the boolean returned by setLastModified() is ignored here
        file.setLastModified(System.currentTimeMillis());
    }
}
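A hypothetical variant along the lines FindBugs suggests - surfacing the boolean instead of dropping it (whether to throw or merely log the failure is a judgment call, and this is only a sketch, not the project's actual fix):

@Override
public void touchFile(String name) throws IOException {
    ensureOpen();
    File file = new File(directory, name);
    if (!file.setLastModified(System.currentTimeMillis())) {
        throw new IOException("setLastModified failed for " + file);
    }
}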
If the query is a very selective one, you can go through the XML
document and do the counting.
If the query is not so selective, which is usually the case, and the
number of matches is large, then basically all the values need to be loaded
into memory, or onto a solid-state disk, to do a fast count.
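As a rough illustration of the "load all values into memory" route, here is a sketch assuming Lucene 2.4-era APIs and a single-valued field (the class and field names are illustrative; multi-valued fields would need a different approach):

import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.FieldCache;
import org.apache.lucene.search.HitCollector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class UniqueValueCounter {
    public static int countUnique(IndexReader reader, Query query) throws Exception {
        // One value per document, held in memory by the FieldCache.
        final String[] values = FieldCache.DEFAULT.getStrings(reader, "myField");
        final Set<String> unique = new HashSet<String>();
        IndexSearcher searcher = new IndexSearcher(reader);
        searcher.search(query, new HitCollector() {
            public void collect(int doc, float score) {
                if (values[doc] != null) {
                    unique.add(values[doc]);
                }
            }
        });
        return unique.size();
    }
}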
> Looks like it's because the query
> coming in is a ComplexPhraseQuery and
> the Highlighter doesn't currently know how to handle that
> type.
>
> It would need to be rewritten first, barring the special
> handling it
> needs - but unfortunately, that will break multi-term query
> highlighting
> unle
If you need faceting on top of Lucene and you're not using Solr, Bobo-browse
( http://bobo-browse.googlecode.com ) is a high-performance open source
faceting library which may suit your needs. You're asking for "all facet
values", which in bobo isn't terribly hard to get: because of the way bobo
k
Given two parallel indexes which contain the same products but different
fields, one with slowly changing fields and one with fields which are
updated regularly:
Is it possible to periodically merge these to form a single index (thereby
representing a frozen snapshot in time)?
For example: Can indexWriter.addIndexesNoOptimize handle this, or was
My IndexReader and Searcher are open all the time. I am reopening them at a
constant interval.
Below is the code sequence:
1. DB optimize
2. Close writer
3. Open writer
4. Reopen new reader
5. Close old reader
6. Close old searcher
>> If the old IndexReader is still open when the optimize completes
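A minimal sketch of steps 4-6 of that sequence, assuming Lucene 2.4's IndexReader.reopen(); the class and variable names are illustrative:

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;

public class ReopenSketch {
    private IndexReader reader;
    private IndexSearcher searcher;

    public synchronized void maybeReopen() throws IOException {
        IndexReader newReader = reader.reopen();
        if (newReader != reader) {  // reopen() returned a new instance
            IndexReader oldReader = reader;
            IndexSearcher oldSearcher = searcher;
            reader = newReader;
            searcher = new IndexSearcher(newReader);
            oldSearcher.close();  // does not close the underlying reader
            oldReader.close();    // release the old files so they can be deleted
        }
    }
}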
On Tue, 2009-11-03 at 10:23 +0100, Henrik Hjalmarsson wrote:
> I have gotten a demand for an API method that returns an XML response,
> listing all the indexes in this application and the number of unique
values these indexes can have, filtered by a query that is received in
> the method request.
In my experience the most frequent cause is not getting an exact match
on terms. Are you sure that you have an exact match i.e. you're not
analyzing "MyUniqueId" to "myuniqueid"?
How are you determining that the call was successful? You could try
running a query first to verify that a doc you th
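One quick way to check, assuming the ID field was indexed un-analyzed (the field name and value here are illustrative):

int df = reader.docFreq(new Term("id", "MyUniqueId"));
if (df == 0) {
    // the exact term is not in the index - check how the field was analyzed
}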
Paul Taylor wrote:
For backwards compatibility I have to change queries for the track
field to the recording field. I did this by overriding
QueryParser.newTermQuery() as follows:
protected Query newTermQuery(Term term) {
    if ("track".equals(term.field())) {  // compare field names with equals(), not ==
        return super.newTermQuery(new Term("recording", term.text()));
    }
    return super.newTermQuery(term);
}
Any leads on this issue would be greatly appreciated.
My logs show that the updateDocument call was successful, but the status of the
document was not updated. Using Luke, I could see the same document (before the
update) added at the end. Its document ID shows it as the last record, but it was
inserted an
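For what it's worth, updateDocument() is an atomic delete-then-add, so the updated document is re-added at the end of the index with a new docID - which matches what Luke shows. A sketch, assuming a unique "id" field:

writer.updateDocument(new Term("id", docId), doc);
// behaves like:
//   writer.deleteDocuments(new Term("id", docId));
//   writer.addDocument(doc);
// The old copy is only marked deleted until its segment is merged away.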
Hi,
Could anyone comment on how I should handle one-to-many relationships of
domain objects in Lucene? I have been searching the archive but was unable
to find an answer. I have read about Compass, but I am afraid it will also
cost some performance penalty; any link to a performance comparison will b
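One common answer is to denormalize, assuming Lucene 2.4-era Field APIs: either index one document per child, or flatten the children into the parent document with repeated (multi-valued) fields. A sketch of the latter (field names and values are illustrative):

Document order = new Document();
order.add(new Field("orderId", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
// one-to-many: add the same field once per child value
for (String item : new String[] { "book", "pen" }) {
    order.add(new Field("item", item, Field.Store.YES, Field.Index.NOT_ANALYZED));
}
writer.addDocument(order);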
Hi Michael,
> Does that mean you no longer see the original problem (changes not
> being reflected)?
Yes. The deleted documents do not appear in search results any more. I am
not sure if they are flushed to disk at that time yet, but at least there is
a sign that they are "deleted". I have st
It depends on the relative timing.
If the old IndexReader is still open when the optimize completes then
the files it has open cannot be deleted.
But, if that IndexReader hadn't been reopened in a while, it's
possible it did not in fact have the files of the just-merged segments
open (in which ca
Well, Lucene is blazingly quick and sometimes things take less time
than one might expect, but your combination of large and very large is
not encouraging. It doesn't sound like the new API method would
necessarily need an exact reply - could you run something in the
background out of peak hours
Hello
I am trying to develop an API for a search application that is using Lucene
2.4.1.
The search application is maintained by RAA (a Swedish government organization
that keeps track of historical and cultural data).
I have gotten a demand for an API method that returns an XML response, listing
all the indexes in this application and the number of unique values these
indexes can have, filtered by a query that is received in the method request.
Hi Michael,
Thanks a lot for your advice.
> Can you verify you are in fact reopening the reader that's reading the
> same Directory the writer is writing to?
Yes. I have a single, configurable index path, so I cannot make a mistake
here.
> Also, you are failing to close the old reader after op
I am reopening the reader and closing the old one. I am not facing this issue
on Linux; otherwise Linux would show (deleted) under /proc/pid/fd/.
searcher.getIndexReader().close();
searcher.close();
Regards
Ganesh
- Original Message -
From: "Michael McCandless"
StandardAnalyzer will, amongst other things, convert everything to
lowercase, which means that term queries on mixed- or upper-case text
will fail to match.
There is some info on indexing XML docs in the FAQ
http://wiki.apache.org/lucene-java/LuceneFAQ#How_can_I_index_XML_documents.3F
and I'm sure t
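A small illustration of the mismatch, assuming Lucene 2.4-era APIs (the "id" field and value are illustrative):

QueryParser qp = new QueryParser("id", new StandardAnalyzer());
Query analyzed = qp.parse("MyUniqueId");   // lowercased to id:myuniqueid
Query exact = new TermQuery(new Term("id", "MyUniqueId"));  // case preserved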