[EMAIL PROTECTED] wrote:
This isn't quite true. If you open IndexWriter with autoCommit=false,
then none of the changes you do with it will be visible to an
IndexReader, even one reopened while IndexWriter is doing its work,
until you close the IndexWriter.
Where are the docs for this transaction buffered?
> This isn't quite true. If you open IndexWriter with autoCommit=false,
> then none of the changes you do with it will be visible to an
> IndexReader, even one reopened while IndexWriter is doing its work,
> until you close the IndexWriter.
Where are the docs for this transaction buffered?
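For anyone skimming the thread, here is a minimal sketch of the behavior described above, assuming the Lucene 2.2-era constructor that takes the autoCommit flag directly; the index path and field name are made up for illustration:

import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class NoAutoCommitSketch {
    public static void main(String[] args) throws IOException {
        Directory dir = FSDirectory.getDirectory("/path/to/index"); // hypothetical path
        // autoCommit=false: nothing below is visible to any IndexReader,
        // even one (re)opened while this writer is working.
        IndexWriter writer = new IndexWriter(dir, false, new StandardAnalyzer());
        Document doc = new Document();
        doc.add(new Field("body", "some text",
                Field.Store.YES, Field.Index.TOKENIZED));
        writer.addDocument(doc); // buffered, not yet committed
        writer.close();          // the commit: changes become visible here
    }
}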
> How about just copying and performing your indexing (or index-write-related)
> operations on the copy, and then performing a rename operation followed by
> reopening of the index readers.
This is how we did it until now. But the indexes are getting bigger and bigger
(50 GB and more), and so we are
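A sketch of that copy-then-swap scheme, for anyone following along; the directory layout is hypothetical and the copy step is deliberately simplified:

import java.io.File;
import java.io.IOException;

public class CopySwapSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        File live = new File("/indexes/live"); // hypothetical layout
        File work = new File("/indexes/work");
        File old  = new File("/indexes/old");
        // 1. Copy the live index (real code would copy it file by file in Java).
        Runtime.getRuntime()
               .exec(new String[] {"cp", "-r", live.getPath(), work.getPath()})
               .waitFor();
        // 2. ... open an IndexWriter on "work" and apply all changes ...
        // 3. Swap the updated copy into place.
        live.renameTo(old);
        work.renameTo(live);
        // 4. ... reopen IndexReader/IndexSearcher against "live" ...
    }
}

The obvious cost, and the reason it stops scaling, is step 1: the copy is proportional to index size, which is exactly the 50 GB problem mentioned above.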
Anshum wrote:
But the downside to this would be, in case your daemon crashes in the
meantime or you need to restart the daemon, the index would not be usable
until you have completed your indexing process.
This isn't quite true. If you open IndexWriter with autoCommit=false,
then none of the changes you do with it will be visible to an IndexReader,
even one reopened while IndexWriter is doing its work, until you close
the IndexWriter.
Hi,
As per my knowledge, you may do any of the below processes while searching
(in parallel), just that the changes will not be reflected until you reopen the
index readers (by either using the reopen command or closing and opening
them explicitly).
But the downside to this would be, in case your daemon crashes in the
meantime or you need to restart the daemon, the index would not be usable
until you have completed your indexing process.
The answer to all 3 is yes, but you'll have to re-open your
IndexReader to see any of those changes.
An IndexReader always searches the "point in time" snapshot of the
index as of the moment it was opened.
Any & all changes done with an IndexWriter (including opening a new
index in the
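A small illustration of that point-in-time behavior, using the close-and-reopen pattern (later Lucene versions also offer IndexReader.reopen()); the path, field, and term are invented for the example:

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;

public class SnapshotSketch {
    public static void main(String[] args) throws IOException {
        // This reader is a snapshot of the index as of this moment.
        IndexReader reader = IndexReader.open("/path/to/index");
        IndexSearcher searcher = new IndexSearcher(reader);
        Hits before = searcher.search(new TermQuery(new Term("body", "foo")));

        // ... meanwhile an IndexWriter adds, deletes, or optimizes ...

        // None of that is visible until we open a fresh snapshot:
        searcher.close();
        reader.close();
        reader = IndexReader.open("/path/to/index");
        searcher = new IndexSearcher(reader);
        Hits after = searcher.search(new TermQuery(new Term("body", "foo")));
        System.out.println(before.length() + " -> " + after.length());
        searcher.close();
        reader.close();
    }
}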
Hi,
I have some questions about indexing:
1. Is it possible to open indexes with MultiReader+IndexSearcher and add
documents to these indexes simultaneously?
2. Is it possible to open indexes with MultiReader+IndexSearcher and
optimize these indexes simultaneously?
3. Is it possible to open index
On 1-Aug-07, at 11:34 AM, Joe Attardi wrote:
On 8/1/07, Erick Erickson <[EMAIL PROTECTED]> wrote:
Use a SpanNearQuery with a slop of 0 and specify true for ordering.
What that will do is require that the segments you specify must appear
in order with no gaps. You have to construct this yourself since there's
no support for SpanQueries
I suspect you're going to have to deal with wildcards if you really want
this functionality.
Erick
On 8/1/07, Joe Attardi <[EMAIL PROTECTED]> wrote:
>
> On 8/1/07, Erick Erickson <[EMAIL PROTECTED]> wrote:
> >
> > Use a SpanNearQuery with a slop of 0 and specify true for ordering.
> > What that will do is require that the segments you specify must appear
> > in order with no gaps.
On 8/1/07, Erick Erickson <[EMAIL PROTECTED]> wrote:
>
> Use a SpanNearQuery with a slop of 0 and specify true for ordering.
> What that will do is require that the segments you specify must appear
> in order with no gaps. You have to construct this yourself since there's
> no support for SpanQueries
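Putting that advice into code: a sketch of the ordered, zero-slop span query over the four octets, assuming the IP was indexed as separate tokens in a field called "ip" (the field name is an assumption):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class IpSpanSketch {
    // Matches 192.168.1.100 as four adjacent tokens, in order:
    // slop = 0 forbids gaps, inOrder = true preserves octet order.
    public static SpanNearQuery ipQuery() {
        SpanQuery[] octets = new SpanQuery[] {
            new SpanTermQuery(new Term("ip", "192")),
            new SpanTermQuery(new Term("ip", "168")),
            new SpanTermQuery(new Term("ip", "1")),
            new SpanTermQuery(new Term("ip", "100"))
        };
        return new SpanNearQuery(octets, 0, true);
    }
}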
Think of a custom analyzer class rather than a custom query parser. The
QueryParser uses your analyzer, so it all just "comes along".
Here's the approach I'd try first, off the top of my head:
Yes, break the IP etc. up into octets and index them
tokenized.
Use a SpanNearQuery with a slop of 0 and specify true for ordering.
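One way to realize "break the IP up into octets" at index time is a tiny analyzer that treats '.' as a token separator; the class is hypothetical, written against the old CharTokenizer API from this era:

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CharTokenizer;
import org.apache.lucene.analysis.TokenStream;

public class IpOctetAnalyzer extends Analyzer {
    public TokenStream tokenStream(String fieldName, Reader reader) {
        // Split on '.' as well as whitespace, so "192.168.1.100"
        // becomes the adjacent tokens ["192", "168", "1", "100"].
        return new CharTokenizer(reader) {
            protected boolean isTokenChar(char c) {
                return c != '.' && !Character.isWhitespace(c);
            }
        };
    }
}

Because QueryParser runs the same analyzer over the query text, a search for the literal 192.168.1.100 is broken into the same octet tokens, which is what makes it all just "come along".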
Hi Erick,
> First, consider using your own analyzer and/or breaking the IP addresses
> up by substituting ' ' for '.' upon input.
Do you mean breaking the IP up into one token for each segment, like ["192",
"168", "1", "100"]?
> But on to your question. Please post what you mean by
> "a large number". 10,000? 1,000,000,000?
First, consider using your own analyzer and/or breaking the IP addresses
up by substituting ' ' for '.' upon input. Otherwise, you'll have endless
issues as time passes.
But on to your question. Please post what you mean by
"a large number". 10,000? 1,000,000,000? We have no clue
from your post.
Hi again, everyone. First of all, I want to thank everyone for their
extremely helpful replies so far.
Also, I just started reading the book "Lucene in Action" last night. So far
it's an awesome book, so a big thanks to the authors.
Anyhow, on to my question. As I've mentioned in several of my pre