Hi, when I try to view my index with Luke I get the loading error: "No sub-file
with id _18.f0 found".
Any ideas what could be causing this?
I'm using IndexWriter.setUseCompoundFile(true).
In the past it has worked fine without any problems. I'm on Windows XP with Java 1.5.
Regards,
[EMAIL PROTECTED]
Two queries about ranges:
1. field:[a TO z] does not return the same results as field:[z TO a].
I think it should. The standard QueryParser, or even the range query itself, should
ascertain the lowest and highest values and switch them around if necessary.
2. How do I search for negative numbers in a range? For example
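Both points can be sketched in plain Java (the helper names here are illustrative, not Lucene API): normalize reversed bounds before building the range query, and encode numbers with an offset plus zero-padding so that lexicographic order, which term-based range queries use, matches numeric order even for negatives:

```java
public class RangeHelpers {
    // Put reversed bounds back in order before building the range query.
    static String[] normalizeBounds(String lo, String hi) {
        return lo.compareTo(hi) <= 0 ? new String[] {lo, hi}
                                     : new String[] {hi, lo};
    }

    // Encode an int so lexicographic string order matches numeric order,
    // including negatives: shift into the non-negative range, then zero-pad.
    static String encode(int n) {
        long shifted = (long) n - Integer.MIN_VALUE; // 0 .. 2^32-1
        return String.format("%010d", shifted);
    }

    public static void main(String[] args) {
        String[] b = normalizeBounds("z", "a");
        System.out.println(b[0] + " " + b[1]);                    // a z
        System.out.println(encode(-5).compareTo(encode(3)) < 0);  // true
    }
}
```

Index the encoded form and run range queries against it; the same encoding must be applied to the query bounds.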
Hi
A database is used as our primary data store. Our lucene index is then
created and updated from this database.
We store the value of the database primary key in the Lucene index, as we
need to be able to identify documents across the database and the Lucene
index.
New documents are inserted into
I'm only a novice at these things, but if I had to do that right now I'd add
a document that represents that primary key, with a value representing the
next available number, and every time I go to do some additions I'd get it,
use the value, and then delete and re-add that document with the revised ne
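The counter-document pattern described above can be sketched in plain Java (a Map stands in here for the index; the key names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class CounterDoc {
    public static void main(String[] args) {
        Map<String, String> index = new HashMap<>(); // stand-in for the Lucene index
        index.put("__counter__", "1000");            // next available primary key

        // "Get it, use the value, then delete and re-add with the new value."
        int next = Integer.parseInt(index.get("__counter__"));
        index.put("doc-" + next, "new document body");
        index.remove("__counter__");
        index.put("__counter__", String.valueOf(next + 1));

        System.out.println(index.get("__counter__")); // 1001
    }
}
```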
Hello Lucene members,
I'm a silent member of the Lucene list. I have
been using Lucene for the last 6 to 8 months. I have finished with the
indexing and searching stuff successfully.
Now I'm stuck on one thing, i.e. reindexing. I got some help from some
Lucene members about thi
I am periodically getting a "Too many open files" error when searching. Currently
there are over 500 files in my Lucene directory. I am attempting to run
optimize() to reduce the number of files. However, optimize never finishes,
because whenever I run it, it quits with a Java exception, OutOfMemoryError
Hello All,
I am testing the boost value within the latest version of Lucene and I'm
inspecting the results through Luke.
For each FIELD that I want to boost I use the setBoost method, and
everything looks good. But Luke is refusing to expose the boost value
and keeps returning 1.0 for th
I'm pretty sure this is a bug or incompatibility with Luke - I'm using
boosted documents, and I seem to remember that Luke reported everything
as 1.0, even though my test applications showed things correctly.
The boost in the final app is working fine, so the functionality of
Lucene appears to be
Eh,
I did some tests after I sent the email to see if I was doing it
right and they confirmed your results. The scores were higher for
boosted content.
Thanks for the reply
[EMAIL PROTECTED] wrote:
I'm pretty sure this is a bug or incompatibility with Luke - I'm using
boosted documents
Hi Otis, sorry if I posted to the wrong group. I thought user was for
usage-type queries and dev was for development-type queries. As I was asking
about changing the code itself (rather than about interfacing with it), I
assumed this was a dev-forum issue. I'm still a bit confused... can you tell
me w
Steve Rajavuori wrote:
I am periodically getting "Too many open files" error when searching. Currently
there are over 500 files in my Lucene directory. I am attempting to run optimize( ) to
reduce the number of files. However, optimize never finishes because whenever I run it,
it quits with a
[EMAIL PROTECTED] wrote:
I'm pretty sure this is a bug or incompatibility with Luke - I'm using
boosted documents, and I seem to remember that Luke reported everything
as 1.0, even though my test applications showed things correctly.
The boost in the final app is working fine, so the functionality of
Lucene appears to be
Ah - that's useful to know. Although in that case I'd suggest that the
sensible thing for Luke to do would be to either remove the boost field,
or show it as "unavailable", instead of (misleadingly) displaying it as
1.0...
Cheers,
Tim.
-----Original Message-----
From: Andrzej Bialecki [mailto:[
Eh yeah. I remember reading that. The boost is calculated at indexing
time.
That didn't even click. Thanks.
Andrzej Bialecki wrote:
[EMAIL PROTECTED] wrote:
I'm pretty sure this is a bug or incompatibility with Luke - I'm using
boosted documents, and I seem to remember that Luke reported everything
: What is the easiest way to identify the maximum or highest primary key
: value in the lucene index?
The most straightforward way is to do a search for all documents, ordered
by the field you are interested in, and then get the value out of the
first document.
Under the covers, sorting works us
On 1/23/06, Chris Hostetter <[EMAIL PROTECTED]> wrote:
> use a TermEnum to iterate over all the values of the field, remembering
> the "previous" value each time until you run out of values
Bummer that you can't step backwards with a TermEnum... finding the
first term is cheap, but finding the last
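The iterate-and-remember loop Chris describes can be sketched in plain Java (a sorted iterator stands in here for a TermEnum; names are illustrative):

```java
import java.util.Iterator;
import java.util.TreeSet;

public class LastTerm {
    // Walk the sorted terms, remembering the previous one, until exhausted.
    static String lastTerm(Iterator<String> terms) {
        String previous = null;
        while (terms.hasNext()) {
            previous = terms.next();
        }
        return previous; // the highest term, or null if the field has no terms
    }

    public static void main(String[] args) {
        TreeSet<String> ids = new TreeSet<>(); // terms enumerate in sorted order
        ids.add("0001");
        ids.add("0042");
        ids.add("0007");
        System.out.println(lastTerm(ids.iterator())); // 0042
    }
}
```

Since term enumeration is forward-only, this is a full scan of the field's terms; zero-padded keys (as above) keep the lexicographic maximum equal to the numeric maximum.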
On Monday 23 January 2006 13:10, Gwyn Carwardine wrote:
...
>
> And now I've been pointed to QueryParser.jj. I wonder what language that is?
> And is QueryParser.java created from it? If so, how does it get from one to
> the other?! Help!
>
QueryParser.java is generated from QueryParser.jj by JavaCC.
I am looking for a Lucene API or workaround to answer the following
questions. I need to find out whether a particular field contains terms of a
certain length. Let me explain with an example.
Let's say there are
four fields: F1, F2, F3 and F4.
Hypothetical user query: "F1:lucene F2:* F3:? F4:group"
I am aware t
Hi,
Apologies if this question has been asked before on this list.
I am working on an application with a Lucene index whose performance
(response time for a query) has started degrading as its size has
increased.
The index is made up of approximately 10 million documents that have
11 fields. Th
Lucene scales with the number of unique terms in the index, not with the
number of documents or the size of the documents. Typically, you
should have at most 1 million unique terms for a set of 10 million
documents.
So the fact that you have 13 million unique terms in 10 million
documents tells me
:
: The index is made up of approximately 10 million documents that have
: 11 fields. The average document size is less then 1k. The index has
: a total of 13 million terms. The total index size is about 2.2 gig.
: The index is being updated relatively aggressively. In a 24hr period
: there may
Note: forwarded message attached.