"Erick Erickson" <[EMAIL PROTECTED]> wrote on 25/02/2007 07:05:21:
> Yes, I'm pretty sure you have to index the field (UN_TOKENIZED) to be able
> to fetch it with TermDocs/TermEnum! The loop I posted works like this
Once the database_id field is indexed this way, the newly added
API IndexW
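A minimal sketch of that pattern, assuming Lucene 2.x and an illustrative `database_id` field indexed as UN_TOKENIZED (one term per stored value):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;

public class TermDocsLookup {
    // Collects the Lucene doc numbers holding an exact term. This only
    // works when the field was indexed UN_TOKENIZED, so each stored
    // value is a single term that can be looked up directly.
    public static List lookup(IndexReader reader, Term t) throws IOException {
        List ids = new ArrayList();
        TermDocs td = reader.termDocs(t);
        try {
            while (td.next()) {
                ids.add(new Integer(td.doc()));
            }
        } finally {
            td.close();
        }
        return ids;
    }
}
```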
Hi Karl,
Seems I missed this email...
What is the status of this, have you solved it?
Doron
karl wettin <[EMAIL PROTECTED]> wrote on 13/02/2007 03:24:44:
>
> On 13 Feb 2007, at 04:33, Doron Cohen wrote:
>
> >> Running (once) "ant jar" from the trunk directory should do it.
> >
> > Did it solve the
Thanks for your answer.
I will use the latest version to check this. Unfortunately, I only have
access to the computer where the application will run once a week,
and I can't reproduce the error on my local machine or any other
computer I have access to.
So if someone has (better: had) the sa
Looks like this was caused by a corrupt Java installation. I was half expecting
to see a comment in the code that said
// Impossible event occurred
Antony
Antony Bowesman wrote:
When adding documents to an index, has anyone seen either
java.lang.ClassCastException: org.apache.lucene.analysi
The easiest way to pin this down is to get the backtrace from the
exception, e.g., e.printStackTrace(). That would tell a lot.
That said, prior to 2.1, lucene would put lock files outside the index
directory. I don't know if that's what you're hitting, though, because I
think the writer should hav
"robisbob" <[EMAIL PROTECTED]> wrote:
> I hope someone can help me. If I index a file directory I get the error
> you see here.
> > caught a class java.io.IOException
> > with message: Cannot delete _57e.tis
> > Exception in thread "main" java.io.IOException: Cannot delete _57e.tis
> >
Hi all,
I hope someone can help me. If I index a file directory I get the error
you see here.
caught a class java.io.IOException
with message: Cannot delete _57e.tis
Exception in thread "main" java.io.IOException: Cannot delete _57e.tis
at
org.apache.lucene.store.FSDirectory.deleteFi
what you are doing below is iterating over every term in your index, and
for each Term, recording if that term appears in more than one doc (using
IndexReader.document, which is a really bad idea in general in a loop like
this)
your original problem description was " 'Find Duplicate records for
Co
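The cheaper alternative can be sketched like this: walk the TermEnum for the key field and use its docFreq() instead of fetching any documents; a term whose docFreq is greater than one is a duplicated key. (Lucene 2.x API assumed; the field name is illustrative.)

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermEnum;

public class DuplicateKeys {
    // Returns the values of 'field' that occur in more than one document,
    // without ever calling IndexReader.document().
    public static List find(IndexReader reader, String field) throws IOException {
        List dups = new ArrayList();
        // terms(Term) positions the enum at the first term >= the given one,
        // so the do/while idiom visits that term before calling next().
        TermEnum te = reader.terms(new Term(field, ""));
        try {
            do {
                Term t = te.term();
                if (t == null || !t.field().equals(field)) break;
                if (te.docFreq() > 1) dups.add(t.text());
            } while (te.next());
        } finally {
            te.close();
        }
        return dups;
    }
}
```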
: Date: Mon, 26 Feb 2007 15:17:15 -
: Anybody?
: > Sent: 26 February 2007 13:36
the java-user list is good -- but amazingly enough it's not unheard of for
two *whole* hours to go by without getting a direct response to a
question -- particularly when the last question/answer posted t
I agree, might be a redundant check now. This test was added when the query
parser was enhanced to optionally allow a leading wild card (revision 468291),
but this case calls getWildCardQuery(), not getPrefixQuery().
Still, the check seems harmless - sort of defensive - protecting against the
case
: OK I'm not sure I understand your answer. I thought TermEnum gave you
: all the terms in an index, not from a search result.
:
: Let me clarify what I need. I'm looking for a way to find out all the
: values of the FIELD_FILTER_LETTER field for any given search.
:
: INDEX TIME: (done for each
On Monday 26 February 2007 at 20:12, Chris Lu wrote:
> Hi, Nicolas,
>
> Just a note: having one searcher is more than enough for ordinary usage,
> even for some production sites. But I do see some throughput gain from
> increasing the number of searchers.
>
> Those searchers should either be opened on a st
"jm" <[EMAIL PROTECTED]> wrote:
> You were right. As I have many indexes I keep a cache of the
> IndexWriters, and in some specific case (that cannot happen in my dev
> env) I was closing them without removing them from the cache. Somehow
> it was working before 2.1, and upgrading made the error cl
Hi, Nicolas,
Just a note: having one searcher is more than enough for ordinary usage,
even for some production sites. But I do see some throughput gain from
increasing the number of searchers.
Those searchers should either be opened on a static index, or be
synchronized to open/close cleanly, to avoid
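A sketch (not from this thread) of the "synchronized open/close" option: one shared IndexSearcher, swapped under a lock after index updates. Real code would also need to let in-flight searches finish before closing the old searcher; that bookkeeping is omitted here.

```java
import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;

public class SearcherHolder {
    private final Directory dir;
    private IndexSearcher current;

    public SearcherHolder(Directory dir) throws IOException {
        this.dir = dir;
        this.current = new IndexSearcher(dir);
    }

    // All requests share the same searcher instance.
    public synchronized IndexSearcher get() {
        return current;
    }

    // Call after the index has been updated to pick up the changes.
    public synchronized void reopen() throws IOException {
        IndexSearcher old = current;
        current = new IndexSearcher(dir);
        old.close();
    }
}
```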
Mike,
You were right. As I have many indexes I keep a cache of the
IndexWriters, and in some specific case (that cannot happen in my dev
env) I was closing them without removing them from the cache. Somehow
it was working before 2.1, and upgrading made the error clear.
thanks
javi
On 2/26/07, M
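The fix javi describes can be sketched as a cache whose close path always evicts first, so a later lookup can never hand back a closed writer (class and method names here are illustrative, not from the thread):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.index.IndexWriter;

public class WriterCache {
    private final Map writers = new HashMap();

    public synchronized void put(String name, IndexWriter writer) {
        writers.put(name, writer);
    }

    public synchronized IndexWriter get(String name) {
        return (IndexWriter) writers.get(name);
    }

    // Evict and close in a single step - closing a writer while it is
    // still reachable from the cache is exactly the bug described above.
    public synchronized void closeAndRemove(String name) throws IOException {
        IndexWriter writer = (IndexWriter) writers.remove(name);
        if (writer != null) {
            writer.close();
        }
    }
}
```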
On Monday 26 February 2007 at 12:38, Mohammad Norouzi wrote:
> No. Actually I don't close the searcher. I just set a flag to true or false.
> My considerations are:
> [please note I provided a ResultSet so I display the results page by page
> and don't load all the results in a list]
> 1- should I open a
If you can categorize the documents based on user permissions, that is
the route I would go.
For example users 1, 2, and 3 are allowed to search documents a and b.
In addition, user 1 can search documents c and d, while users 2 and 3
can search documents e and f. I would create 3 indexes: o
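One way to wire that layout up in Lucene 2.x is a MultiSearcher over the shared index plus the per-group index; the directory names and user-to-group mapping below are illustrative:

```java
import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MultiSearcher;
import org.apache.lucene.search.Searchable;
import org.apache.lucene.store.Directory;

public class PermissionSearch {
    // shared holds docs a+b (visible to everyone); extraUser1 holds c+d
    // (user 1 only); extraUsers23 holds e+f (users 2 and 3).
    public static MultiSearcher searcherFor(int userId, Directory shared,
            Directory extraUser1, Directory extraUsers23) throws IOException {
        Directory second = (userId == 1) ? extraUser1 : extraUsers23;
        Searchable[] parts = {
            new IndexSearcher(shared),
            new IndexSearcher(second),
        };
        return new MultiSearcher(parts);
    }
}
```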
"jm" <[EMAIL PROTECTED]> wrote:
> I have two processes running in parallel, each one adding and deleting
> to its own set of indexes. Since I upgraded to 2.1 I am getting a NPE
> at RAMDirectory.java line 207 in one of the processes.
>
> Line 207 is:
> RAMFile existing = (RAMFile)fileMap.g
OK, here's some more information on this:
Having done this search:
date1:{-99" + " TO " + Date + "} AND " +
"date2:{" + Date + " TO 99}";
the problem is that the range search here uses lexicographic rather than
numeric ranges. Is there a way to use numeric
range
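Lexicographic order only agrees with numeric order when every value is encoded at a fixed width. Before true numeric fields, the usual workaround was to zero-pad values at both index and query time (Lucene's NumberTools does this for longs, including negatives). A hand-rolled sketch for non-negative values:

```java
public class PadNumbers {
    // Left-pads a non-negative value with zeros so that String.compareTo
    // agrees with numeric order up to the chosen width.
    public static String pad(long value, int width) {
        if (value < 0) {
            throw new IllegalArgumentException("non-negative values only");
        }
        StringBuffer sb = new StringBuffer(Long.toString(value));
        while (sb.length() < width) {
            sb.insert(0, '0');
        }
        return sb.toString();
    }
}
```

With years padded this way, a range like {0007 TO 0123} behaves numerically; unpadded, "7" would sort after "123". For the negative (BC) years in this thread, an offset or a NumberTools-style encoding is needed so order is also preserved across the sign.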
Hi
I got it working before I saw your latest mail; the only problem is
that it doesn't look very efficient. This is my duplicate method; the
problem is that I have to enumerate through *every* term. This was worse
before because I was only interested
in terms that matched a particular field (
Hello all,
I have two processes running in parallel, each one adding and deleting
to its own set of indexes. Since I upgraded to 2.1 I am getting a NPE
at RAMDirectory.java line 207 in one of the processes.
Line 207 is:
RAMFile existing = (RAMFile)fileMap.get(name);
the stack trace is:
java
Greetings,
I'm creating an application that requires the indexing of millions of
documents on behalf of a large group of users, and was hoping to get an
opinion on whether I should use one index per user or one index per day.
My application will have to handle the following:
This might help.
http://www.catb.org/~esr/faqs/smart-questions.html
-Original Message-
From: Kainth, Sachin [mailto:[EMAIL PROTECTED]
Sent: Monday, February 26, 2007 10:17 AM
To: java-user@lucene.apache.org
Subject: Date Searches
Anybody?
>
Anybody?
> __
> From: Kainth, Sachin
> Sent: 26 February 2007 13:36
> To: 'java-user@lucene.apache.org'
> Subject: Date searches
>
> Hi all,
>
> I have an index in which dates are represented as ranges of two
> integers (there are two
You'll get no hits since there is no document in the index that has both
fields.
On 2/26/07, Mohammad Norouzi <[EMAIL PROTECTED]> wrote:
Thank you very much Erick.
I really didn't know about this. I had just thought we could not add two
different documents.
Now my question is: if I index the data i
Parse your date with the DateTools class: DateTools.stringToDate() to search
and DateTools.dateToString() to store into the index.
-Original Message-
From: 李寻欢晕菜了 [mailto:[EMAIL PROTECTED]
Sent: 26 February 2007 11:17
To: java-user@lucene.apache.org
Subject: how to query range of Date by given date st
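A minimal sketch of that round trip (Lucene 2.x DateTools; DAY resolution chosen for illustration):

```java
import java.text.ParseException;
import java.util.Date;
import org.apache.lucene.document.DateTools;

public class DateToolsDemo {
    // At index time: encode the Date as a sortable, index-friendly string
    // ("yyyyMMdd" at DAY resolution).
    public static String encode(Date d) {
        return DateTools.dateToString(d, DateTools.Resolution.DAY);
    }

    // At search time: turn a stored string back into a Date.
    public static Date decode(String s) throws ParseException {
        return DateTools.stringToDate(s);
    }
}
```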
Hi all,
I have an index in which dates are represented as ranges of two integers
(there are two fields, one for each integer). The two integers are years.
AD dates are represented as a positive integer and BC dates as a
negative one. There are three possible types of ranges. These are
listed below
Thank you very much Erick.
I really didn't know about this. I had just thought we could not add two
different documents.
Now my question is: if I index the data in the way you said, and I have a
query such as "(table1_name:john) AND (table2_address:Adam street)"
using MultiFieldQueryParser, what is th
Here's an excerpt from something I wrote to enumerate all the terms for a
field. I hacked out some of my tracing, so it may not even compile.
Basically, change the line "if (td.next())" to "while (td.next())" and every
time you stay in that loop for more than one cycle, you'll have duplicate
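Reconstructing the idea as described (Lucene 2.x API; the field name is illustrative): enumerate the field's terms, seek a TermDocs to each one, and count matches with a while loop; staying in that loop more than once means the value is duplicated.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;

public class DupByTermDocs {
    // Returns each value of 'field' that appears in more than one document.
    public static List duplicates(IndexReader reader, String field)
            throws IOException {
        List dups = new ArrayList();
        TermEnum te = reader.terms(new Term(field, ""));
        TermDocs td = reader.termDocs();
        try {
            do {
                Term t = te.term();
                if (t == null || !t.field().equals(field)) break;
                td.seek(t);
                int count = 0;
                while (td.next()) count++;  // "while", not "if"
                if (count > 1) dups.add(t.text());
            } while (te.next());
        } finally {
            td.close();
            te.close();
        }
        return dups;
    }
}
```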
Well, you need to do two things. First, make sure your dates are indexed in a
form that sorts lexically, which the format you're showing does. You might
want to look at the DateTools class for handy methods of transforming dates
into a Lucene-friendly format.
Then use the RangeQuery or RangeFilter class
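For example, with dates indexed via DateTools at DAY resolution ("yyyyMMdd"), an inclusive range query could look like this (Lucene 2.x; the field name is illustrative):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.RangeQuery;

public class DateRange {
    // Inclusive range over a field whose values sort lexically, e.g.
    // strings produced by DateTools.dateToString(date, Resolution.DAY).
    public static RangeQuery between(String field, String from, String to) {
        return new RangeQuery(new Term(field, from), new Term(field, to), true);
    }
}
```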
No, that's not what I was talking about. Remember that there's no
requirement in Lucene that each document have the same fields. So, you have
something like...
Document doc = new Document();
doc.add(new Field("table1_id", id, Field.Store.YES, Field.Index.UN_TOKENIZED));
doc.add(new Field("table1_name", name, Field.Store.YES, Field.Index.TOKENIZED));
writer.addDocument(doc);
Document doc = new Document
No. Actually I don't close the searcher. I just set a flag to true or false.
My considerations are:
[please note I provided a ResultSet so I display the results page by page and
don't load all the results in a list]
1- should I open a searcher for each user? and one reader for all user
session?
2- or
Hi,
Sorry, I don't see how I get access to TermEnums. So far I've created a
document per row; the first field holds the row id, then I have one
field per column, and I've checked the index has been created OK with some
search queries.
I now want to pass a column to check, and receive a list of all
Hello:
I have stored a Date in the index; how can I query for results in a given
range of dates?
For example:
I would like to find matching results in the range 2007-02-24 to 2007-02-25.
--
WoCal - life at your fingertips!
http://kofwang.wocal.cn