Seems like Lucene 2.0.0 created a file named "segments". In
2.2.0 the new segments file follows the naming convention
"segments_N" (e.g. "segments_3"). Our codebase had some logic that depended on
this file being named consistently.
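For anyone hitting the same thing, here's a minimal sketch (plain Java, with a hypothetical index path) of detecting the segments files by prefix instead of hardcoding an exact name; the old "segments" file, the newer "segments_N" files, and "segments.gen" all share the prefix:

    import java.io.File;
    import java.io.FilenameFilter;

    public class FindSegmentsFiles {
        public static void main(String[] args) {
            // Hypothetical index location; point this at your own index directory.
            File indexDir = new File(System.getProperty("user.home"), "index");

            // Match anything starting with "segments": the 2.0-style "segments"
            // file, the newer "segments_N" files, and "segments.gen".
            String[] segmentFiles = indexDir.list(new FilenameFilter() {
                public boolean accept(File dir, String name) {
                    return name.startsWith("segments");
                }
            });

            if (segmentFiles != null) {
                for (int i = 0; i < segmentFiles.length; i++) {
                    System.out.println(segmentFiles[i]);
                }
            }
        }
    }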
It seems like the bug was on my end, my apologies.
-M
On 6/21/07, Michael Stoppelman <[EMA
Hi,
I confirm that it is working correctly in lucene 2.2.0 and not working in
2.1.0.
Thanks a lot,
Tanya
-Original Message-
From: Michael Busch [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 21, 2007 7:25 PM
To: java-user@lucene.apache.org
Subject: Re: Problem using RAMDirectory as a buffer
Hi,
I am using lucene 2.1.0. I'll try to download 2.2 and test it.
Thanks,
Tanya
-Original Message-
From: Michael Busch [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 21, 2007 7:25 PM
To: java-user@lucene.apache.org
Subject: Re: Problem using RAMDirectory as a buffer
Tanya Levshina wrote:
Hi all,
My index is being zeroed out by the new lucene core jar.
Here's the deal:
I've got an old index from lucene-core-2.0.0 jar. I start up my service with
the new lucene 2.2.0 jar and everything
is fine. When I add a document to the index, everything is still fine.
Yet when I shut down my
Tanya Levshina wrote:
> Nope, doesn't work.
> I've tried:
>
> ramWriter.addDocument(doc);
> ramWriter.flush();
> ramWriter.close();
> fsWriter.addIndexes(new Directory[] {ramDir,});
>
>
>
> Any other suggestions?
>
Are you sure? That's strange, I just took your code and tried it out
m
Nope, doesn't work.
I've tried:
ramWriter.addDocument(doc);
ramWriter.flush();
ramWriter.close();
fsWriter.addIndexes(new Directory[] {ramDir,});
Any other suggestions?
Thanks,
Tanya
PS As I said, this is just a trivial test I created because it didn't work
for a more complicated problem.
Hi,
This example is taken almost exactly from "Lucene in Action" page #51.
I would like to use RAMDirectory to add files and then flush it to disk.
I've tried the most trivial example and it doesn't work.
So obviously I am doing something wrong.
Tanya
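For reference, here is a minimal, self-contained sketch of the pattern being attempted, written against the 2.x API; the index path, field name, and analyzer choice are placeholders:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.RAMDirectory;

    public class RamBufferExample {
        public static void main(String[] args) throws Exception {
            // "/tmp/index" and the "contents" field are placeholders.
            RAMDirectory ramDir = new RAMDirectory();
            IndexWriter fsWriter = new IndexWriter("/tmp/index", new StandardAnalyzer(), true);
            IndexWriter ramWriter = new IndexWriter(ramDir, new StandardAnalyzer(), true);

            // Buffer a document in the RAM index first.
            Document doc = new Document();
            doc.add(new Field("contents", "hello world",
                              Field.Store.YES, Field.Index.TOKENIZED));
            ramWriter.addDocument(doc);

            // Close the RAM writer so its segments are fully written
            // before merging them into the on-disk index.
            ramWriter.close();

            fsWriter.addIndexes(new Directory[] { ramDir });
            fsWriter.close();   // commit the merged segments to disk
        }
    }

The key point is that the RAM-side writer is closed before addIndexes, and the FS-side writer is closed afterwards so the merge is committed to disk.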
-Original Message-
From: Daniel Noll
Daniel Noll wrote:
> On Friday 22 June 2007 09:34:44 Tanya Levshina wrote:
>> ramWriter.addDocument(doc);
>>
>> fsWriter.addIndexes(new Directory[] {ramDir,});
>
> As IndexWriter already does this internally, I'm not exactly sure why you're
> trying to implement it again on the outside.
On Friday 22 June 2007 09:34:44 Tanya Levshina wrote:
> ramWriter.addDocument(doc);
>
> fsWriter.addIndexes(new Directory[] {ramDir,});
As IndexWriter already does this internally, I'm not exactly sure why you're
trying to implement it again on the outside.
Daniel
--
Daniel N
Hi,
I am trying to use RAMDirectory as a buffer and am having some problems. I
create indexes using FSDirectory directly, and the index directory contains the
following files:
bash-3.00$ ls ~/index/
_0.cfs segments_3 segments.gen
When I am trying to use RAMDirectory as a buffer and then add
Hi Mark,
Good summary. I was running some timings earlier and my results echo
your findings.
> I am currently trying to think of some possible hybrid approach to
> highlighting...
I was thinking along the lines of wrapping some core classes such as
IndexReader to somehow observe the query matching
Results of my tests :
The new SpanScorer is just about the same speed as the old Highlighter's
QueryScorer if the Query contains no position-sensitive elements. This
is only the case, however, if the CachingTokenFilter (from the analysis
package) is changed to use an ArrayList instead of a LinkedList.
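For context, the QueryScorer baseline being timed looks roughly like the following; this is a sketch against the 2.x contrib highlighter, and the field name, query string, and analyzer are placeholders:

    import java.io.StringReader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.highlight.Highlighter;
    import org.apache.lucene.search.highlight.QueryScorer;

    public class HighlightSketch {
        public static void main(String[] args) throws Exception {
            String text = "the quick brown fox jumps over the lazy dog";
            Analyzer analyzer = new StandardAnalyzer();

            Query query = new QueryParser("contents", analyzer).parse("quick fox");
            Highlighter highlighter = new Highlighter(new QueryScorer(query));

            // Re-analyze the stored text and pick the best-scoring fragment.
            TokenStream tokens = analyzer.tokenStream("contents", new StringReader(text));
            System.out.println(highlighter.getBestFragment(tokens, text));
        }
    }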
Hi all,
I've got a document that contains a bunch of separate posts about one topic
(a message board thread); all the posts get concatenated together in the
indexed Lucene document.
I would like to create highlights and know where the highlight came from,
meaning if the text fragment came from
While we're considering highlighter performance there was some discussion of
this around another implementation here:
http://issues.apache.org/jira/browse/LUCENE-644
Ronnie Kolehmainen's implementation was proven faster than the current contrib
highlighter but was almost certainly missing some
You could also just use TermEnum, something like

    TermEnum termEnum = this.reader.getIndexReader().terms(new Term(field, ""));
    Term term = termEnum.term();
    while ((term != null) && term.field().equals(field)) {
        System.out.println(term.text());
        termEnum.next();
        term = termEnum.term();
    }
    termEnum.close();
Hi Michael, yes you're right, this was a bug in 2.1 that
has since been fixed (issue LUCENE-813) - so this should work for you
with the latest Lucene version (2.2).
Regards, Doron
Michael Böckling wrote:
> Hi folks!
>
> I'm having a little problem with a specific query that does not
> seem to get
> parsed correctly.
Hi Sawan,
Sawan Sharma wrote:
> Now, the problem occurred when I passed multiple words in the term query,
> e.g.
> QueryFilter filter = new QueryFilter(new TermQuery(new Term(FieldName,
> FieldValue)));
>
> where the field name and field value are obtained dynamically;
> here we use example values.
>
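The rest of that reply is cut off here, but the usual explanation is that a TermQuery matches exactly one indexed token, so a multi-word value will not match against a tokenized field. One common alternative (a sketch only, not necessarily what the original reply suggested; the field name and words are placeholders) is to build a PhraseQuery over the analyzed words and filter on that instead:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.PhraseQuery;
    import org.apache.lucene.search.QueryFilter;

    // ...
    String fieldName = "contents";
    PhraseQuery phrase = new PhraseQuery();
    phrase.add(new Term(fieldName, "hello"));
    phrase.add(new Term(fieldName, "world"));
    QueryFilter filter = new QueryFilter(phrase);   // QueryFilter accepts any Query

The words added to the PhraseQuery must match the tokens the analyzer produced at index time (typically lowercased).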
Mahdi Rahimi wrote:
> Hi.
>
> How can I access JavaCC??
>
> Thanks
https://javacc.dev.java.net/
--
Steve Rowe
Center for Natural Language Processing
http://www.cnlp.org/tech/lucene.asp
Hi.
How can I access JavaCC??
Thanks
// "fieldName" and analyzer are placeholders; the 2.x constructor needs
// the default field plus an Analyzer.
QueryParser parser = new QueryParser("fieldName", analyzer);
parser.setAllowLeadingWildcard(true);
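Used end to end (with a hypothetical index path, and the caveat that a bare "fieldName:*" expands over every term in that field, so it can be expensive on a large index), that looks something like:

    Query q = parser.parse("fieldName:*");   // no longer a lexical error
    IndexSearcher searcher = new IndexSearcher("/path/to/index");
    Hits hits = searcher.search(q);
    System.out.println(hits.length() + " matching documents");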
-Original Message-
From: Martin Spamer [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 21, 2007 7:06 AM
To: java-user@lucene.apache.org
Subject: All keys for a field
I need to return all of the keys for a
Hi folks!
I'm having a little problem with a specific query that does not seem to get
parsed correctly. If a term has both a leading and a trailing * wildcard
character, I get no results in Lucene 2.1.0. It works when I omit one of the
stars, and I activated the option to allow leading wildcards.
I need to return all of the keys for a certain field, essentially
"fieldName:*". This causes a ParseException / lexical error:
Encountered: "*" (42), after : ""
I understand why this fails; the wildcard restriction is there to keep the
results manageable. In my case the number of results will always be manageable.
Thanks,
I found it. I wasn't aware of both source trees.
Kévin.
-Original Message-
From: Doron Cohen <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Wednesday, June 20, 2007, 23:42:17
Subject: Re: The localized Languages.
Hi Kevin, are you looking for the sources u