Does anybody have any experience with setting up a Lucene RAMDirectory index
for replication across multiple WebSphere servers and taking advantage of
WebSphere's built-in Object Cache? We are currently re-building/refreshing
from the source the entire RAMDirectory index on each WebSphere server
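For illustration only (not from the thread): if the index can be built once and written to a shared on-disk location, each server could copy it into memory on refresh instead of rebuilding from the source data. A minimal Lucene 2.x-era sketch; the path and class name below are invented.

import java.io.IOException;

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.RAMDirectory;

public class RamIndexLoader {
    // Copies a prebuilt on-disk index into memory on this server,
    // instead of rebuilding the RAMDirectory from the source data.
    public static IndexSearcher openInMemory(String sharedIndexPath) throws IOException {
        Directory onDisk = FSDirectory.getDirectory(sharedIndexPath, false);
        RAMDirectory ram = new RAMDirectory(onDisk); // reads every index file into RAM
        return new IndexSearcher(ram);
    }
}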
D it will still work.
>
>
> do you have an example of something that *isn't* working the way you want?
> ... if not i don't see what your problem is, all your tests are passing :)
>
>
Here's a little sample program (borrowed some code from Erick Erickson :)).
Whether I add as TOKENIZED or UN_TOKENIZED seems to make no difference in
the output. Is this what you'd expect?
- Philip
package com.test;
import java.io.IOException;
import java.util.HashSet;
import java.util.regex.*;
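The sample program is cut off here in the archive. For reference, a self-contained sketch (not the original code) of the TOKENIZED vs. UN_TOKENIZED difference with Lucene 2.x-era APIs; field names and values are invented:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;

public class TokenizedVsUntokenized {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);

        Document doc = new Document();
        // Same value indexed two ways: analyzed into terms vs. kept as one term.
        doc.add(new Field("tok",   "Wall Street Journal", Field.Store.YES, Field.Index.TOKENIZED));
        doc.add(new Field("untok", "Wall Street Journal", Field.Store.YES, Field.Index.UN_TOKENIZED));
        writer.addDocument(doc);
        writer.close();

        IndexSearcher searcher = new IndexSearcher(dir);

        // UN_TOKENIZED: the whole string, original case, is a single term.
        Hits exact = searcher.search(new TermQuery(new Term("untok", "Wall Street Journal")));
        System.out.println("untok exact: " + exact.length());    // 1

        // TOKENIZED via StandardAnalyzer: lowercased individual terms.
        Hits word = searcher.search(new TermQuery(new Term("tok", "journal")));
        System.out.println("tok single word: " + word.length()); // 1

        searcher.close();
    }
}

With UN_TOKENIZED the whole value is stored as a single case-sensitive term, so a TermQuery must reproduce it exactly; with TOKENIZED the StandardAnalyzer lowercases and splits it into individual terms.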
So, if I do as you suggest below (using PerFieldAnalyzerWrapper with
StandardAnalyzer), then I still need to enclose the phrases (keywords with
spaces) in quotes when I issue the search, and they are only returned
in the results if the case is identical to how it was added? (This seems to
be what
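For reference, this is roughly how a PerFieldAnalyzerWrapper is wired up. The sketch assumes (my choice, not necessarily what was suggested in the thread) a KeywordAnalyzer on the exact-match field and a StandardAnalyzer everywhere else; field and class names are invented:

import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;

public class PerFieldSetup {
    public static Query buildQuery(String userInput) throws ParseException {
        // StandardAnalyzer for ordinary text fields, KeywordAnalyzer for the
        // field that must match the stored keyword exactly (no tokenizing,
        // no lowercasing).
        PerFieldAnalyzerWrapper analyzer = new PerFieldAnalyzerWrapper(new StandardAnalyzer());
        analyzer.addAnalyzer("keyword", new KeywordAnalyzer());

        QueryParser parser = new QueryParser("contents", analyzer);
        return parser.parse(userInput);
    }
}

Note that QueryParser splits on unquoted whitespace at the syntax level before any analyzer runs, so a multi-word keyword still needs quotes in the query string (and, with KeywordAnalyzer, must match the indexed case) unless the query for that field is built programmatically.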
Yeah, they are more complex than the "exactish" match -- basically, there are
more fields involved -- combined sometimes with AND and sometimes with OR,
and sometimes negated field values, sometimes groupings, etc. These other
field values are all single words (no spaces), and a search might invo
Thanks for your input. I'm sure I could do as you suggest (and maybe that
will end up being my best option), but I had hoped to use a string for
creating the query object, particularly as some of my queries are a bit
complex.
Thanks.
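For what it's worth, QueryParser does handle query strings with AND/OR, negated field values, and groupings; a small sketch with invented field names:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;

public class StringQueryExample {
    public static Query parse() throws ParseException {
        QueryParser parser = new QueryParser("contents", new StandardAnalyzer());
        // AND/OR, a negated field value, and a grouping, all in one string.
        return parser.parse("status:active AND (type:report OR type:memo) -owner:jsmith");
    }
}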
Chris Hostetter wrote:
>
>
> I haven't really been followi
h("iamunderscored", 0);
> scratch.doSearch("underscored", 0);
>
> scratch.doSearchPhrase("this is the test text", 1);
> scratch.doSearchPhrase("text with hyphenated-iamhyphenated",
> 1);
> s
>> and that combines the functionality of LetterTokenizer,
>> LowerCaseTokenizer, WhitespaceTokenizer, StopFilter into
>> a single efficient multi-purpose class.
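A combined tokenize/lowercase/stop-word pipeline like the one described can also be assembled by hand. A sketch (class name invented, and the choice of StandardTokenizer is mine), which is essentially what StandardAnalyzer does internally:

import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class PipelineAnalyzer extends Analyzer {
    // Tokenize, normalize (e.g. strip trailing 's and dots in acronyms),
    // lowercase, then remove English stop words.
    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream stream = new StandardTokenizer(reader);
        stream = new StandardFilter(stream);
        stream = new LowerCaseFilter(stream);
        stream = new StopFilter(stream, StopAnalyzer.ENGLISH_STOP_WORDS);
        return stream;
    }
}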
arching on other fields.) What do you think?
Philip
Erick Erickson wrote:
>
> OK, I've gotta ask. Have you examined your index with Luke to see if what
> you *think* is in the index actually *is*???
>
> Erick
>
> On 9/1/06, Philip Brown <[EMAIL PROTECTED]> wro
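Besides Luke, a quick programmatic way to see exactly which terms ended up in the index (not from the thread; class name invented):

import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.TermEnum;
import org.apache.lucene.store.Directory;

public class TermDump {
    // Print every indexed term so you can see what is actually in the index
    // (the same information Luke shows graphically).
    public static void dumpTerms(Directory dir) throws IOException {
        IndexReader reader = IndexReader.open(dir);
        TermEnum terms = reader.terms();
        while (terms.next()) {
            System.out.println(terms.term().field() + ":" + terms.term().text());
        }
        terms.close();
        reader.close();
    }
}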
the definition to the TOKEN section, but below that you will
> find the grammar...you need to add to the grammar. If you look at
> how the existing tokens are done you will prob see what you should do. If
> not, my machine should be back up tomorrow...
>
> - Mark
>
> On 9/1/06, Philip Brown
Well, I tried that, and it still doesn't seem to work. I would be happy to
zip up the new files, so you can see what I'm using -- maybe you can get it
to work. The first time, I tried building the documents without quotes
surrounding each phrase. Then, I retried by enclosing every phrase within
Thanks, but I don't "think" I need that. But I'm curious: how will it know it's
a phrase if it's not enclosed in quotes? Won't all its terms be treated
separately then?
Philip
Mark Miller-5 wrote:
>
> One more tip...if you would like to be able to search phrases without
> putting in the quotes
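Mark's tip itself is cut off above. One way (an illustration only, not necessarily what he meant) to avoid making users type quotes is to build the query for the keyword field programmatically, assuming that field was indexed UN_TOKENIZED; the field and class names are invented:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class NoQuotesNeeded {
    // If the keyword field was indexed UN_TOKENIZED, the whole user string
    // is one term, so a TermQuery matches it without any quoting or parsing.
    public static Query exactKeywordQuery(String userInput) {
        return new TermQuery(new Term("keyword", userInput));
    }
}

Nothing lowercases here, so the input must match the indexed value's case exactly.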
Do you mean StandardTokenizer.jj (org.apache.lucene.analysis.standard)? I'm
not seeing StandardAnalyzer.jj in the Lucene source download.
Mark Miller-5 wrote:
>
> Philip Brown w
Hi,
After running some tests using the StandardAnalyzer, and getting 0 results
from the search, I believe I need a special Tokenizer/Analyzer. Does
anybody have something that parses like the following:
- doesn't parse apart phrases (in quotes)
- doesn't parse/separate hyphenated or underscored
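For the hyphen/underscore part of that list, a whitespace-based analyzer is one possibility; a sketch (class name invented):

import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

public class HyphenFriendlyAnalyzer extends Analyzer {
    // WhitespaceTokenizer splits on whitespace only, so "foo-bar" and
    // "foo_bar" survive as single terms; LowerCaseFilter keeps matching
    // case-insensitive. Keeping a whole multi-word phrase as one term is
    // not an analysis setting, though: index that field UN_TOKENIZED
    // (or use KeywordAnalyzer for it).
    public TokenStream tokenStream(String fieldName, Reader reader) {
        return new LowerCaseFilter(new WhitespaceTokenizer(reader));
    }
}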
ently,
there is more to it than that. Also, this doesn't happen consistently --
just occasionally.
Thanks.
karl wettin-3 wrote:
>
> On Thu, 2006-08-31 at 15:24 -0700, Philip Brown wrote:
>>
>> I'm getting the following error trying to instantiate an IndexModifier
I'm getting the following error trying to instantiate an IndexModifier on a
RAMDirectory index:
java.io.IOException: Lock obtain timed out:
[EMAIL PROTECTED]
at org.apache.lucene.store.Lock.obtain(Lock.java(Compiled Code))
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:2
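The usual causes of this lock timeout are a second writer/modifier opened while another is still open on the same directory, or a stale write lock left behind by a crashed process. A sketch (helper name invented) of checking for and clearing the lock before opening an IndexModifier; unlocking is only safe if no other writer is actually running:

import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexModifier;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;

public class ModifierHelper {
    // Only one writer/modifier may be open on a directory at a time,
    // so close the previous one before opening another.
    public static IndexModifier open(Directory dir) throws IOException {
        if (IndexReader.isLocked(dir)) {
            IndexReader.unlock(dir); // clear a stale lock only if no other writer is active
        }
        return new IndexModifier(dir, new StandardAnalyzer(), false);
    }
}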