[Sorry for the long delay in replying; we are having some issues with our
mail server...]
Thanks for your comment. Yes, it would make sense if the log files were not
so big. In fact, I'm only indexing a subset of the log information.
Because I store the information in Lucene, it is easier and f
The problem is in StandardTokenizer, so use an Analyzer with this method:
public TokenStream tokenStream(String fieldName, Reader reader) {
    TokenStream result = new LowerCaseTokenizer(reader);
    result = new StopFilter(result, stopSet);
    return result;
}
if you need everything standard analyzer does
Fr
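For context, a self-contained version of the analyzer that tokenStream method belongs to might look like the sketch below. It assumes stock Lucene 2.x analysis classes and the bundled English stop-word list; the class name is invented for illustration.

```java
import java.io.Reader;
import java.util.Set;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseTokenizer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;

// Minimal custom Analyzer: lower-cases on letter boundaries and
// removes English stop words, skipping StandardTokenizer entirely.
public class LowerCaseStopAnalyzer extends Analyzer {
    private final Set stopSet =
        StopFilter.makeStopSet(StopAnalyzer.ENGLISH_STOP_WORDS);

    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream result = new LowerCaseTokenizer(reader);
        result = new StopFilter(result, stopSet);
        return result;
    }
}
```

The point of skipping StandardTokenizer/StandardFilter is that this chain is much cheaper, which matters when indexing large log files.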
Rajesh parab wrote:
I am talking about transaction support in Lucene only. If there is a failure
during an insert/update/delete of a document inside the index, there is no way
to roll back the operation, and this will leave the index in an inconsistent state.
OK, I see. Then you should also look at
Last I looked at this, I thought that mapping transactions
onto Lucene segments was the way to go.
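As a sketch of that segment-mapping idea: within a single IndexWriter session, nothing is visible to readers until the writer publishes its changes, so a session can act as a coarse transaction. The helper below is invented for illustration and assumes a Lucene release with the autoCommit=false constructor and IndexWriter.abort() (added after this thread, and later renamed rollback()).

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;

// Sketch: treat one IndexWriter session as a "transaction".
// With autoCommit=false, nothing becomes visible to readers until
// close(); on failure, abort() discards every pending add/delete.
public class TransactionSketch {
    public static void indexAllOrNothing(Directory dir, String[] texts)
            throws Exception {
        IndexWriter writer =
            new IndexWriter(dir, false, new StandardAnalyzer(), true);
        try {
            for (int i = 0; i < texts.length; i++) {
                Document doc = new Document();
                doc.add(new Field("body", texts[i],
                        Field.Store.YES, Field.Index.TOKENIZED));
                writer.addDocument(doc);
            }
            writer.close();   // "commit": publish the new segments
        } catch (Exception e) {
            writer.abort();   // "rollback": pending changes are discarded
            throw e;
        }
    }
}
```

This is all-or-nothing only per writer session; it does not give isolation between concurrent writers.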
On Nov 14, 2006, at 11:19 AM, Rajesh parab wrote:
Hi Mike,
Thanks for the feedback.
I am talking about transaction support in Lucene only. If there is
a failure during insert/update/delete of d
Hi Mike,
Thanks for the feedback.
I am talking about transaction support in Lucene only. If there is a failure
during an insert/update/delete of a document inside the index, there is no way
to roll back the operation, and this will leave the index in an inconsistent state.
I read about Compass providin
Rajesh parab wrote:
Does anyone know if there is any plan in adding transaction support in Lucene?
I don't know of specific plans.
This has been discussed before on user & dev lists. I know the
Compass project builds transactional support on top of Lucene.
Are you asking for transaction sup
Hi,
Does anyone know if there is any plan in adding transaction support in Lucene?
Regards,
Rajesh
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
Hi Erik,
> SpanFirstQuery is what you're after.
thanks for this hint (@Erick: thanks for the good explanation of my prob),
I read the chapter on SpanFirstQuery in LIA, but what I don't
understand is: how do I do a "phrase" SpanFirstQuery?
I found a message with example code (
http:/
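For the record, one common recipe for a "phrase" SpanFirstQuery is to build the phrase as an in-order, zero-slop SpanNearQuery over SpanTermQuery clauses, then wrap it in SpanFirstQuery so the match must sit at the very start of the field. The helper class and field name below are made up, and the terms are assumed to already match how the field was analyzed.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanFirstQuery;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

// "Phrase at start of field": exact-order, zero-slop span near query,
// constrained so the span ends within the first words.length positions.
public class PhraseAtStart {
    public static SpanFirstQuery build(String field, String[] words) {
        SpanQuery[] clauses = new SpanQuery[words.length];
        for (int i = 0; i < words.length; i++) {
            clauses[i] = new SpanTermQuery(new Term(field, words[i]));
        }
        SpanNearQuery phrase = new SpanNearQuery(clauses, 0, true);
        // the span's end position must be <= words.length,
        // i.e. the phrase occupies the first positions of the field
        return new SpanFirstQuery(phrase, words.length);
    }
}
```

With this, build("title", {"action", "and"}) matches "action and knowledge" but not "knowledge action and more".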
That was my first thought as well, but it looks like APOSTROPHE is
already the one that I want. As you can see, from StandardAnalyzer.jj
---
TOKEN : {                          // token patterns
  // basic word: a sequence of digits & letters
  <ALPHANUM: (<LETTER>|<DIGIT>|<KOREAN>)+ >
  // internal ap
What Erik said ...
But I thought I'd add that I was pleasantly surprised by how fast the
regex contribution went when creating a filter. And you can cache the
filters. Don't be afraid.
But in this case I don't think that would help either. Your basic problem is
probably that you're indexin
The apostrophe is recognized as part of a word; StandardAnalyzer is mostly
English-oriented.
One way around it is to swap apostrophes: replace the "normal" one with an unusual character.
StandardAnalyzer.java line 40-44
APOSTROPHE:
token = jj_consume_token(APOSTROPHE);
-
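An alternative to editing the generated parser is to normalize tokens after tokenization with a custom TokenFilter. The sketch below is an invented class using the Lucene 2.x Token/TokenFilter API: it simply strips apostrophes so that "O'Reilly" and "OReilly" end up as the same term. The same filter must be applied at both index and query time.

```java
import java.io.IOException;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

// Removes apostrophes from each token so both apostrophized and
// plain spellings index (and search) identically.
public class StripApostropheFilter extends TokenFilter {
    public StripApostropheFilter(TokenStream in) {
        super(in);
    }

    public Token next() throws IOException {
        Token t = input.next();
        if (t == null) return null;
        String text = t.termText();
        if (text.indexOf('\'') < 0) return t;   // fast path: untouched
        String stripped = text.replaceAll("'", "");
        return new Token(stripped, t.startOffset(), t.endOffset(), t.type());
    }
}
```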
Martin,
SpanFirstQuery is what you're after.
Erik
On Nov 14, 2006, at 8:32 AM, Martin Braun wrote:
hi,
I would like to provide an exact "PrefixField search", i.e. a search for
exactly the first words in a field.
I think I can't use a PrefixQuery because it would also find
substr
hi,
I would like to provide an exact "PrefixField search", i.e. a search for
exactly the first words in a field.
I think I can't use a PrefixQuery because it would also find substrings
inside the field, e.g.
action* would find titles like "Action and knowledge" but also (that's
what I don't want it
Well, if none of the regular analyzers meet your needs, you can always roll
your own. There is an example of a SynonymAnalyzer in Lucene in Action that
you can use as a model. It's not very difficult (although a bit arcane).
You're right, WhitespaceAnalyzer doesn't normalize case. But it should wo
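A minimal roll-your-own analyzer for the "Jakarta:" situation in this thread: keep WhitespaceTokenizer, which leaves punctuation like the trailing colon intact, and add the lower-casing that plain WhitespaceAnalyzer lacks. The class name is invented.

```java
import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

// Splits on whitespace only (so "Jakarta:" keeps its colon) and
// lower-cases terms, making matching case-insensitive.
public class LowerCaseWhitespaceAnalyzer extends Analyzer {
    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream result = new WhitespaceTokenizer(reader);
        result = new LowerCaseFilter(result);
        return result;
    }
}
```

The same analyzer must be used at both index and query time, or "Jakarta:" in the query and in the index will never line up.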
Hi all,
I have a specific string query like "Jakarta:". How do I search for that? I am
using StandardAnalyzer, and it seems it is stripping the ":" and
simply searching for "Jakarta".
I have used WhitespaceAnalyzer as well, and it's working fine for ":", but I
think it has some other limitations. The
As Mike already said, I had also experienced the same problem of
sharing the index across NFS, but now I am testing Lucene with the
lockless-commits patch, and so far I have not hit any problem, which I
liked. But I am surely in favour of havi
On Nov 14, 2006, at 4:08 AM, Heikki Doeleman wrote:
thanks for pointing out these, however neither seems to do exactly
what I want, i.e. highlight a phrase when a phrase search was done.
A technique I've employed for a client is to convert a general Query
object into a SpanQuery, and creat
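The conversion step is straightforward for the common case of a plain PhraseQuery: rebuild it from SpanTermQuery clauses so getSpans() can report exact phrase positions for highlighting. A general Query-to-SpanQuery rewrite also has to handle boolean, wildcard, and other query types; the invented helper below covers phrases only.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

// Converts a PhraseQuery into an equivalent SpanNearQuery so span
// positions can drive phrase-aware highlighting.
public class PhraseToSpan {
    public static SpanQuery convert(PhraseQuery phrase) {
        Term[] terms = phrase.getTerms();
        SpanQuery[] clauses = new SpanQuery[terms.length];
        for (int i = 0; i < terms.length; i++) {
            clauses[i] = new SpanTermQuery(terms[i]);
        }
        // carry the phrase's slop over; require exact order only
        // when the phrase itself demands it (slop 0)
        return new SpanNearQuery(clauses, phrase.getSlop(),
                                 phrase.getSlop() == 0);
    }
}
```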
On Mon, 13 Nov 2006 at 12:02 -0500, Michael McCandless wrote:
> The quick answer is: NFS is still problematic in Lucene 2.0.
>
> The longer answer is: we'd like to fix this, but it's not fully fixed
> yet. You can see here:
>
> http://issues.apache.org/jira/browse/LUCENE-673
>
> for gory det
On Nov 14, 2006, at 11:18 AM, Martin Braun wrote:
Hi Erik,
SpanFirstQuery is what you're after.
thanks for this hint (@Erick: thanks for the good explanation of my prob),
I read the chapter on SpanFirstQuery in LIA, but what I don't
understand is: how do I do a "Phrase" Span
Hi Mark,
thanks for pointing out these, however neither seems to do exactly what I
want, i.e. highlight a phrase when a phrase search was done.
All of these highlighting solutions seem concerned with selecting "the
best bits" of a document, along with highlighting some parts thereof. To
me thi