Hi,
We have installed a Solr/Jetty server on a Linux box. Occasionally, the Solr
server will use 100% CPU time and several GB of RAM, and stops responding to
requests (which causes the Apache server to crash).
Any suggestions for how to solve this problem? Change the Jetty or Solr
configuration?
Oh, thanks! I didn't know about CheckIndex before. I used it to fix my broken
index, and it is OK now.
I use NFS to share my index, and I have not changed the LockFactory.
How can I avoid this problem in the future, rather than only fixing the index
after it suddenly breaks?
I was trying to play with this. Am I correct in assuming that this isn't going
to work with StandardTokenizer (since it appears to strip angle brackets,
among other things)? Does HTMLStripCharFilter expect a WhitespaceTokenizer or
a CharTokenizer or ??
If I want to get rid of punctu
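For what it's worth, HTMLStripCharFilter is a CharFilter, so it rewrites the character stream before any tokenizer runs; the tokenizer never sees the markup, regardless of which tokenizer you pick. A minimal sketch of wiring it up (assuming the Lucene 4.0 API; the class name and input string here are just illustrations):

```java
import java.io.Reader;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.charfilter.HTMLStripCharFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class HtmlStripSketch {
  public static void main(String[] args) throws Exception {
    // strip the markup first, then tokenize what is left
    Reader stripped = new HTMLStripCharFilter(new StringReader("<b>hello</b> world"));
    TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_40, stripped);
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();                      // required before the first incrementToken()
    while (ts.incrementToken()) {
      System.out.println(term);     // print each token
    }
    ts.end();
    ts.close();
  }
}
```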
I was doing some tokenizer/filter analysis, attempting to fix a bug I have in
highlighting under 4.0. I was running the displayTokensWithFullDetails code
from LIA2, and I would get an exception with a bad index value of -1.
I fixed the problem by calling reset() immediately after creating my
Token
thank you :)
On 11/1/2012 4:45 PM, Robert Muir wrote:
this is intentional (since you have a bug in your code).
you need to call reset(): see the tokenstream contract, step 2:
http://lucene.apache.org/core/4_0_0/core/org/apache/lucene/analysis/TokenStream.html
On Thu, Nov 1, 2012 at 7:31 PM, I
this is intentional (since you have a bug in your code).
you need to call reset(): see the tokenstream contract, step 2:
http://lucene.apache.org/core/4_0_0/core/org/apache/lucene/analysis/TokenStream.html
On Thu, Nov 1, 2012 at 7:31 PM, Igal @ getRailo.org wrote:
> I'm trying to write a very si
I'm trying to write a very simple method to show the different tokens
that come out of a tokenizer. when I call WhitespaceTokenizer's (or
LetterTokenizer's) incrementToken() method though I get an
ArrayIndexOutOfBoundsException (see below)
any ideas?
p.s. if I use StandardTokenizer it works
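To make the contract Robert points at concrete, here is a minimal sketch (assuming the Lucene 4.0 API; the class name and input text are illustrative). The missing reset() is what produces the ArrayIndexOutOfBoundsException with the character-based tokenizers:

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class TokenDump {
  public static void main(String[] args) throws IOException {
    TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_40,
        new StringReader("show the different tokens"));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();                      // step 2 of the contract: skipping this
                                     // is what triggers the AIOOBE
    while (ts.incrementToken()) {
      System.out.println(term.toString());
    }
    ts.end();                        // step 4: after the last token
    ts.close();                      // step 5: release resources
  }
}
```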
That would be great!
Hmm which version are you looking at? In 4.0 it currently says this:
/**
* Close this ReferenceManager to future {@link #acquire() acquiring}. Any
* references that were previously {@link #acquire() acquired} won't be
* affected, and they should still be {@link #r
hey michael,
On Thu, Nov 1, 2012 at 11:30 PM, Michael-O <1983-01...@gmx.net> wrote:
> Thanks for the quick response. Any chance this could be clearer in the
> JavaDoc of this class?
sure thing. do you want to open an issue / create a patch? I am happy to
commit it.
simon
>
>> Call it when you kno
Thanks for the quick response. Any chance this could be clearer in the JavaDoc
of this class?
> Call it when you know you'll no longer need to call .acquire on it
> anymore (typically this would be when your webapp is being destroyed).
>
> Really all this does is drop its internal reference to t
Call it when you know you'll no longer need to call .acquire on it
anymore (typically this would be when your webapp is being destroyed).
Really all this does is drop its internal reference to the current searcher.
Any in-flight searches still outstanding will run fine (and they
should call .rele
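A sketch of that lifecycle using SearcherManager, one concrete ReferenceManager subclass (assumed Lucene 4.0 API; the index path is a placeholder):

```java
import java.io.File;

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherFactory;
import org.apache.lucene.search.SearcherManager;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SearcherLifecycle {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(new File("/path/to/index")); // placeholder path
    SearcherManager mgr = new SearcherManager(dir, new SearcherFactory());

    // per request: acquire, search, release
    IndexSearcher searcher = mgr.acquire();
    try {
      // run queries against searcher here
    } finally {
      mgr.release(searcher);  // always pair release() with acquire()
    }

    // once, at shutdown (e.g. in ServletContextListener.contextDestroyed):
    mgr.close();              // no further acquire() calls after this
  }
}
```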
Hi folks,
while I do understand the workflow with this class, I do not understand when to
call close(). The JavaDoc is not crystal clear on that. Am I supposed to call
this method after release() or when my webapp is destroyed?
Thanks,
Michael
-
hi Steve,
you are correct. I am using StandardTokenizer. I will look into the
WhitespaceTokenizer and hopefully figure it out.
thank you,
Igal
On 11/1/2012 1:24 PM, Steve Rowe wrote:
Hi Igal,
You didn't say you were using StandardTokenizer, but assuming you are, right
now StandardToke
Hi Igal,
You didn't say you were using StandardTokenizer, but assuming you are, right
now StandardTokenizer throws away punctuation, so no following filters will
see it.
If StandardTokenizer were modified to also output currently non-tokenized
punctuation as tokens, then you could use a Filt
thank you. I found it at
org.apache.lucene.analysis.util.FilteringTokenFilter
Igal
On 11/1/2012 12:51 PM, Uwe Schindler wrote:
The filter is still there. In Lucene 4.0 all tokenstream implementations are in
a separate module, no longer in Lucene core. The package names of most analysis
We are still having the issue where ComplexPhraseQueryParser fails on
quoted expressions that include stop words. Does the original
developer of this class still contribute to Lucene?
On Fri, Oct 26, 2012 at 3:37 PM, Brandon Mintern wrote:
> We recently switched from QueryParser to ComplexPhraseQ
The filter is still there. In Lucene 4.0 all tokenstream implementations are in
a separate module, no longer in Lucene core. The package names of most analysis
components changed, too.
Use your IDE to find it or ask Google...
Uwe
"Igal @ getRailo.org" schrieb:
>hi,
>
>I'm trying to migrate
hi,
I'm trying to migrate to Lucene 4.
in Lucene 3.5 I extended org.apache.lucene.analysis.FilteringTokenFilter
and overrode accept() to remove undesired shingles. in Lucene 4
org.apache.lucene.analysis.FilteringTokenFilter does not exist?
I'm trying to achieve two things:
1) remove shingl
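For reference, a sketch of what the migrated filter might look like against the Lucene 4.0 package layout. The class name and the filler-character check are my guess at the shingle-cleanup use case, not a definitive implementation:

```java
import java.io.IOException;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.FilteringTokenFilter;

// Hypothetical filter that drops shingles containing the filler token "_"
public class ShingleCleanupFilter extends FilteringTokenFilter {
  private final CharTermAttribute term = addAttribute(CharTermAttribute.class);

  public ShingleCleanupFilter(TokenStream in) {
    super(true, in);  // true = preserve position increments of dropped tokens
  }

  @Override
  protected boolean accept() throws IOException {
    // keep the token only if it contains no shingle filler character
    return term.toString().indexOf('_') < 0;
  }
}
```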
On 01.11.2012 г. 15:09, Michael McCandless wrote:
On Thu, Nov 1, 2012 at 6:11 AM, Ivan Vasilev wrote:
Hi Guys,
I intend to extend DocumentStoredFieldVisitor class like this:
class DocumentStoredNonRepeatableFieldVisitor extends
DocumentStoredFieldVisitor {
@Override
public St
On Thu, Nov 1, 2012 at 6:11 AM, Ivan Vasilev wrote:
> Hi Guys,
>
> I intend to extend DocumentStoredFieldVisitor class like this:
>
> class DocumentStoredNonRepeatableFieldVisitor extends
> DocumentStoredFieldVisitor {
>
> @Override
> public Status needsField(FieldInfo fieldInfo) throw
Well, somehow you have a corrupt index. I'd be very interested in how
that happened :)
You need to run CheckIndex with -fix (it's a command-line tool, too)
to remove that bad segment, or else some search will eventually hit an
exception on that segment ... and merging can never be done with that
segme
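The invocation looks roughly like this (the jar path, version, and index directory are placeholders for your install; back up the index first, since -fix discards the documents in the bad segment):

```shell
# inspect only (read-only check)
java -cp lucene-core-4.0.0.jar org.apache.lucene.index.CheckIndex /path/to/index

# after backing up: drop the corrupt segment (its documents are lost)
java -cp lucene-core-4.0.0.jar org.apache.lucene.index.CheckIndex /path/to/index -fix
```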
Hi Guys,
I intend to extend DocumentStoredFieldVisitor class like this:
class DocumentStoredNonRepeatableFieldVisitor extends
DocumentStoredFieldVisitor {
@Override
public Status needsField(FieldInfo fieldInfo) throws IOException {
return fieldsToAdd == null || fieldsToAdd
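Ivan's snippet is cut off here; a hypothetical completion of the idea (load at most one stored value per field name, skipping repeats) might look like this, assuming the Lucene 4.0 StoredFieldVisitor API. The seen-set logic is my reading of his intent, not his actual code:

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.document.DocumentStoredFieldVisitor;
import org.apache.lucene.index.FieldInfo;

public class DocumentStoredNonRepeatableFieldVisitor
    extends DocumentStoredFieldVisitor {

  // field names whose first stored instance has already been accepted
  private final Set<String> seen = new HashSet<String>();

  @Override
  public Status needsField(FieldInfo fieldInfo) throws IOException {
    if (super.needsField(fieldInfo) == Status.NO) {
      return Status.NO;                     // field was not requested at all
    }
    // accept only the first occurrence of each requested field name
    return seen.add(fieldInfo.name) ? Status.YES : Status.NO;
  }
}
```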