Hello,
I'm using Lucene 2.2.0. I have a query class that wraps an
IndexSearcher object. At the moment we create a new IndexSearcher each
time the query class is instantiated, and that searcher is then used
for the life of the query class. Multiple queries are run against the
IndexSearcher object, for example:

Query partialMatchQuery = new WildcardQuery(new Term("column_name",
        "termToSearch*"));
partialMatchQuery.setBoost(100);

When I deployed the application on a Tomcat instance with 512 MB of
RAM, it first threw a TooManyClauses exception, so I set the max clause
count to Integer.MAX_VALUE. Then it threw an OutOfMemoryError.
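A WildcardQuery rewrites to a BooleanQuery with one clause per matching
term in the index, so raising maxClauseCount to Integer.MAX_VALUE just
trades the TooManyClauses exception for memory exhaustion. The replies
below also point at the per-instantiation searchers; here is a minimal
sketch of sharing one IndexSearcher instead (the class and method names
are illustrative, not from the original code):

import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;

// Sketch: one searcher shared by every instance of the query class,
// replaced only on an explicit refresh instead of per instantiation.
public class SharedSearcher {
    private static IndexSearcher searcher;

    public static synchronized IndexSearcher get(String indexDir) throws IOException {
        if (searcher == null) {
            // The IndexSearcher(String) constructor opens its own
            // IndexReader, so close() below releases that reader too.
            searcher = new IndexSearcher(indexDir);
        }
        return searcher;
    }

    // Call periodically (e.g. hourly) to pick up newly indexed records.
    // A production version would also have to wait for in-flight
    // searches (e.g. via reference counting) before closing.
    public static synchronized void refresh(String indexDir) throws IOException {
        if (searcher != null) {
            searcher.close();
            searcher = null;
        }
        get(indexDir);
    }
}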
>> ... Search. It is used for all other searches.
>>
>> 5. I am using Lucene 2.2.0 version, in a WEB Application which
>> indexes 15 fields per document and [...] it fetches 600 records from
>> the index store (5 directories).
hossman wrote:

: I set IndexSearcher as the application Object after the first search.
...
: how can i reconstruct the new IndexSearcher for every hour to see the
: updated records.
i'm confused ... my understanding based on the comments you made below
(in an earlier message) was that you already *were* constructing a new
IndexSearcher once an hour -- but every time you do that, your memory
usage grows, and sometimes you got OOM Errors.

if that's not what you said, then i think you need to explain, in
detail, in one message, exactly what your problem is. And don't assume
we understand anything -- tell us *EVERYTHING* (like, for example, what
the word "crore" means, how "searcherOne" is implemented, and the
answer to the specific question i asked in my last message: does your
application contain, anywhere in it, any code that will close anything
(IndexSearchers or IndexReaders)?
: > skimming the code below, you are opening an IndexSearcher over a
: > MultiReader over 4 separate IndexReaders ... when you instantiate a
: > new IndexSearcher are you explicitly closing both the old
: > IndexSearcher as well as all 4 of the IndexReaders in the
: > MultiReader?
: >
: > closing an IndexSearcher will only close the underlying Reader if
: > it opened it ... and a MultiReader constructed from other
: > IndexReaders will never close them.
-Hoss
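To illustrate Hoss's point, here is a sketch of an hourly swap that
closes both the old searcher and the four sub-readers explicitly (the
class and the directory-array parameter are hypothetical, not from the
poster's code):

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.search.IndexSearcher;

// A MultiReader built from other IndexReaders never closes them, and
// closing an IndexSearcher only closes a reader it opened itself, so
// every sub-reader has to be closed by hand when swapping searchers.
public class HourlySwap {
    private IndexReader[] subReaders;
    private IndexSearcher searcher;

    public synchronized IndexSearcher reopen(String[] indexDirs) throws IOException {
        if (searcher != null) {
            searcher.close();          // does NOT close readers we passed in
            for (int i = 0; i < subReaders.length; i++) {
                subReaders[i].close(); // release each sub-reader's heap
            }
        }
        subReaders = new IndexReader[indexDirs.length];
        for (int i = 0; i < indexDirs.length; i++) {
            subReaders[i] = IndexReader.open(indexDirs[i]);
        }
        searcher = new IndexSearcher(new MultiReader(subReaders));
        return searcher;
    }
}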
I use StandardAnalyzer. the records daily ranges from 5 crore to 6
crore. for every second i am updating my Index. i instantiate
IndexSearcher object one time for all the searches. for an hour can i
see the updated records in the indexstore by reinstantiating
IndexSearcher object. but the problem when i reinstantiate
IndexSearcher, my RAM memory gets appended. is there any ...
IndexReader indexSource4 = IndexReader.open(indexDir4);
// ... the other three IndexReaders are opened the same way (four in
// total, combined into mergedReader -- the MultiReader discussed above)

IndexSearcher is = new IndexSearcher(mergedReader);

QueryParser parser = new QueryParser("contents", new StandardAnalyzer());

String searchQuery = new StringBuffer().append(inputNo).append(/* ... */).toString();

Query callDetailquery = parser.parse(searchQuery);

hits = is.search(callDetailquery);

it takes 300 MB of RAM for every search and it is very very slow. is
there any other way to control the Memory and to make search faster. i
use SINGLETON to use the IndexSearcher as a one time used object for
all the instances.
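One hedged side note on the snippet above: the earlier message says
each search only needs 600 records, and iterating a Hits object
re-executes the query as you walk deeper into the results, whereas a
TopDocs-based call bounds the work up front. Continuing from the same
is and callDetailquery variables:

import org.apache.lucene.document.Document;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

// Ask for just the top 600 hits instead of walking a Hits object.
TopDocs top = is.search(callDetailquery, null, 600);
ScoreDoc[] scoreDocs = top.scoreDocs;
for (int i = 0; i < scoreDocs.length; i++) {
    Document d = is.doc(scoreDocs[i].doc);
    // process the record ...
}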
-Original Message-
From: Suba Suresh [mailto:[EMAIL PROTECTED]
Sent: 13 July 2006 15:30
To: java-user@lucene.apache.org
Subject: Re: Out of memory error

Thanks.

I am using the getText(PDDocument) method of the PDFTextStripper. I
will try the other suggestion.

suba suresh.

Rob Staveley (Tom) wrote:
If you are using
http://www.pdfbox.org/javadoc/org/pdfbox/util
[...]eld.Store,%20org.apache.lucene.document.Field.Index)).

Let us know how you get on. There are a lot of people fighting very
similar battles on this list.
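Rob's suggestion is truncated in the archive, but it appears to point
at streaming the extracted text instead of materializing it:
PDFTextStripper.writeText(PDDocument, Writer) writes pages out as they
are parsed, and Lucene's Field(String, Reader) constructor lets the
indexer pull text through a Reader rather than holding one huge
String. A sketch along those lines (the temp-file handling, class name,
and field name are assumptions):

import java.io.*;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.pdfbox.pdmodel.PDDocument;
import org.pdfbox.util.PDFTextStripper;

// Sketch: spool the PDF text to a temp file, then index it through a
// Reader so neither PDFBox nor Lucene buffers the whole 300 MB string.
public class StreamingPdfIndexer {
    public static Document toDocument(File pdf) throws IOException {
        File tmp = File.createTempFile("pdftext", ".txt");
        PDDocument pd = PDDocument.load(pdf.getAbsolutePath());
        try {
            Writer out = new BufferedWriter(new FileWriter(tmp));
            try {
                // Streams text page by page, unlike getText(PDDocument),
                // which builds the entire document text as one String.
                new PDFTextStripper().writeText(pd, out);
            } finally {
                out.close();
            }
        } finally {
            pd.close();
        }
        Document doc = new Document();
        // Reader-valued fields are tokenized and unstored; Lucene reads
        // the text incrementally while indexing.
        doc.add(new Field("contents", new BufferedReader(new FileReader(tmp))));
        return doc;
    }
}

The temp file would still need to be deleted once
IndexWriter.addDocument() has consumed it; this is a sketch of the
streaming idea, not the list's verified recommendation.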
... is still giving you an out of memory error then it is possibly an
issue with PDFBox. If that is the case then please create an issue and
attach/upload the PDF on the PDFBox site.

Ben
-Original Message-
From: Suba Suresh [mailto:[EMAIL PROTECTED]
Sent: 13 July 2006 14:55
To: java-user@lucene.apache.org
Subject: Out of memory error

I am indexing different document formats with lucene 1.9. One of the
pdf files I am indexing is 300 MB. Whenever the index writer hits that
file it stops the indexing with an "Out of Memory" exception. I am
using the pdf box library to index. I have set the following merge
factors in my code. write...
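The actual settings are cut off above. For reference, a hedged sketch
of the Lucene 1.9-era IndexWriter knobs a message like this usually
means; the path and values are illustrative, not the poster's:

import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class WriterTuning {
    public static void main(String[] args) throws IOException {
        // Illustrative values only, not the poster's actual settings.
        IndexWriter writer = new IndexWriter("/tmp/index", new StandardAnalyzer(), true);
        writer.setMergeFactor(10);       // how many segments get merged at once
        writer.setMaxBufferedDocs(100);  // docs buffered in RAM before a flush
        writer.setMaxFieldLength(10000); // terms indexed per field (the 1.9
                                         // default); a 300 MB document is
                                         // silently truncated at this limit
        writer.close();
    }
}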