How much filesystem cache you need does indeed depend on usage. There are no
hard requirements, but it is definitely an important performance factor: the
more memory you leave to the fs cache, the better. It is hard to figure out how
much is used specifically for your Lucene indices, but if your server does not
run any other
From my experience, you must be hitting some system issue. You should check
disk performance first (e.g., disk queue length on Windows), or enable verbose
GC logging to see the GC activity in detail.
I designed an auto-upgrade mechanism in the application by calling forceMerge(1),
to eradicate the hybrid index for
Could there be other applications running on the machine with 24 GB of memory?
That would leave less total available memory than is required, and in that case
there may be disk swapping, which takes a long time.
In theory, if you run this test on machines with 50 GB and 100 GB of memory, in
this case
It is generally unnecessary to use forceMerge; that's a legacy habit from
older versions of Lucene/Solr. Especially if the index is constantly changing,
forceMerge is generally both expensive and not very useful.
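For reference, a minimal sketch of what a forceMerge(1) call looks like on a
recent (5.x-style) API; the index path and analyzer are illustrative
assumptions, not the original poster's setup:

    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class ForceMergeSketch {
        public static void main(String[] args) throws Exception {
            try (Directory dir = FSDirectory.open(Paths.get("/path/to/index"));
                 IndexWriter writer = new IndexWriter(dir,
                         new IndexWriterConfig(new StandardAnalyzer()))) {
                // Expensive: rewrites the whole index down to a single segment,
                // which is why it can run for hours on a large index.
                writer.forceMerge(1);
            }
        }
    }

On a constantly changing index the single merged segment immediately starts
accumulating deletes again, which is part of why the call is rarely worth it.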
These indexes must be huge though if any of them are taking 8 hours.
What's the background
-Original Message-
> From: Alan Woodward [mailto:a...@flax.co.uk]
> Sent: Monday, 8 June 2015 12:23
> To: java-user@lucene.apache.org
> Subject: Re: Memory problem with TermQuery
Hi Anna,
In normal usage, perReaderTermState will be null, and TermQuery will be very
lightweight. It's in particular expert use cases (generally after queries have
been rewritten against a specific IndexReader) that the perReaderTermState will
be initialized. Are you caching rewritten queri
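For contrast, the ordinary lightweight usage described above looks like the
sketch below; the field and term are made-up examples:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    public class TermQuerySketch {
        public static void main(String[] args) {
            // No per-reader term state is attached here, so the query object
            // stays tiny; the heavier expert constructors only come into play
            // after queries are rewritten against a specific IndexReader.
            Query q = new TermQuery(new Term("body", "lucene"));
            System.out.println(q);
        }
    }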
Philippe Kernévez [pkerne...@octo.com] wrote:
> We use Lucene 2.4 (provided by Alfresco).
Lucene 2.4 is 6 years old. The obvious advice is to upgrade, but I guess you
have your reasons not to.
> We looked at a memory dump with Eclipse Memory Analyser, and we were quite
> surprised to see that mo
On 25/09/2012 20:09, Uwe Schindler wrote:
Hi,
Without a full output of "free -h" we cannot say anything. But total Linux
memory use should always be at 100% on a good server, otherwise memory is going
to waste (because the total includes cache usage, too). I think -Xmx may be too
low for your Java deployment? We have no information about
On Thu, Aug 2, 2012 at 3:13 AM, Laurent Vaills wrote:
> Hi everyone,
>
> Is there any chance to get this backported for a 3.6.2?
>
Hello, I personally have no problem with it, but it's really
technically not a bugfix, just an optimization.
It also doesn't solve the actual problem if you have a tom
Hi everyone,
Is there any chance to get this backported for a 3.6.2?
Regards,
Laurent
2012/8/2 Simon Willnauer
> On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote:
> > Thanks Robert for these inputs.
> >
> > Since we do not really need the Snowball analyzer for this field, we would not use
> > it for now.
http://static1.blip.pl/user_generated/update_pictures/1758685.jpg
On Thu, Aug 2, 2012 at 8:32 AM, roz dev wrote:
> wow!! That was quick.
>
> Thanks a ton.
>
>
> On Wed, Aug 1, 2012 at 11:07 PM, Simon Willnauer
> wrote:
>
>> On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote:
>> > Thanks Robert for th
wow!! That was quick.
Thanks a ton.
On Wed, Aug 1, 2012 at 11:07 PM, Simon Willnauer
wrote:
> On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote:
> > Thanks Robert for these inputs.
> >
> > Since we do not really need the Snowball analyzer for this field, we would not use
> > it for now. If this still does
On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote:
> Thanks Robert for these inputs.
>
> Since we do not really need the Snowball analyzer for this field, we would not use
> it for now. If this still does not address our issue, we would tweak the thread
> pool as per eks dev's suggestion - I am a bit hesitant to do th
Thanks Robert for these inputs.
Since we do not really need the Snowball analyzer for this field, we would not use
it for now. If this still does not address our issue, we would tweak the thread
pool as per eks dev's suggestion - I am a bit hesitant to do this change yet as
we would be reducing the thread pool, which c
On Tue, Jul 31, 2012 at 2:34 PM, roz dev wrote:
> Hi All
>
> I am using Solr 4 from trunk and using it with Tomcat 6. I am noticing that
> when we are indexing lots of data with 16 concurrent threads, the heap grows
> continuously. It remains high and ultimately most of the stuff ends up
> being moved
This is a progress update on the issue:
I have tried several things and they all gave improvements. In order of
magnitude they are
1) Reduced heap space from 6GB to 3GB.
This on its own has so far been the biggest win, as swapping almost completely
stopped after this step.
2) Began limiting t
Thanks everyone. Looks like I have lots of reading to do :-)
-Original Message-
From: Nader, John P
To: java-user@lucene.apache.org
Sent: Wed, 16 May 2012 16:27
Subject: Re: Memory question
Another good link is
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523
Lutz
>
>-Original Message-
>From: Chris Bamford [mailto:chris.bamf...@talktalk.net]
>Sent: Tuesday, 15 May 2012 16:38
>To: java-user@lucene.apache.org
>Subject: Re: Memory question
>
>
> Hi John,
>
>Very interesting, thanks for the detailed explanation. It certainl
java-user@lucene.apache.org
Sent: Tue, 15 May 2012 18:10
Subject: RE: Memory question
It mmaps the files into virtual memory if it runs on a 64-bit JVM. Because
of that you see the mmapped CFS files. This is outside the Java heap and is all
*virtual*; no RAM is explicitly occupied except the O/S
rs and then closes them based on how full
>>the heap is getting. My worry is that if the bulk of the memory is being
>>allocated outside the Jvm, how can I make sensible decisions?
>>
>>Thanks for any pointers / info.
>>
>>Chris
>>
>>
>>
>>---
Regards
Lutz
-Original Message-
From: Chris Bamford [mailto:chris.bamf...@talktalk.net]
Sent: Tuesday, 15 May 2012 16:38
To: java-user@lucene.apache.org
Subject: Re: Memory question
Hi John,
Very interesting, thanks for the detailed explanation. It certainly
sounds like the same
ar effect ?
Thanks again,
- Chris
-Original Message-
From: Nader, John P
To: java-user@lucene.apache.org
Sent: Tue, 15 May 2012 21:12
Subject: Re: Memory question
We've encountered this issue and came up with a fairly good approach to
address it.
We are on Lucene 3.0.2 with Java 1.6.0
nd then closes them based on how full
>the heap is getting. My worry is that if the bulk of the memory is being
>allocated outside the Jvm, how can I make sensible decisions?
>
>Thanks for any pointers / info.
>
>Chris
>
>
>
>-Original Message-
>From: u...@
It mmaps the files into virtual memory if it runs on a 64-bit JVM. Because
of that you see the mmapped CFS files. This is outside the Java heap and is all
*virtual*; no RAM is explicitly occupied except the O/S cache.
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMai
In versions from 3.3 onwards MMapDirectory is the default on 64-bit
Linux. Not sure exactly what that means wrt your questions, but it may
well be relevant.
--
Ian.
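As a rough sketch on the 3.x-era API (the path is an illustrative assumption),
this is the difference between letting FSDirectory pick the implementation and
asking for mmap explicitly:

    import java.io.File;
    import java.io.IOException;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.MMapDirectory;

    public class DirectoryChoiceSketch {
        static Directory openDefault(File path) throws IOException {
            // On 3.3+ this picks MMapDirectory by itself on 64-bit platforms.
            return FSDirectory.open(path);
        }

        static Directory openMapped(File path) throws IOException {
            // Explicit choice; the mapped files show up as virtual memory
            // outside the Java heap, as described above.
            return new MMapDirectory(path);
        }
    }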
On Tue, May 15, 2012 at 3:51 PM, Lutz Fechner wrote:
> Hi,
>
>
> By design memory outside the JVM heap space should not be accessib
Hi,
By design, memory outside the JVM heap space should not be accessible to
Java applications.
What you might see is the disk cache of the Linux storage subsystem.
Best Regards
Lutz
-Original Message-
From: Chris Bamford [mailto:chris.bamf...@talktalk.net]
Sent: Tuesday, 15 May 20
On Sat, 2011-09-03 at 20:09 +0200, Michael Bell wrote:
> To be exact, there are about 300 million documents. This is running on a 64
> bit JVM/64 bit OS with 24 GB(!) RAM allocated.
How much memory is allocated to the JVM?
> Now, their searches are working fine IF you do not SORT the results. If
Michael Bell wrote:
> How best to diagnose?
>
>> Call your java process this way
>>java -XX:HeapDumpPath=. -XX:+HeapDumpOnOutOfMemoryError
>> and drag'n'drop the resulting java_pid*.hprof into eclipse.
>> You will get an outline by class for the number and size of allocated
>> objects.
Just lo
On Saturday 03 September 2011 20:09:54 Michael Bell wrote:
> 2011-08-30 13:01:31,489 [TP-Processor8] ERROR
> com.gwava.utils.ServerErrorHandlerStrategy - reportError:
> nastybadthing ::
> com.gwava.indexing.lucene.internal.LuceneSearchController.performSear
>chOperation:229 :: EXCEPTION : java.lang
There is no difference between 2.9 and 3.0; it is exactly the same code with
only Java 5 specific API modifications and removal of deprecated methods.
The issue you have seems to be that maybe your index has grown beyond some
limits of your JVM.
Uwe
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-282
Lucene does not cache results. Operating systems do cache things, and
on Unix anyway (no idea about Windows) some speedups over time can
reasonably be attributed to disk caching by the OS.
Have you profiled your app to find out exactly what is using the
memory? Do you just use the one searcher or
OS level tools (top, ps, activity monitor, task manager) aren't great
ways to measure Java's memory usage, since they only see how much heap
Java has allocated from the OS. Within that heap, Java can have lots
of free space that it knows about but the OS does not (this is
Runtime.freeMemory()).
Y
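A quick illustration of looking at the heap from inside the JVM instead of
from top/ps (values are in bytes):

    public class HeapStatsSketch {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long allocated = rt.totalMemory();   // heap currently claimed from the OS
            long free      = rt.freeMemory();    // unused space inside that heap
            long max       = rt.maxMemory();     // the -Xmx ceiling
            System.out.println("max=" + max + " allocated=" + allocated
                    + " used=" + (allocated - free));
        }
    }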
On Mon, Mar 8, 2010 at 7:52 PM, Michael McCandless
wrote:
> This was done for performance (to remove alloc/init/GC load).
>
> There are two parts to it -- first, consolidating what used to be lots
> of little objects into shared byte[]/int[] blocks. Second, reusing
> those blocks.
Thanks, just o
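The allocation pattern being described can be illustrated with a toy pool;
this is not Lucene's actual ByteBlockPool, just a sketch of the
consolidate-and-reuse idea:

    import java.util.ArrayDeque;

    public class ToyBlockPool {
        static final int BLOCK_SIZE = 32 * 1024;
        private final ArrayDeque<byte[]> free = new ArrayDeque<byte[]>();

        // Hand out fixed-size blocks, recycling old ones when possible
        // instead of creating lots of small garbage for the collector.
        byte[] get() {
            byte[] b = free.poll();
            return b != null ? b : new byte[BLOCK_SIZE];
        }

        void recycle(byte[] block) {
            free.push(block);
        }
    }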
On Mon, Mar 8, 2010 at 1:18 PM, Christopher Laux wrote:
> I'm not sure if this is the right list, as it's sort of a development
> question too, but I don't want to bother them over there. Anyway, I'm
> curious as to the reason for using "manual memory management" a la
> ByteBlockPool and consorts
On 23 Sep 2009, at 17.55, Mindaugas Žakšauskas wrote:
I was kind of hinting on the resource planning. Every decent
enterprise application, apart from other things, has to provide its
memory requirements, and my point was - if it uses memory, how much of
it needs to be allocated? What are the bounda
On 23 Sep 2009, at 17.55, Mindaugas Žakšauskas wrote:
Luke says:
Has deletions? / Optimized? Yes (1614) / No
Very quick response: try optimizing your index and see what happens.
I'll get back to you unless someone beats me to it.
karl
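On the 2.x API of that era, "optimizing" boils down to a single call; a rough
sketch (the directory and analyzer handling are illustrative assumptions):

    import java.io.IOException;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;

    public class OptimizeSketch {
        static void optimize(Directory dir, Analyzer analyzer) throws IOException {
            IndexWriter writer = new IndexWriter(dir, analyzer,
                    IndexWriter.MaxFieldLength.UNLIMITED);
            // Merges all segments down and expunges deleted documents;
            // later versions replaced this call with forceMerge.
            writer.optimize();
            writer.close();
        }
    }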
Hi Karl,
On Tue, Sep 22, 2009 at 6:58 PM, Karl Wettin wrote:
> <..> Things that
> consume the most memory are probably field norms (8 bits per field and
> document unless omitted) and flyweighted terms (String#intern), things you
> can't really do that much about.
I was kind of hinting on the res
Hi Mindaugas,
it is - as you sort of point out - the readers associated with your
searcher that consume the memory, and not so much the searcher
itself. The things that consume the most memory are probably field norms (8
bits per field and document unless omitted) and flyweighted terms
(Strin
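Where a field doesn't need length normalization or index-time boosts, the
norms can be omitted, which is what the "unless omitted" above refers to. A
rough sketch on the 2.4-era API (field name and value are made up):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class OmitNormsSketch {
        static Document makeDoc() {
            Document doc = new Document();
            // NO_NORMS avoids the one byte per field per document mentioned above.
            doc.add(new Field("id", "42",
                    Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS));
            return doc;
        }
    }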
When implementing your own, it also helps to look at the existing
implementations in the FieldComparator class:
http://svn.apache.org/viewvc/lucene/java/trunk/src/java/org/apache/lucene/search/FieldComparator.java?revision=764551
-Yonik
http://www.lucidimagination.com
On Sat, Jun 13, 2009 at 9:
It's here:
http://lucene.apache.org/java/docs/nightly/
But remember this is trunk code, ie not yet released, so stuff is
still changing.
Mike
On Sat, Jun 13, 2009 at 9:30 AM, Marc Sturlese wrote:
>
> Thanks Mike, really useful info. I have downloaded the latest Lucene 2.9-dev
> to test the imp
Thanks Mike, really useful info. I have downloaded the latest Lucene 2.9-dev
to test the implementation of a FieldComparatorSource, but the API
documentation doesn't seem to be available.
I can access the class MissingStringLastComparatorSource:
http://lucene.apache.org/solr/api/org/apache/solr/
On Fri, Jun 12, 2009 at 6:09 PM, Marc Sturlese wrote:
> I have noticed I am experiencing sort of a memory leak with a
> CustomComparatorSource (which implements SortComparatorSource).
> I have a HashMap declared as variable of class in CustomComparatorSource:
This is unfortunately a known and rath
OK thanks for bringing closure.
Mike
On Thu, Mar 26, 2009 at 8:37 AM, Chetan Shah wrote:
>
> Ok. I was able to conclude that I am getting OOME due to my usage of the HTML
> Parser to get the HTML title and HTML text. I display 10 results per page
> and therefore end up calling the org.apache.luc
Ok. I was able to conclude that I am getting OOME due to my usage of the HTML
Parser to get the HTML title and HTML text. I display 10 results per page
and therefore end up calling the org.apache.lucene.demo.html.HTMLParser 10
times.
I modified my code to store the title and html summary in the
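The index-time approach looks roughly like this on the 2.x API (field names
are illustrative, not the poster's actual schema); the point is that the HTML
parser runs once per document while indexing instead of ten times per results
page:

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class StoredTitleSketch {
        static Document makeDoc(String title, String body, String summary) {
            Document doc = new Document();
            // Searchable and stored, so hits can show it without reparsing HTML.
            doc.add(new Field("title", title, Field.Store.YES, Field.Index.ANALYZED));
            // Stored only, for display.
            doc.add(new Field("summary", summary, Field.Store.YES, Field.Index.NO));
            // Indexed only, for matching.
            doc.add(new Field("contents", body, Field.Store.NO, Field.Index.ANALYZED));
            return doc;
        }
    }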
No, I don't hit OOME if I comment out the call to getHTMLTitle. The
heap
behaves perfectly.
I completely agree with you, the thread count goes haywire the
moment I call
the HTMLParser.getTitle(). I have seen a thread count of like 600
before
I hit OOME (with the getTitle() call on) and
Actually, I was hoping you could try leaving the getHTML calls in, but
increase the heap size of your Tomcat instance.
Ie, to be sure there really is a leak vs you're just not giving the
JRE enough memory.
I do like your hypothesis, but looking at HTMLParser it seems like the
thread should exit a
Highly appreciate your replies Michael.
No, I don't hit OOME if I comment out the call to getHTMLTitle. The heap
behaves perfectly.
I completely agree with you, the thread count goes haywire the moment I call
the HTMLParser.getTitle(). I have seen a thread count of like 600 before
I hit OOME
Odd. I don't know of any memory leaks w/ the demo HTMLParser, hmm
though it's doing some fairly scary stuff in its getReader() method.
EG it spawns a new thread every time you run it. And, it's parsing
the entire HTML document even though you only want the title.
You may want to switch to better
After some more research I discovered that the following code snippet
seems to be the culprit. I have to call this to get the "title" of the
indexed HTML page, and this is called 10 times as I display 10 results on
a page.
Any suggestions on how to achieve this without the OOME issue?
Is there anything else in this JRE?
65 MB ought to be plenty for what you are trying to do w/ just Lucene,
I think.
Though to differentiate whether "you are not giving enough RAM to
Lucene" vs "you truly have a memory leak", you should try increasing
the heap size to something absurdly big (256
I am using the default heap size, which according to NetBeans is around 65 MB.
If the RAM directory was not initialized correctly, how am I getting valid
search results? I am able to execute searches for quite some time before I
get OOME.
Makes sense? Or maybe I am missing something; please let m
Perhaps this is a simple question, but looking at your stack trace, I'm
not seeing where it was set during the Tomcat initialization, so here goes:
Are you setting up the JVM's heap size during your Tomcat initialization
somewhere?
If not, that very well could be part of your issue, as the st
The stack trace is attached.
http://www.nabble.com/file/p22667542/dump dump
The file size of
_30.cfx - 1462KB
_32.cfs - 3432KB
_30.cfs - 645KB
Michael McCandless-2 wrote:
>
>
> Hmm... after how many queries do you see the crash?
>
> Can you post the full OOME stack trace?
>
> You're
Hmm... after how many queries do you see the crash?
Can you post the full OOME stack trace?
You're using a RAMDirectory to hold the entire index... how large is
your index?
Mike
Chetan Shah wrote:
After reading this forum post :
http://www.nabble.com/Lucene-Memory-Leak-tt19276999.html#a
After reading this forum post :
http://www.nabble.com/Lucene-Memory-Leak-tt19276999.html#a19364866
I created a singleton for the StandardAnalyzer too, but the problem still
persists.
I have 2 singletons now: one for the StandardAnalyzer and the other for the
IndexSearcher.
The code is as follows :
package w
No, I have a singleton from which I get my searcher, and it is kept throughout
the application.
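One common shape for such a holder on the 2.x-era API is sketched below; the
index path is an illustrative assumption, not the poster's actual code:

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.IndexSearcher;

    public final class SearcherHolder {
        private static IndexSearcher searcher;

        public static synchronized IndexSearcher get() throws IOException {
            if (searcher == null) {
                IndexReader reader = IndexReader.open("/path/to/index");
                searcher = new IndexSearcher(reader);
            }
            // Shared and kept open for the life of the application.
            return searcher;
        }

        private SearcherHolder() {}
    }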
Michael McCandless-2 wrote:
>
>
> Are you not closing the IndexSearcher?
>
> Mike
>
> Chetan Shah wrote:
>
>>
>> I am initiating a simple search and after profiling my
>> application using
Are you not closing the IndexSearcher?
Mike
Chetan Shah wrote:
I am initiating a simple search, and after profiling my
application using
NetBeans I see constant heap consumption and eventually a server
(Tomcat)
crash due to an "out of memory" error. The thread count also keeps on
inc
On Mar 12, 2009, at 10:47 AM, Niels Ott wrote:
Michael McCandless wrote:
When RAM is full, IW flushes the pending changes to disk, but does
not commit them, meaning external (newly opened or reopened)
readers will not see the changes.
Is there a built-in mechanism in the IndexReader to
Michael McCandless wrote:
When RAM is full, IW flushes the pending changes to disk, but does not
commit them, meaning external (newly opened or reopened) readers will
not see the changes.
Is there a built-in mechanism in the IndexReader to reload the index
every now and then, after having c
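There is no automatic reload, but reopen() (2.4+) gives a cheap way to pick up
committed changes from time to time; a minimal sketch:

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;

    public class ReaderRefreshSketch {
        static IndexReader refresh(IndexReader current) throws IOException {
            IndexReader fresh = current.reopen();
            if (fresh != current) {
                // reopen() returned a new reader; release the old one.
                current.close();
            }
            return fresh;
        }
    }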
Niels Ott wrote:
Hi Mark,
markharw00d wrote:
Hi Niels,
See the javadocs for IndexWriter.setRAMBufferSizeMB()
I tried different settings. Apart from the fact that my memory issue
seems to be my own fault, I'm wondering what Lucene does in the
background. Apparently it does flush(), but
Hi Mark,
markharw00d wrote:
Hi Niels,
See the javadocs for IndexWriter.setRAMBufferSizeMB()
I tried different settings. Apart from the fact that my memory issue
seems to be my own fault, I'm wondering what Lucene does in the
background. Apparently it does flush(), but not commit()?
At le
Hi Niels,
See the javadocs for IndexWriter.setRAMBufferSizeMB()
Cheers
Mark
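For concreteness, a one-line sketch of the setting Mark is pointing at (the
64 MB value is just an example, not a recommendation):

    import org.apache.lucene.index.IndexWriter;

    public class RamBufferSketch {
        static void capIndexingBuffer(IndexWriter writer) {
            // Flush buffered documents to disk once roughly this much RAM is
            // used, instead of letting the in-memory buffer grow until the
            // heap fills up.
            writer.setRAMBufferSizeMB(64.0);
        }
    }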
Niels Ott wrote:
Hi Lucene professionals!
This may sound like a dumb beginner's question, but anyways: Can
Lucene run out of memory during indexing?
Should I use IndexWriter.flush() or .commit(), and if so, how ofte
chanchitodata wrote:
I actually don't hit OOM; the memory gets 100% full and the JVM hangs.
Is it GC'ing during this hang? Can you try reducing your heap size
down a lot and see if the GC runs faster? (Or, if you can provoke an
OOM).
How large is your heap now?
Independently what type
Hi Michael,
I actually don't hit OOM; the memory gets 100% full and the JVM hangs,
independently of what type of GC algorithm I use. I have tried all sorts of JVM
GC flags.
Profiling the application with YourKit, I can see that the TermInfo instances
do not get freed up when the GC is done.
The appl
Your index has relatively few terms: ~13 million.
Lucene stores TermInfo instances in two places. The first place is a
persistent array, called the terms index, of every 128th term. It's
created when the IndexReader is first opened. So in your case this is
~100.000 ("100 thousand") instances.
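The 128 here is the term index interval; on the 2.x/3.x API it can be widened
at indexing time to shrink that in-memory array, at some cost to term lookup
speed. A small sketch (256 is an arbitrary example value):

    import org.apache.lucene.index.IndexWriter;

    public class TermIndexIntervalSketch {
        static void widenInterval(IndexWriter writer) {
            // Keep every 256th term in the in-memory terms index instead of
            // every 128th, roughly halving that part of the reader's footprint.
            writer.setTermIndexInterval(256);
        }
    }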
Hi Michael,
I'm pretty sure that the IndexReaders are being closed. As I said, I use
Compass and Compass handles all the IndexReader stuff for me. I have
discussed this issue with Shay Banon for a while in the Compass forum and
he was the guy that led me to this forum after several different tes
Are you certain that old IndexReaders are being closed?
If you are not using CFS file format, how large are your *.tii files?
If you are using CFS file format, can you run CheckIndex on your index
and post the output? This way we can see how many terms are in the
index (which is what get
That's also why your app runs so slowly: opening an IndexReader
is a very expensive operation, and doing it for every doc is exceedingly
bad...
Best
Erick
On Wed, Nov 5, 2008 at 3:21 PM, bruno da silva <[EMAIL PROTECTED]> wrote:
> Hello Marc
> I'd suggest you create the IndexSearcher outside of your
Hello Marc
I'd suggest you create the IndexSearcher outside of your method and pass the
IndexReader as a parameter,
like: private Document getDocumentData(IndexReader reader, String id)
You don't have a memory leak; you have an intensive use of memory.
On Wed, Nov 5, 2008 at 3:11 PM, Marc S
Are you opening/closing your searcher and writer for each document?
If so, it sounds like you're not closing all of them appropriately and
that would be the cause of your memory increase. But you shouldn't
have to do that anyway. Why not just use the same IndexReader to
search and delete all your d
will try them instead of my own GC thread to
> see whether the problem can also be solved.
>
> Thanks Brian!
>
> Regards,
> Gong
>
> > -Original Message-
> > From: Beard, Brian [mailto:[EMAIL PROTECTED]
> > Sent: Monday, October 06, 2008 8:48
Gong
> -Original Message-
> From: Beard, Brian [mailto:[EMAIL PROTECTED]
> Sent: Monday, October 06, 2008 8:48 PM
> To: java-user@lucene.apache.org
> Subject: RE: Memory eaten up by String, Term and TermInfo?
>
> I played around with GC quite a bit in our app and
hotspot/gc/index.jsp
-Original Message-
From: Peter Cheng [mailto:[EMAIL PROTECTED]
Sent: Sunday, October 05, 2008 7:55 AM
To: java-user@lucene.apache.org
Subject: RE: Memory eaten up by String, Term and TermInfo?
I have confirmed that the OutOfMemoryError is not Lucene's problem. It
-Original Message-
> From: Michael McCandless [mailto:[EMAIL PROTECTED]
> Sent: Sunday, September 14, 2008 10:28 PM
> To: java-user@lucene.apache.org
> Subject: Re: Memory eaten up by String, Term and TermInfo?
>
>
> Small correction: it was checked in this morning (at least, on the
I'll try later and report back ASAP. You know, it takes days to cause OOM.
Thank you all!
Gong
> -Original Message-
> From: Michael McCandless [mailto:[EMAIL PROTECTED]
> Sent: Sunday, September 14, 2008 10:28 PM
> To: java-user@lucene.apache.org
> Subject: Re: Memor
Small correction: it was checked in this morning (at least, on the
East Coast of the US).
So you need to either build your own JAR using Lucene's trunk, or,
wait for tonite's build to run and then download the build artifacts
from here:
http://hudson.zones.apache.org/hudson/job/Luce
Can you try to update to the latest Lucene svn version, like yesterday?
LUCENE-1383 was checked in yesterday. This patch is addressing a leak
problem particular to J2EE applications.
--
Chris Lu
-
Instant Scalable Full-Text Search On Any Database/Application
site: http://w
Thanks for the link! I will post the problem there.
In the meantime, any J2EE application developers should know about this problem
and try to avoid Lucene checked out on or after May 23, 2008, svn revision
659602.
I tried svn 659601, which worked fine.
I will follow up on this email list when the proble
Just chipping in that I recall there being a number of discussions on
java-dev about ThreadLocal and web containers and how they should be
handled. Not sure if it pertains here or not, but you might find http://lucene.markmail.org/message/keosgz2c2yjc7qre?q=ThreadLocal
helpful.
You might a
Can you post the Python sources of the Lucene part of your application?
One thing to check is how the JRE is being instantiated from Python,
ie, what the equivalent setting is for -Xmx (= max heap size). It's
possible the 140 MB consumption is actually "OK" as far as the JRE is
concerned,
Thanks very much for this; I'll give it a shot.
Keith.
On 4 Jul 2008, at 00:02, Paul Smith wrote:
(there are around 6,000,000 posts on the message board database)
Date encoded as yyMMdd: appears to be using around 30M
Date encoded as yyMMddHHmmss: appears to be using more than 400M!
I g
(there are around 6,000,000 posts on the message board database)
Date encoded as yyMMdd: appears to be using around 30M
Date encoded as yyMMddHHmmss: appears to be using more than 400M!
I guess I would have understood if I was seeing the usage double for
sure, or even a little more; no idea
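The difference is in the number of unique terms: day resolution produces at
most a few thousand distinct values across 6,000,000 posts, while second
resolution can approach one term per post, hence the much larger in-memory
footprint. DateTools is one way to pick the resolution explicitly (the exact
formats differ from the yyMMdd encoding above; this is only an illustration):

    import java.util.Date;
    import org.apache.lucene.document.DateTools;

    public class DateResolutionSketch {
        public static void main(String[] args) {
            Date now = new Date();
            // Coarse: at most one possible term per day.
            String day = DateTools.dateToString(now, DateTools.Resolution.DAY);
            // Fine: potentially one term per second.
            String second = DateTools.dateToString(now, DateTools.Resolution.SECOND);
            System.out.println(day + " vs " + second);
        }
    }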
Hi Ethan,
Yes, it would be good to have this in JIRA. Please see
http://wiki.apache.org/lucene-java/HowToContribute for info about generating
the patch, etc.
Thanks,
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Ethan Tao <[EMAIL PROTEC
Thanks, the link was helpful. I'll let you know if I find anything.
Thanks for all the replies to this.
Steve
Doron Cohen wrote:
Stephen Gray wrote:
Thanks. If the extra memory allocated is native memory I don't think
jconsole includes it in "non-heap" as it doesn't show this as
increasin
Stephen Gray wrote:
> Thanks. If the extra memory allocated is native memory I don't think
> jconsole includes it in "non-heap" as it doesn't show this as
> increasing, and jmap/jhat just dump/analyse the heap. Do you know of an
> application that can report native memory usage?
Sorry, but I didn
I actually had to deal with a leak in non-heap native memory once. I am
running on Linux so I just use good old "ps" to monitor native memory usage.
Bill
On 5/18/07, Stephen Gray <[EMAIL PROTECTED]> wrote:
Thanks. If the extra memory allocated is native memory I don't think
jconsole includes
Thanks. If the extra memory allocated is native memory I don't think
jconsole includes it in "non-heap" as it doesn't show this as
increasing, and jmap/jhat just dump/analyse the heap. Do you know of an
application that can report native memory usage?
Thanks,
Steve
Doron Cohen wrote:
Stephen
Stephen Gray <[EMAIL PROTECTED]> wrote on 17/05/2007 22:40:01:
> One interesting thing is that although the memory allocated as
> reported by the processes tab of Windows Task Manager goes up and up,
> and the JVM eventually crashes with an OutOfMemory error, the total size
> of heap + non-heap as
Hi Otis,
Thanks very much for your reply.
I've removed the LuceneIndexAccessor code, and still have the same
problem, so that at least rules out LuceneIndexAccessor as the source.
maxBufferedDocs is just set to the default, which I believe is 10.
I've tried jconsole, + jmap/jhat for looking
Hi Steve,
You said the OOM happens only when you are indexing. You don't need
LuceneIndexAccess for that, so get rid of that to avoid one suspect that is not
part of Lucene core. What is your maxBufferedDocs set to? And since you are
using JVM 1.6, check out jmap, jconsole & friends, they'll
Daniel Noll wrote:
On Tuesday 15 May 2007 21:59:31 Narednra Singh Panwar wrote:
try using the -Xmx option with your application and specify maximum/minimum
memory for your application.
It's funny how a lot of people instantly suggest this. What if it isn't
possible? There was a situation a wh
Thanks, that narrows it down a bit.
Thanks for all the replies to my question.
Steve
Mark Miller wrote:
I don't have much help to offer other than to say I am also using a
tweaked version of the IndexAccess code you are, with java 1.6, with
hundreds of thousands to millions of docs, at multip
On Tuesday 15 May 2007 21:59:31 Narednra Singh Panwar wrote:
> try using the -Xmx option with your application and specify maximum/minimum
> memory for your application.
It's funny how a lot of people instantly suggest this. What if it isn't
possible? There was a situation a while back where I sa
Try using the -Xmx option with your application and specify maximum/minimum
memory for your application.
Hope this will solve your problem.
On 5/15/07, Stephen Gray <[EMAIL PROTECTED]> wrote:
Hi everyone,
I have an application that indexes/searches xml documents using Lucene.
I'm having a prob
I don't have much help to offer other than to say I am also using a
tweaked version of the IndexAccess code you are, with java 1.6, with
hundreds of thousands to millions of docs, at multiple locations, for
months -- and I have not seen any memory leaks. Leads me to think the
leak may be with y
I'm searching a 20GB index and my searching JVM is allocated 1Gig.
However, my indexing app only had 384 MB available to it, which means you
can get away with far less. I believe certain index tables will need to
be swapped in and out of memory though so it may not search as quickly.
With a 1.
When your app gets a java.lang.OutOfMemoryError.
--
Ian.
On 3/14/07, Dennis Berger <[EMAIL PROTECTED]> wrote:
Ian Lea wrote:
> No, you don't need 1.8Gb of memory. Start with default and raise if
> you need to?
how do I know when I need it?
> Or jump straight in at about 512Mb.
>
>
> -