Re: Memory footprint of individual indices at runtime

2017-06-05 Thread Adrien Grand
Filesystem cache usage indeed depends on your workload. There are no hard requirements, but this is definitely an important performance factor: the more you give to the fs cache, the better. It is hard to figure out how much is used specifically for your Lucene indices, but if your server does not run any other

Re: Re: memory cost in forceMerge(1)

2015-08-11 Thread Duke DAI
From my experience, you must have hit some system issue. You should check disk performance first, disk queue length on Windows. Or you can enable gc verbose to know the gc activities in detail. I designed an auto upgrade mechanism in the application by calling forceMerge(1), to eradicate hybrid index for

Re: Re: memory cost in forceMerge(1)

2015-08-11 Thread Phaneendra N
There could be other applications running on the machine with 24 GB memory, which would result in the total available memory being less than what is required. In this case there may be disk swap, which would take a long time. In theory, if you run this test on machines with memory 50 GB and 100 GB in this case

Re: memory cost in forceMerge(1)

2015-08-10 Thread Erick Erickson
It is generally unnecessary to use forceMerge, that's a legacy from older versions of Lucene/Solr. Especially if the index is constantly changing, forceMerge generally is both expensive and not very useful. These indexes must be huge though if any of them are taking 8 hours. What's the background

Re: Memory problem with TermQuery

2015-06-08 Thread Alan Woodward
ginal Message- > From: Alan Woodward [mailto:a...@flax.co.uk] > Sent: Montag, 8. Juni 2015 12:23 > To: java-user@lucene.apache.org > Subject: Re: Memory problem with TermQuery > > Hi Anna, > > In normal usage, perReaderTermState will be null, and TermQuery will be very &g

RE: Memory problem with TermQuery

2015-06-08 Thread Anna Maier
@lucene.apache.org Subject: Re: Memory problem with TermQuery Hi Anna, In normal usage, perReaderTermState will be null, and TermQuery will be very lightweight. It's in particular expert use cases (generally after queries have been rewritten against a specific IndexReader) that the perReaderTermState wil

Re: Memory problem with TermQuery

2015-06-08 Thread Alan Woodward
Hi Anna, In normal usage, perReaderTermState will be null, and TermQuery will be very lightweight. It's in particular expert use cases (generally after queries have been rewritten against a specific IndexReader) that the perReaderTermState will be initialized. Are you cacheing rewritten queri

RE: Memory consumption on lucene 2.4

2014-11-21 Thread Toke Eskildsen
Philippe Kernévez [pkerne...@octo.com] wrote: > We use Lucene 2.4 (provided by Alfresco). Lucene 2.4 is 6 years old. The obvious advice is to upgrade, but I guess you have your reasons not to. > We looked at a memory dump with Eclipse Memory Analyser, and we were quite > surprised to see that mo

Re: Memory issues with Lucene deployment

2012-09-27 Thread Paul Taylor
On 25/09/2012 20:09, Uwe Schindler wrote: Hi, Without a full output of "free -h" we cannot say anything. But the total Linux memory usage should always be at 100% on a good server, otherwise it's wasted (because full memory includes cache usage, too). I think -Xmx may be too low for your Jav

RE: Memory issues with Lucene deployment

2012-09-25 Thread Uwe Schindler
Hi, Without a full output of "free -h" we cannot say anything. But the total Linux memory usage should always be at 100% on a good server, otherwise it's wasted (because full memory includes cache usage, too). I think -Xmx may be too low for your Java deployment? We have no information about

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-02 Thread Robert Muir
On Thu, Aug 2, 2012 at 3:13 AM, Laurent Vaills wrote: > Hi everyone, > > Is there any chance to get this backported for a 3.6.2 ? > Hello, I personally have no problem with it: but it's really technically not a bugfix, just an optimization. It also doesn't solve the actual problem if you have a tom

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-02 Thread Laurent Vaills
Hi everyone, Is there any chance to get this backported for a 3.6.2 ? Regards, Laurent 2012/8/2 Simon Willnauer > On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote: > > Thanks Robert for these inputs. > > > > Since we do not really use the Snowball analyzer for this field, we would not use > > it for now.

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-01 Thread Dawid Weiss
http://static1.blip.pl/user_generated/update_pictures/1758685.jpg On Thu, Aug 2, 2012 at 8:32 AM, roz dev wrote: > wow!! That was quick. > > Thanks a ton. > > > On Wed, Aug 1, 2012 at 11:07 PM, Simon Willnauer > wrote: > >> On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote: >> > Thanks Robert for th

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-01 Thread roz dev
wow!! That was quick. Thanks a ton. On Wed, Aug 1, 2012 at 11:07 PM, Simon Willnauer wrote: > On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote: > > Thanks Robert for these inputs. > > > > Since we do not really use the Snowball analyzer for this field, we would not use > > it for now. If this still does

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-01 Thread Simon Willnauer
On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote: > Thanks Robert for these inputs. > > Since we do not really use the Snowball analyzer for this field, we would not use > it for now. If this still does not address our issue, we would tweak the thread > pool as per eks dev's suggestion - I am a bit hesitant to do th

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-01 Thread roz dev
Thanks Robert for these inputs. Since we do not really use the Snowball analyzer for this field, we would not use it for now. If this still does not address our issue, we would tweak the thread pool as per eks dev's suggestion - I am a bit hesitant to do this change yet as we would be reducing the thread pool which c

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-01 Thread Robert Muir
On Tue, Jul 31, 2012 at 2:34 PM, roz dev wrote: > Hi All > > I am using Solr 4 from trunk and using it with Tomcat 6. I am noticing that > when we are indexing lots of data with 16 concurrent threads, Heap grows > continuously. It remains high and ultimately most of the stuff ends up > being moved
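
The retention mechanism behind this thread can be reproduced with a plain `ThreadLocal` and a fixed thread pool. This is a hedged sketch, not Lucene code: the plain `ThreadLocal` stands in for Lucene's `CloseableThreadLocal`, and the 1 MB buffer is purely illustrative of per-thread analyzer state.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalRetention {
    // Plain ThreadLocal standing in for per-thread analyzer state.
    static final ThreadLocal<byte[]> PER_THREAD = new ThreadLocal<>();

    // Returns true if a value set by one task is still reachable from a
    // later task running on the same pooled thread.
    static boolean valueSurvivesAcrossTasks() {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        try {
            // Task 1: attach a large buffer to the (pooled, long-lived) thread.
            pool.submit(() -> PER_THREAD.set(new byte[1024 * 1024])).get();
            // Task 2: the buffer is still referenced -- with 16 pool threads
            // each holding such state, heap grows as described in the thread.
            return pool.submit(() -> PER_THREAD.get() != null).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("still referenced: " + valueSurvivesAcrossTasks());
    }
}
```

Because pooled threads never die, their thread-local values stay reachable until something explicitly clears them, which is exactly why container thread pools interact badly with heavy per-thread state.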

Re: Memory question

2012-05-21 Thread Chris Bamford
This is a progress update on the issue: I have tried several things and they all gave improvements. In order of magnitude they are 1) Reduced heap space from 6GB to 3GB. This on its own has so far been the biggest win as swapping almost completely stopped after this step. 2) Began limiting t

Re: Memory question

2012-05-16 Thread Chris Bamford
Thanks everyone. Looks like I have lots of reading to do :-) -Original Message- From: Nader, John P To: java-user@lucene.apache.org Sent: Wed, 16 May 2012 16:27 Subject: Re: Memory question Another good link is http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523

Re: Memory question

2012-05-16 Thread Nader, John P
Lutz > >-Original Message- >From: Chris Bamford [mailto:chris.bamf...@talktalk.net] >Sent: Dienstag, 15. Mai 2012 16:38 >To: java-user@lucene.apache.org >Subject: Re: Memory question > > > Hi John, > >Very interesting, thanks for the detailed explanation. It certainl

Re: Memory question

2012-05-16 Thread Christoph Kaser
java-user@lucene.apache.org Sent: Tue, 15 May 2012 18:10 Subject: RE: Memory question It mmaps the files into virtual memory if it runs on a 64 bit JVM. Because of that you see the mmapped CFS files. This is outside the Java heap and is all *virtual*; no RAM is explicitly occupied except the O/S

Re: Memory question

2012-05-15 Thread Sean Bridges
rs and then closes them based on how full >>the heap is getting. My worry is that if the bulk of the memory is being >>allocated outside the Jvm, how can I make sensible decisions? >> >>Thanks for any pointers / info. >> >>Chris >> >> >> >>---

RE: Memory question

2012-05-15 Thread Lutz Fechner
Regards Lutz -Original Message- From: Chris Bamford [mailto:chris.bamf...@talktalk.net] Sent: Dienstag, 15. Mai 2012 16:38 To: java-user@lucene.apache.org Subject: Re: Memory question Hi John, Very interesting, thanks for the detailed explanation. It certainly sounds like the same

Re: Memory question

2012-05-15 Thread Chris Bamford
ar effect ? Thanks again, - Chris -Original Message- From: Nader, John P To: java-user@lucene.apache.org Sent: Tue, 15 May 2012 21:12 Subject: Re: Memory question We've encountered this issue and came up with a fairly good approach to address it. We are on Lucene 3.0.2 with Java 1.6.0

Re: Memory question

2012-05-15 Thread Nader, John P
nd then closes them based on how full >the heap is getting. My worry is that if the bulk of the memory is being >allocated outside the Jvm, how can I make sensible decisions? > >Thanks for any pointers / info. > >Chris > > > >-Original Message- >From: u...@

Re: RE: Memory question

2012-05-15 Thread Chris Bamford
om: u...@thetaphi.de To: java-user@lucene.apache.org Sent: Tue, 15 May 2012 18:10 Subject: RE: Memory question It mmaps the files into virtual memory if it runs on a 64 bit JVM. Because of that you see the mmapped CFS files. This is outside the Java heap and is all *virtual*; no RAM is explicitly occupied excep

RE: Memory question

2012-05-15 Thread Uwe Schindler
It mmaps the files into virtual memory if it runs on a 64 bit JVM. Because of that you see the mmapped CFS files. This is outside the Java heap and is all *virtual*; no RAM is explicitly occupied except the O/S cache. - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMai
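
The mechanism Uwe describes can be seen with plain `java.nio`, the same API `MMapDirectory` builds on. A minimal sketch, with a hypothetical temp file standing in for a Lucene .cfs segment file; the mapped region lives in virtual address space and the OS page cache, not on the Java heap.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
    // Maps a file read-only and returns the mapped size. The mapping is
    // virtual memory backed by the OS cache, not Java heap.
    static long mapAndMeasure(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            return buf.capacity();
        }
    }

    // Creates a 1 MB scratch file (stand-in for a segment file), maps it,
    // and returns the mapped size.
    static long demoMapSize() {
        try {
            Path p = Files.createTempFile("segment", ".cfs");
            Files.write(p, new byte[1 << 20]);
            long n = mapAndMeasure(p);
            Files.deleteIfExists(p);
            return n;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("mapped " + demoMapSize() + " bytes outside the heap");
    }
}
```

This is why OS tools report large virtual sizes for mmap-based deployments while the Java heap stays small.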

Re: Memory question

2012-05-15 Thread Ian Lea
In versions from 3.3 onwards MMapDirectory is the default on 64-bit linux. Not sure exactly what that means wrt your questions, but may well be relevant. -- Ian. On Tue, May 15, 2012 at 3:51 PM, Lutz Fechner wrote: > Hi, > > > By design memory outside the JVM heap space should not be accessib

RE: Memory question

2012-05-15 Thread Lutz Fechner
Hi, By design, memory outside the JVM heap space should not be accessible to java applications. What you might see is the disc cache of the Linux storage subsystem. Best Regards Lutz -Original Message- From: Chris Bamford [mailto:chris.bamf...@talktalk.net] Sent: Dienstag, 15. Mai 20

Re: Memory issues

2011-09-05 Thread Toke Eskildsen
On Sat, 2011-09-03 at 20:09 +0200, Michael Bell wrote: > To be exact, there are about 300 million documents. This is running on a 64 > bit JVM/64 bit OS with 24 GB(!) RAM allocated. How much memory is allocated to the JVM? > Now, their searches are working fine IF you do not SORT the results. If
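
A rough back-of-the-envelope model shows why sorting blows up at this scale. This sketch makes an assumption the thread only implies: Lucene of this era populates a field cache with one value per document the first time you sort on a field, so even the cheapest case (an int field) costs about 4 bytes per doc; String sorts cost considerably more.

```java
public class SortCacheEstimate {
    // Assumption: sorting loads one value per document into the field cache,
    // so an int sort field needs ~4 bytes per doc. String sorts additionally
    // need an ord per doc plus the distinct term values themselves.
    static long intSortFieldBytes(long numDocs) {
        return numDocs * 4L;
    }

    public static void main(String[] args) {
        long docs = 300_000_000L; // the index size mentioned above
        System.out.println(intSortFieldBytes(docs) / (1024 * 1024)
                + " MB per int sort field");
    }
}
```

For 300 million documents that is over a gigabyte per sorted int field, before counting anything else on the heap, which is consistent with searches working until a sort is requested.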

Re: Memory issues

2011-09-05 Thread Stefan Trcek
Michael Bell wrote: > How best to diagnose? > >> Call your java process this way >>java -XX:HeapDumpPath=. -XX:+HeapDumpOnOutOfMemoryError >> and drag'n'drop the resulting java_pid*.hprof into eclipse. >> You will get an outline by class for the number and size of allocated >> objects. Just lo

Re: Memory issues

2011-09-05 Thread Stefan Trcek
On Saturday 03 September 2011 20:09:54 Michael Bell wrote: > 2011-08-30 13:01:31,489 [TP-Processor8] ERROR > com.gwava.utils.ServerErrorHandlerStrategy - reportError: > nastybadthing :: > com.gwava.indexing.lucene.internal.LuceneSearchController.performSearchOperation:229 :: EXCEPTION : java.lang

RE: Memory issues

2011-09-03 Thread Uwe Schindler
There is no difference between 2.9 and 3.0, it's exactly the same code with only Java 5 specific API modifications and removal of deprecated methods. The issue you have seems to be that maybe your index has grown beyond some limits of your JVM. Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-282

Re: Memory leak with Lucene Search ?

2011-07-12 Thread Ian Lea
Lucene does not cache results. Operating systems do cache things and on unix anyway (no idea about windows) some speedups over time can reasonably be attributed to disk caching by the OS. Have you profiled your app to find out exactly what is using the memory? Do you just use the one searcher or

Re: Memory use and Lucene

2010-04-02 Thread Michael McCandless
OS level tools (top, ps, activity monitor, task manager) aren't great ways to measure Java's memory usage, since they only see how much heap java has allocated from the OS. Within that heap, java can have lots of free space that it knows about but the OS does not (this is Runtime.freeMemory()). Y
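
The distinction Michael draws can be measured directly from inside the JVM with the standard `Runtime` API: the OS sees the whole reserved heap, while Java itself knows how much of it is actually free.

```java
public class HeapStats {
    // Heap actually in use = heap reserved from the OS minus the free space
    // the JVM knows about but the OS does not.
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("heap the OS sees (totalMemory) : " + rt.totalMemory());
        System.out.println("free inside that heap          : " + rt.freeMemory());
        System.out.println("actually used                  : " + usedHeapBytes());
        System.out.println("hard ceiling (maxMemory)       : " + rt.maxMemory());
    }
}
```

Comparing `totalMemory()` with `usedHeapBytes()` over time gives a far more honest picture of Java memory use than `top` or Task Manager.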

Re: memory management style

2010-03-09 Thread Christopher Laux
On Mon, Mar 8, 2010 at 7:52 PM, Michael McCandless wrote: > This was done for performance (to remove alloc/init/GC load). > > There are two parts to it -- first, consolidating what used to be lots > of little objects into shared byte[]/int[] blocks.  Second, reusing > those blocks. Thanks, just o

Re: memory management style

2010-03-08 Thread Michael McCandless
On Mon, Mar 8, 2010 at 1:18 PM, Christopher Laux wrote: > I'm not sure if this is the right list, as it's sort of a development > question too, but I don't want to bother them over there. Anyway, I'm > curious as to the reason for using "manual memory management" a la > ByteBlockPool and consorts

Re: Memory consumed by IndexSearcher

2009-09-23 Thread Karl Wettin
23 sep 2009 kl. 17.55 skrev Mindaugas Žakšauskas: I was kind of hinting on the resource planning. Every decent enterprise application, apart from other things, has to provide its memory requirements, and my point was - if it uses memory, how much of it needs to be allocated? What are the bounda

Re: Memory consumed by IndexSearcher

2009-09-23 Thread Karl Wettin
23 sep 2009 kl. 17.55 skrev Mindaugas Žakšauskas: Luke says: Has deletions? / Optimized? Yes (1614) / No Very quick response, try optimizing your index and see what happens. I'll get back to you unless someone beats me to it. karl

Re: Memory consumed by IndexSearcher

2009-09-23 Thread Mindaugas Žakšauskas
Hi Karl, On Tue, Sep 22, 2009 at 6:58 PM, Karl Wettin wrote: > <..> The things that > consume the most memory are probably field norms (8 bits per field and > document unless omitted) and flyweighted terms (String#intern), things you > can't really do that much about. I was kind of hinting on the res

Re: Memory consumed by IndexSearcher

2009-09-22 Thread Karl Wettin
Hi Mindaugas, it is - as you sort of point out - the readers associated with your searcher that consume the memory, and not so much the searcher itself. The things that consume the most memory are probably field norms (8 bits per field and document unless omitted) and flyweighted terms (Strin
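
Karl's "8 bits per field and document" figure makes norms easy to estimate up front. A minimal sketch; the 10M-document, 5-field index is a hypothetical example, not from the thread.

```java
public class NormsEstimate {
    // Per the message above: one byte (8 bits) of norms per document per
    // indexed field, unless norms are omitted for that field.
    static long normsBytes(long numDocs, int fieldsWithNorms) {
        return numDocs * (long) fieldsWithNorms;
    }

    public static void main(String[] args) {
        // Hypothetical index: 10M docs with 5 normed fields.
        System.out.println(normsBytes(10_000_000L, 5) + " bytes of norms");
    }
}
```

Omitting norms on fields that never need length normalization or index-time boosts removes that byte-per-doc cost entirely.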

Re: memory leak with CustomComparatorSource class variables

2009-06-13 Thread Yonik Seeley
When implementing your own, it also helps to look at the existing implementations in the FieldComparator class: http://svn.apache.org/viewvc/lucene/java/trunk/src/java/org/apache/lucene/search/FieldComparator.java?revision=764551 -Yonik http://www.lucidimagination.com On Sat, Jun 13, 2009 at 9:

Re: memory leak with CustomComparatorSource class variables

2009-06-13 Thread Michael McCandless
It's here: http://lucene.apache.org/java/docs/nightly/ But remember this is trunk code, ie not yet released, so stuff is still changing. Mike On Sat, Jun 13, 2009 at 9:30 AM, Marc Sturlese wrote: > > Thanks Mike, really useful info. I have dowloaded the latest Lucene 2.9-dev > to test the imp

Re: memory leak with CustomComparatorSource class variables

2009-06-13 Thread Marc Sturlese
Thanks Mike, really useful info. I have downloaded the latest Lucene 2.9-dev to test the implementation of a FieldComparatorSource but the API documentation doesn't seem to be available. I can access the class MissingStringLastComparatorSource: http://lucene.apache.org/solr/api/org/apache/solr/

Re: memory leak with CustomComparatorSource class variables

2009-06-13 Thread Michael McCandless
On Fri, Jun 12, 2009 at 6:09 PM, Marc Sturlese wrote: > I have noticed I am experiencing sort of a memory leak with a > CustomComparatorSource (which implements SortComparatorSource). > I have a HashMap declared as a class variable in CustomComparatorSource: This is unfortunately a known and rath

Re: Memory Leak?

2009-03-26 Thread Michael McCandless
OK thanks for bringing closure. Mike On Thu, Mar 26, 2009 at 8:37 AM, Chetan Shah wrote: > > Ok. I was able to conclude that I am getting OOME due to my usage of HTML > Parser to get the HTML title and HTML text. I display 10 results per page > and therefore end up calling the org.apache.luc

Re: Memory Leak?

2009-03-26 Thread Chetan Shah
Ok. I was able to conclude that I am getting OOME due to my usage of HTML Parser to get the HTML title and HTML text. I display 10 results per page and therefore end up calling the org.apache.lucene.demo.html.HTMLParser 10 times. I modified my code to store the title and html summary in the

Re: Memory Leak?

2009-03-24 Thread Paul Smith
No, I don't hit OOME if I comment out the call to getHTMLTitle. The heap behaves perfectly. I completely agree with you, the thread count goes haywire the moment I call HTMLParser.getTitle(). I have seen a thread count of like 600 before I hit OOME (with the getTitle() call on) and

Re: Memory Leak?

2009-03-24 Thread Michael McCandless
Actually, I was hoping you could try leaving the getHTML calls in, but increase the heap size of your Tomcat instance. Ie, to be sure there really is a leak vs you're just not giving the JRE enough memory. I do like your hypothesis, but looking at HTMLParser it seems like the thread should exit a

Re: Memory Leak?

2009-03-24 Thread Chetan Shah
Highly appreciate your replies, Michael. No, I don't hit OOME if I comment out the call to getHTMLTitle. The heap behaves perfectly. I completely agree with you, the thread count goes haywire the moment I call HTMLParser.getTitle(). I have seen a thread count of like 600 before I hit OOME

Re: Memory Leak?

2009-03-24 Thread Michael McCandless
Odd. I don't know of any memory leaks w/ the demo HTMLParser, hmm though it's doing some fairly scary stuff in its getReader() method. EG it spawns a new thread every time you run it. And, it's parsing the entire HTML document even though you only want the title. You may want to switch to better

Re: Memory Leak?

2009-03-24 Thread Chetan Shah
After some more researching I discovered that the following code snippet seems to be the culprit. I have to call this to get the "title" of the indexed html page. And this is called 10 times as I display 10 results on a page. Any suggestions on how to achieve this without the OOME issue.

Re: Memory Leak?

2009-03-23 Thread Michael McCandless
Is there anything else in this JRE? 65 MB ought to be plenty for what you are trying to do w/ just Lucene, I think. Though to differentiate whether "you are not giving enough RAM to Lucene" vs "you truly have a memory leak", you should try increasing the heap size to something absurdly big (256

Re: Memory Leak?

2009-03-23 Thread Chetan Shah
I am using the default heap size which according to Netbeans is around 65MB. If the RAM directory was not initialized correctly, how am I getting valid search results? I am able to execute searches for quite some time before I get OOME. Makes Sense? Or Maybe I am missing something, please let m

Re: Memory Leak?

2009-03-23 Thread Matthew Hall
Perhaps this is a simple question, but looking at your stack trace, I'm not seeing where it was set during the tomcat initialization, so here goes: Are you setting up the jvm's heap size during your Tomcat initialization somewhere? If not, that very well could be part of your issue, as the st

Re: Memory Leak?

2009-03-23 Thread Chetan Shah
The stack trace is attached. http://www.nabble.com/file/p22667542/dump dump The file size of _30.cfx - 1462KB _32.cfs - 3432KB _30.cfs - 645KB Michael McCandless-2 wrote: > > > Hmm... after how many queries do you see the crash? > > Can you post the full OOME stack trace? > > You're

Re: Memory Leak?

2009-03-23 Thread Michael McCandless
Hmm... after how many queries do you see the crash? Can you post the full OOME stack trace? You're using a RAMDirectory to hold the entire index... how large is your index? Mike Chetan Shah wrote: After reading this forum post : http://www.nabble.com/Lucene-Memory-Leak-tt19276999.html#a

Re: Memory Leak?

2009-03-23 Thread Chetan Shah
After reading this forum post : http://www.nabble.com/Lucene-Memory-Leak-tt19276999.html#a19364866 I created a Singleton For Standard Analyzer too. But the problem still persists. I have 2 singletons now. 1 for Standard Analyzer and other for IndexSearcher. The code is as follows : package w
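
The singleton approach the thread converges on can be sketched without any Lucene dependency. This is a hedged illustration of the pattern only: `ExpensiveResource` is a placeholder for the real `IndexSearcher` or `StandardAnalyzer`, which would be opened once and reused by every request.

```java
public class SearcherHolder {
    // Placeholder for an expensive, shareable object such as an IndexSearcher.
    static final class ExpensiveResource {
        final long openedAt = System.nanoTime(); // imagine opening an index here
    }

    // Static final field: the classloader guarantees exactly one
    // initialization, so this is thread-safe without any locking.
    private static final ExpensiveResource INSTANCE = new ExpensiveResource();

    static ExpensiveResource get() {
        return INSTANCE;
    }
}
```

Every caller gets the same instance, which is the point: searchers are thread-safe and expensive to open, so one shared instance per index is the usual arrangement, closed only when the application shuts down or the index is reopened.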

Re: Memory Leak?

2009-03-23 Thread Chetan Shah
No, I have a singleton from where I get my searcher and it is kept throughout the application. Michael McCandless-2 wrote: > > > Are you not closing the IndexSearcher? > > Mike > > Chetan Shah wrote: > >> >> I am initiating a simple search and after profiling my >> application using

Re: Memory Leak?

2009-03-23 Thread Michael McCandless
Are you not closing the IndexSearcher? Mike Chetan Shah wrote: I am initiating a simple search and after profiling my application using NetBeans, I see a constant heap consumption and eventually a server (tomcat) crash due to an "out of memory" error. The thread count also keeps on inc

Re: Memory during Indexing

2009-03-12 Thread Grant Ingersoll
On Mar 12, 2009, at 10:47 AM, Niels Ott wrote: Michael McCandless schrieb: When RAM is full, IW flushes the pending changes to disk, but does not commit them, meaning external (newly opened or reopened) readers will not see the changes. Is there a built-in mechanism in the IndexReader to

Re: Memory during Indexing

2009-03-12 Thread Niels Ott
Michael McCandless schrieb: When RAM is full, IW flushes the pending changes to disk, but does not commit them, meaning external (newly opened or reopened) readers will not see the changes. Is there a built-in mechanism in the IndexReader to reload the index every now and then, after having c

Re: Memory during Indexing

2009-03-12 Thread Michael McCandless
Niels Ott wrote: Hi Mark, markharw00d schrieb: Hi Niels, See the javadocs for IndexWriter.setRAMBufferSizeMB() I tried different settings. Apart from the fact that my memory issue seems to be my own fault, I'm wondering what Lucene does in the background. Apparently it does flush(), but

Re: Memory during Indexing

2009-03-11 Thread Niels Ott
Hi Mark, markharw00d schrieb: Hi Niels, See the javadocs for IndexWriter.setRAMBufferSizeMB() I tried different settings. Apart from the fact that my memory issue seems to be my own fault, I'm wondering what Lucene does in the background. Apparently it does flush(), but not commit()? At le

Re: Memory during Indexing

2009-03-11 Thread markharw00d
Hi Niels, See the javadocs for IndexWriter.setRAMBufferSizeMB() Cheers Mark Niels Ott wrote: Hi Lucene professionals! This may sound like a dumb beginner's question, but anyways: Can Lucene run out of memory during indexing? Should I use IndexWriter.flush() or .commit(), and if so, how ofte

Re: Memory Eaten up by TermInfo Instances in Lucene 2.4

2009-02-10 Thread Michael McCandless
chanchitodata wrote: I actually don't hit OOM. The memory gets 100% full and the JVM hangs. Is it GC'ing during this hang? Can you try reducing your heap size down a lot and see if the GC runs faster? (Or, if you can provoke an OOM). How large is your heap now? Independently what type

Re: Memory Eaten up by TermInfo Instances in Lucene 2.4

2009-02-10 Thread chanchitodata
Hi Michael, I actually don't hit OOM. The memory gets 100% full and the JVM hangs, independent of what type of GC algorithm I use. I have tried all sorts of JVM GC flags. Profiling the application with YourKit I can see that the TermInfo instances do not get freed up when the GC is done. The appl

Re: Memory Eaten up by TermInfo Instances in Lucene 2.4

2009-02-10 Thread Michael McCandless
Your index has relatively few terms: ~13 million. Lucene stores TermInfo instances in two places. The first place is a persistent array, called the terms index, of every 128th term. It's created when the IndexReader is first opened. So in your case this is ~100.000 ("100 thousand") instances.
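
Michael's "~100 thousand" figure falls straight out of the default terms index interval he describes, which is worth having as a formula when sizing heaps for large term dictionaries.

```java
public class TermsIndexEstimate {
    // Per the message above: the in-memory terms index holds every 128th
    // term (the default index interval in Lucene of this era).
    static long indexedTerms(long totalTerms) {
        return totalTerms / 128;
    }

    public static void main(String[] args) {
        // ~13 million terms -> roughly 100 thousand resident TermInfo entries.
        System.out.println(indexedTerms(13_000_000L) + " in-memory TermInfo entries");
    }
}
```

If TermInfo instances grow far beyond this estimate per open reader, the usual suspect is readers that are reopened but never closed.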

Re: Memory Eaten up by TermInfo Instances in Lucene 2.4

2009-02-10 Thread chanchitodata
Hi Michael, I'm pretty sure that the IndexReaders are being closed. As I said I use Compass and Compass handles all the IndexReader stuff for me. I have discussed this issue with Shay Banon for a while in the Compass forum and he was the guy that led me to this forum after several different tes

Re: Memory Eaten up by TermInfo Instances in Lucene 2.4

2009-02-09 Thread Michael McCandless
Are you certain that old IndexReaders are being closed? If you are not using CFS file format, how large are your *.tii files? If you are using CFS file format, can you run CheckIndex on your index and post the output? This way we can see how many terms are in the index (which is what get

Re: memory leak getting docs

2008-11-05 Thread Erick Erickson
That's also why your app runs so slowly, opening an IndexReader is a very expensive operation, doing it for every doc is exceedingly bad... Best Erick On Wed, Nov 5, 2008 at 3:21 PM, bruno da silva <[EMAIL PROTECTED]> wrote: > Hello Marc > I'd suggest you create the IndexSearcher outside of your

Re: memory leak getting docs

2008-11-05 Thread bruno da silva
Hello Marc, I'd suggest you create the IndexSearcher outside of your method and pass the indexreader as a parameter, like: private Document getDocumentData(IndexReader reader, String id). You don't have a memory leak, you have intensive use of memory. On Wed, Nov 5, 2008 at 3:11 PM, Marc S

Re: Memory problem dealing with indexsearcher and topdocs

2008-10-27 Thread Erick Erickson
Are you opening/closing your searcher and writer for each document? If so, it sounds like you're not closing all of them appropriately and that would be the cause of your memory increase. But you shouldn't have to do that anyway. Why not just use the same IndexReader to search and delete all your d

RE: Memory eaten up by String, Term and TermInfo?

2008-10-06 Thread Edwin Lee
will try them instead of my own GC thread to > see whether the problem can also be solved. > > Thanks Brian! > > Regards, > Gong > > > -Original Message- > > From: Beard, Brian [mailto:[EMAIL PROTECTED] > > Sent: Monday, October 06, 2008 8:48

RE: Memory eaten up by String, Term and TermInfo?

2008-10-06 Thread Peter Cheng
Gong > -Original Message- > From: Beard, Brian [mailto:[EMAIL PROTECTED] > Sent: Monday, October 06, 2008 8:48 PM > To: java-user@lucene.apache.org > Subject: RE: Memory eaten up by String, Term and TermInfo? > > I played around with GC quite a bit in our app and

RE: Memory eaten up by String, Term and TermInfo?

2008-10-06 Thread Beard, Brian
hotspot/gc/index.jsp -Original Message- From: Peter Cheng [mailto:[EMAIL PROTECTED] Sent: Sunday, October 05, 2008 7:55 AM To: java-user@lucene.apache.org Subject: RE: Memory eaten up by String, Term and TermInfo? I have confirmed that the OutOfMemoryError is not Lucene's problem. It

RE: Memory eaten up by String, Term and TermInfo?

2008-10-05 Thread Peter Cheng
essage- > From: Michael McCandless [mailto:[EMAIL PROTECTED] > Sent: Sunday, September 14, 2008 10:28 PM > To: java-user@lucene.apache.org > Subject: Re: Memory eaten up by String, Term and TermInfo? > > > Small correction: it was checked in this morning (at least, on the

RE: Memory eaten up by String, Term and TermInfo?

2008-09-14 Thread Peter Cheng
I'll try later and report back ASAP. You know, it takes days to cause OOM. Thank you all! Gong > -Original Message- > From: Michael McCandless [mailto:[EMAIL PROTECTED] > Sent: Sunday, September 14, 2008 10:28 PM > To: java-user@lucene.apache.org > Subject: Re: Memor

Re: Memory eaten up by String, Term and TermInfo?

2008-09-14 Thread Michael McCandless
Small correction: it was checked in this morning (at least, on the East Coast of the US). So you need to either build your own JAR using Lucene's trunk, or, wait for tonite's build to run and then download the build artifacts from here: http://hudson.zones.apache.org/hudson/job/Luce

Re: Memory eaten up by String, Term and TermInfo?

2008-09-14 Thread Chris Lu
Can you try to update to the latest Lucene svn version, like yesterday? LUCENE-1383 was checked in yesterday. This patch is addressing a leak problem particular to J2EE applications. -- Chris Lu - Instant Scalable Full-Text Search On Any Database/Application site: http://w

Re: memory leak during Lucene Search

2008-09-09 Thread Chris Lu
Thanks for the link! I will post the problem there. In the meantime, any J2EE application developers should know this problem and try to avoid Lucene checked out on or after May 23, 2008, svn version 659602. I tried svn 659601, which worked fine. I will follow up on this email list when the proble

Re: memory leak during Lucene Search

2008-09-09 Thread Grant Ingersoll
Just chipping in that I recall there being a number of discussions on java-dev about ThreadLocal and web containers and how they should be handled. Not sure if it pertains here or not, but you might find http://lucene.markmail.org/message/keosgz2c2yjc7qre?q=ThreadLocal helpful. You might a

Re: Memory leaks during indexing.

2008-07-22 Thread Michael McCandless
Can you post the Python sources of the Lucene part of your application? One thing to check is how the JRE is being instantiated from Python, ie, what the equivalent setting is for -Xmx (= max heap size). It's possible the 140 MB consumption is actually "OK" as far as the JRE is concerned,

Re: Memory Usage

2008-07-03 Thread Keith Watson
Thanks very much for this; I'll give it a shot. Keith. On 4 Jul 2008, at 00:02, Paul Smith wrote: (there are around 6,000,000 posts on the message board database) Date encoded as yyMMdd: appears to be using around 30M Date encoded as yyMMddHHmmss: appears to be using more than 400M! I g

Re: Memory Usage

2008-07-03 Thread Paul Smith
(there are around 6,000,000 posts on the message board database) Date encoded as yyMMdd: appears to be using around 30M Date encoded as yyMMddHHmmss: appears to be using more than 400M! I guess I would have understood if I was seeing the usage double for sure, or even a little more; no idea
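
The jump from ~30M to ~400M follows from term cardinality, not term length. A hedged sketch of the two encodings discussed; the helper method is illustrative, not from the thread.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateGranularity {
    // Length of the current date formatted with the given pattern.
    static int formattedLength(String pattern) {
        return new SimpleDateFormat(pattern).format(new Date()).length();
    }

    public static void main(String[] args) {
        Date now = new Date();
        // Day granularity: at most ~366 distinct terms per year of data.
        String coarse = new SimpleDateFormat("yyMMdd").format(now);
        // Second granularity: up to ~31.5 million distinct terms per year --
        // vastly more unique terms to store, sort on, and cache.
        String fine = new SimpleDateFormat("yyMMddHHmmss").format(now);
        System.out.println(coarse + "  vs  " + fine);
    }
}
```

The term strings only double in length, but the number of *distinct* terms explodes, and term-dictionary and sort-cache memory scale with distinct terms, which is why the coarser encoding is so much cheaper.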

Re: Memory Leak when using Custom Sort (i.e., DistanceSortSource) of LocalLucene with Lucene

2008-06-10 Thread Otis Gospodnetic
Hi Ethan, Yes, it would be good to have this in JIRA. Please see http://wiki.apache.org/lucene-java/HowToContribute for info about generating the patch, etc. Thanks, Otis -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch - Original Message > From: Ethan Tao <[EMAIL PROTEC

Re: Memory leak (JVM 1.6 only)

2007-05-20 Thread Stephen Gray
Thanks, the link was helpful. I'll let you know if I find anything. Thanks for all the replies to this. Steve Doron Cohen wrote: Stephen Gray wrote: Thanks. If the extra memory allocated is native memory I don't think jconsole includes it in "non-heap" as it doesn't show this as increasin

Re: Memory leak (JVM 1.6 only)

2007-05-18 Thread Doron Cohen
Stephen Gray wrote: > Thanks. If the extra memory allocated is native memory I don't think > jconsole includes it in "non-heap" as it doesn't show this as > increasing, and jmap/jhat just dump/analyse the heap. Do you know of an > application that can report native memory usage? Sorry, but I didn

Re: Memory leak (JVM 1.6 only)

2007-05-18 Thread Bill Au
I actually had to deal with a leak in non-heap native memory once. I am running on Linux so I just use good old "ps" to monitor native memory usage. Bill On 5/18/07, Stephen Gray <[EMAIL PROTECTED]> wrote: Thanks. If the extra memory allocated is native memory I don't think jconsole includes

Re: Memory leak (JVM 1.6 only)

2007-05-17 Thread Stephen Gray
Thanks. If the extra memory allocated is native memory I don't think jconsole includes it in "non-heap" as it doesn't show this as increasing, and jmap/jhat just dump/analyse the heap. Do you know of an application that can report native memory usage? Thanks, Steve Doron Cohen wrote: Stephen

Re: Memory leak (JVM 1.6 only)

2007-05-17 Thread Doron Cohen
Stephen Gray <[EMAIL PROTECTED]> wrote on 17/05/2007 22:40:01: > One interesting thing is that although the memory allocated as > reported by the processes tab of Windows Task Manager goes up and up, > and the JVM eventually crashes with an OutOfMemory error, the total size > of heap + non-heap as

Re: Memory leak (JVM 1.6 only)

2007-05-17 Thread Stephen Gray
Hi Otis, Thanks very much for your reply. I've removed the LuceneIndexAccessor code, and still have the same problem, so that at least rules out LuceneIndexAccessor as the source. maxBufferedDocs is just set to the default, which I believe is 10. I've tried jconsole, + jmap/jhat for looking

Re: Memory leak (JVM 1.6 only)

2007-05-17 Thread Otis Gospodnetic
Hi Steve, You said the OOM happens only when you are indexing. You don't need LuceneIndexAccess for that, so get rid of that to avoid one suspect that is not part of Lucene core. What is your maxBufferedDocs set to? And since you are using JVM 1.6, check out jmap, jconsole & friends, they'll

Re: Memory leak (JVM 1.6 only)

2007-05-16 Thread Antony Bowesman
Daniel Noll wrote: On Tuesday 15 May 2007 21:59:31 Narednra Singh Panwar wrote: try using -Xmx option with your Application. and specify maximum/ minimum memory for your Application. It's funny how a lot of people instantly suggest this. What if it isn't possible? There was a situation a wh

Re: Memory leak (JVM 1.6 only)

2007-05-15 Thread Stephen Gray
Thanks, that narrows it down a bit. Thanks for all the replies to my question. Steve Mark Miller wrote: I don't have much help to offer other than to say I am also using a tweaked version of the IndexAccess code you are, with java 1.6, with hundreds of thousands to millions of docs, at multip

Re: Memory leak (JVM 1.6 only)

2007-05-15 Thread Daniel Noll
On Tuesday 15 May 2007 21:59:31 Narednra Singh Panwar wrote: > try using -Xmx option with your Application. and specify maximum/ minimum > memory for your Application. It's funny how a lot of people instantly suggest this. What if it isn't possible? There was a situation a while back where I sa

Re: Memory leak (JVM 1.6 only)

2007-05-15 Thread Narednra Singh Panwar
Try using the -Xmx option with your application, and specify maximum/minimum memory for your application. Hope this will solve your problem. On 5/15/07, Stephen Gray <[EMAIL PROTECTED]> wrote: Hi everyone, I have an application that indexes/searches xml documents using Lucene. I'm having a prob
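
For readers unfamiliar with the flags being suggested, a minimal launch line looks like this. The sizes and `app.jar` are placeholders, not values from the thread; pick numbers that fit your index and machine.

```shell
# -Xms sets the initial heap, -Xmx the ceiling the JVM may grow to.
# "app.jar" is a hypothetical stand-in for the actual application.
java -Xms256m -Xmx1g -jar app.jar
```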

Re: Memory leak (JVM 1.6 only)

2007-05-15 Thread Mark Miller
I don't have much help to offer other than to say I am also using a tweaked version of the IndexAccess code you are, with java 1.6, with hundreds of thousands to millions of docs, at multiple locations, for months -- and I have not seen any memory leaks. Leads me to think the leak may be with y

Re: memory consumption on large indices

2007-03-14 Thread Tim Patton
I'm searching a 20GB index and my searching JVM is allocated 1Gig. However, my indexing app only had 384mb available to it, which means you can get away with far less. I believe certain index tables will need to be swapped in and out of memory though so it may not search as quickly. With a 1.

Re: memory consumption on large indices

2007-03-14 Thread Ian Lea
When your app gets a java.lang.OutOfMemory exception. -- Ian. On 3/14/07, Dennis Berger <[EMAIL PROTECTED]> wrote: Ian Lea schrieb: > No, you don't need 1.8Gb of memory. Start with default and raise if > you need to? how do I know when I need it? > Or jump straight in at about 512Mb. > > > -
