Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-01 Thread Dawid Weiss
On Thu, Aug 2, 2012 at 8:32 AM, roz dev wrote: > wow!! That was quick. > > Thanks a ton. > > > On Wed, Aug 1, 2012 at 11:07 PM, Simon Willnauer > wrote: > >> On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote: >> > Thanks Robert for th

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-01 Thread roz dev
wow!! That was quick. Thanks a ton. On Wed, Aug 1, 2012 at 11:07 PM, Simon Willnauer wrote: > On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote: > > Thanks Robert for these inputs. > > > > Since we do not really need the Snowball analyzer for this field, we would not use > > it for now. If this still does

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-01 Thread Simon Willnauer
On Thu, Aug 2, 2012 at 7:53 AM, roz dev wrote: > Thanks Robert for these inputs. > > Since we do not really need the Snowball analyzer for this field, we would not use > it for now. If this still does not address our issue, we would tweak the thread > pool as per eks dev's suggestion - I am a bit hesitant to do th

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-01 Thread roz dev
Thanks Robert for these inputs. Since we do not really need the Snowball analyzer for this field, we would not use it for now. If this still does not address our issue, we would tweak the thread pool as per eks dev's suggestion - I am a bit hesitant to do this change yet as we would be reducing the thread pool which c
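A minimal sketch of the "smaller thread pool" idea discussed above, assuming indexing work is funneled through an application-side executor (the class name and the pool size of 4 are made up for illustration). The point is simply that fewer worker threads means fewer per-thread analyzer copies retained on the heap:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class BoundedIndexingPool {
        public static void main(String[] args) throws InterruptedException {
            // A fixed pool caps how many threads ever touch the analyzer,
            // and therefore how many per-thread copies can accumulate.
            ExecutorService indexers = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 100; i++) {
                final int docId = i;
                indexers.submit(new Runnable() {
                    public void run() {
                        // analyze and index document docId here
                    }
                });
            }
            indexers.shutdown();
            indexers.awaitTermination(1, TimeUnit.HOURS);
        }
    }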

Re: Memory leak?? with CloseableThreadLocal with use of Snowball Filter

2012-08-01 Thread Robert Muir
On Tue, Jul 31, 2012 at 2:34 PM, roz dev wrote: > Hi All > > I am using Solr 4 from trunk with Tomcat 6. I am noticing that > when we are indexing lots of data with 16 concurrent threads, the heap grows > continuously. It remains high and ultimately most of the stuff ends up > being moved
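For readers new to the class in the subject line, the sketch below (a hypothetical helper, not Lucene source code) shows the caching pattern that org.apache.lucene.util.CloseableThreadLocal is used for inside Analyzer. Each thread that calls get() keeps its own copy of the cached object until close() is called or the thread dies, so heavyweight per-thread state (such as Snowball stemmer tables) multiplied by 16 long-lived container threads can account for the heap growth described above:

    import java.io.Closeable;
    import org.apache.lucene.util.CloseableThreadLocal;

    public class PerThreadComponents<T> implements Closeable {
        public interface Factory<T> { T create(); }

        private final CloseableThreadLocal<T> cache = new CloseableThreadLocal<T>();
        private final Factory<T> factory;

        public PerThreadComponents(Factory<T> factory) {
            this.factory = factory;
        }

        /** Returns this thread's cached instance, creating it on first use. */
        public T get() {
            T components = cache.get();
            if (components == null) {
                components = factory.create(); // one copy per calling thread
                cache.set(components);
            }
            return components;
        }

        /** Releases every thread's cached copy. */
        public void close() {
            cache.close();
        }
    }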

Re: Lucene vs SQL.

2012-08-01 Thread Konstantyn Smirnov
If you tokenize AND store fields in your document, you can always pull them and re-invert using another analyzer, so you don't need to store the "original data" somewhere else. The point is rather the performance. I started a discussion on that topic http://lucene.472066.n3.nabble.com/Performance
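A minimal sketch of the "pull the stored value and re-invert it with another analyzer" idea, assuming a Lucene 4.x index in which the field was both tokenized and stored (the field name "body" is made up for the example, and liveDocs/deleted-document handling is omitted for brevity):

    import java.io.StringReader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.store.Directory;

    public class ReInvertStoredField {
        static void reAnalyze(Directory dir, Analyzer newAnalyzer) throws Exception {
            DirectoryReader reader = DirectoryReader.open(dir);
            try {
                for (int docId = 0; docId < reader.maxDoc(); docId++) {
                    Document doc = reader.document(docId);
                    String stored = doc.get("body"); // the stored original text
                    if (stored == null) continue;
                    // Re-tokenize the stored value with the new analyzer.
                    TokenStream ts = newAnalyzer.tokenStream("body", new StringReader(stored));
                    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
                    ts.reset();
                    while (ts.incrementToken()) {
                        // feed term.toString() into a new index, highlighter, etc.
                    }
                    ts.end();
                    ts.close();
                }
            } finally {
                reader.close();
            }
        }
    }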

Re: is there a way to control when merges happen?

2012-08-01 Thread Konstantyn Smirnov
Hi Mike. I have a LogDocMergePolicy + ConcurrentMergeScheduler in my setup. I tried adding new segments of 800-5000 documents each, one after another, but the scheduler seemed to ignore them at first... only after some time did it merge some of them. I have an option to use a quartz-sch
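A minimal sketch of the setup described above, assuming Lucene 4.x: LogDocMergePolicy decides what gets merged (based on document counts), while ConcurrentMergeScheduler decides when and on which threads the merges actually run. An external timer (quartz or otherwise) can only nudge the writer via maybeMerge() or forceMerge(); it does not schedule individual merges itself. The mergeFactor and minMergeDocs values below are illustrative, not recommendations:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.ConcurrentMergeScheduler;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.LogDocMergePolicy;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.RAMDirectory;
    import org.apache.lucene.util.Version;

    public class MergeSetup {
        public static void main(String[] args) throws Exception {
            Directory dir = new RAMDirectory();

            LogDocMergePolicy mp = new LogDocMergePolicy();
            mp.setMergeFactor(10);    // merge once ~10 segments of the same level exist
            mp.setMinMergeDocs(1000); // segments below this doc count share the lowest level

            IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_40,
                    new StandardAnalyzer(Version.LUCENE_40));
            iwc.setMergePolicy(mp);
            iwc.setMergeScheduler(new ConcurrentMergeScheduler());

            IndexWriter writer = new IndexWriter(dir, iwc);
            // ... addDocument / addIndexes calls happen elsewhere ...
            writer.maybeMerge(); // what a periodic (e.g. quartz-driven) job could call
            writer.close();
        }
    }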