One of the folks at OpenSource Connections put a Lucene derivative on an Intel
Edison (?) (a tiny board computer, Raspberry Pi-like) a couple of years ago. That
project might have something to offer you in terms of ideas.
Sorry, I was only able to find a link for their project doing the same for Cassandra.
Hi,
In Lucene 7.7 you can use ByteBuffersDirectory. The default constructor behaves
like RAMDirectory (it allocates on the heap), but it has much better concurrency and
garbage collection behaviour (no millions of byte[8192] instances holding the
data):
http://lucene.apache.org/core/7_7_0/core/org/apache/
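A rough sketch of how a fixed, read-only index could be copied into it at startup
(untested; this assumes Lucene 7.7+, and /path/to/index is just a placeholder):

    import java.nio.file.Paths;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.ByteBuffersDirectory;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.IOContext;

    public class LoadIndexIntoHeap {
        public static void main(String[] args) throws Exception {
            // Source: the existing on-disk index; the path here is only a placeholder.
            try (Directory onDisk = FSDirectory.open(Paths.get("/path/to/index"))) {
                // Destination: heap-resident directory with friendlier GC behaviour than RAMDirectory.
                Directory inHeap = new ByteBuffersDirectory();
                for (String file : onDisk.listAll()) {
                    if (file.equals(IndexWriter.WRITE_LOCK_NAME)) {
                        continue; // skip the lock file, it is not part of the index data
                    }
                    inHeap.copyFrom(onDisk, file, file, IOContext.READONCE);
                }
                try (DirectoryReader reader = DirectoryReader.open(inHeap)) {
                    IndexSearcher searcher = new IndexSearcher(reader);
                    // ... run queries against 'searcher' here ...
                }
            }
        }
    }

The whole index has to fit on the heap, of course, so -Xmx needs to be comfortably
larger than the index size.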
Thanks for the input.
I am not using Solr.
Also, my index has a fixed size; I am not going to update it.
-Original Message-
From: googoo [mailto:liu...@gmail.com]
Sent: 18 July 2012 15:21
To: java-user@lucene.apache.org
Subject: Re: In memory Lucene configuration
Doron,
To verify
I had a threading issue in the client code calling Lucene, really nothing that
has anything to do with this list :)
-Original Message-
From: Simon Willnauer [mailto:simon.willna...@gmail.com]
Sent: 18 July 2012 21:48
To: java-user@lucene.apache.org
Subject: Re: In memory Lucene
Hi,
just to clarify:
> In addition, I don't think loading the whole index into memory is a good idea,
> since the index size will always increase.
> For me, I changed the Lucene code to disable MMapDirectory, since the index
> is getting bigger and bigger.
> And MMapDirectory will call something like C++ shared memory.
> From: Doron Yaacoby [mailto:dor...@gingersoftware.com]
> Sent: 16 July 2012 09:43
> To: java-user@lucene.apache.org
> Subject: RE: In memory Lucene configuration
>
> I haven't tried that yet, but it's an option. The reason I'm waiting on this
> is that I am expecting many concurrent requests to my application anyway, so
> having multiple search threads per request might not be the best idea.
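(Side note on the MMapDirectory point quoted above: there is usually no need to patch
Lucene to avoid it. FSDirectory.open only picks a default implementation, and you can
construct the one you want directly. A minimal sketch, assuming the modern Path-based
API and a placeholder index path, with NIOFSDirectory chosen purely as an example:)

    import java.nio.file.Paths;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.NIOFSDirectory;

    public class ChooseDirectoryExplicitly {
        public static void main(String[] args) throws Exception {
            // FSDirectory.open(path) would pick MMapDirectory on most 64-bit JVMs.
            // To avoid memory mapping without touching Lucene's code, construct the
            // implementation you want yourself (NIOFSDirectory here, as an example;
            // the path is a placeholder):
            Directory dir = new NIOFSDirectory(Paths.get("/path/to/index"));
            try (DirectoryReader reader = DirectoryReader.open(dir)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                // Note: mmap consumes virtual address space, not Java heap, so a growing
                // index does not by itself require a bigger -Xmx.
            }
        }
    }

That avoids maintaining a patched Lucene build just to change the directory implementation.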
Doron,
To verify the actual query speed, I think you need to:
1) not run the indexing job
2) in solrconfig.xml, set the filterCache and queryResultCache sizes to 0 (see the snippet after this list)
3) restart Solr
4) run the query and check the QTime result
That may give you some idea of the actual query time.
To break down the query time, you
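For step 2, the caches are configured in the <query> section of solrconfig.xml; a
sketch of what disabling them might look like (the attribute values here are only an
example):

    <query>
      <!-- example values only: turn off result caching so QTime reflects the raw query cost -->
      <filterCache class="solr.FastLRUCache" size="0" initialSize="0" autowarmCount="0"/>
      <queryResultCache class="solr.LRUCache" size="0" initialSize="0" autowarmCount="0"/>
    </query>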
To: java-user@lucene.apache.org
Subject: RE: In memory Lucene configuration
I haven't tried that yet, but it's an option. The reason I'm waiting on this is
that I am expecting many concurrent requests to my application anyway, so
having multiple search threads per request might not be the best idea.
> From: Vitaly Funstein [mailto:vfunst...@gmail.com]
> Sent: 16 July 2012 08:26
> To: java-user@lucene.apache.org
> Subject: Re: In memory Lucene configuration
>
> Have you tried sharding your data? Since you have a fast multi-core box, why
> not split your indices N-ways, say the smaller one into 4, and the larger
> into 8. Then you can have a pool of dedi
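One way this sharding idea could look in code (a sketch only: the shard paths and pool
size are made up, and MultiReader plus an executor-backed IndexSearcher is just one
possible realization of a "pool of dedicated searchers"):

    import java.nio.file.Paths;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.MultiReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.FSDirectory;

    public class ShardedSearch {
        public static void main(String[] args) throws Exception {
            // Hypothetical shard locations produced by splitting one large index four ways.
            String[] shardPaths = {"/data/shard0", "/data/shard1", "/data/shard2", "/data/shard3"};
            IndexReader[] readers = new IndexReader[shardPaths.length];
            for (int i = 0; i < shardPaths.length; i++) {
                readers[i] = DirectoryReader.open(FSDirectory.open(Paths.get(shardPaths[i])));
            }
            // One logical view over all shards; closing it closes the sub-readers too.
            MultiReader all = new MultiReader(readers);
            // A fixed pool of search threads; the searcher fans each query out across slices.
            ExecutorService pool = Executors.newFixedThreadPool(shardPaths.length);
            IndexSearcher searcher = new IndexSearcher(all, pool);
            // ... searcher.search(query, 10) ...
            pool.shutdown();
            all.close();
        }
    }

For plain top-N queries the MultiReader takes care of rebasing doc IDs and merging
results across the shards.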
From: Vitaly Funstein [mailto:vfunst...@gmail.com]
Sent: 16 July 2012 08:26
To: java-user@lucene.apache.org
Subject: Re: In memory Lucene configuration
Have you tried sharding your data? Since you have a fast multi-core box, why
not split your indices N-ways, say the smaller one into 4, and the larger into
From: Doron Yaacoby [mailto:dor...@gingersoftware.com]
Sent: 15 July 2012 13:40
To: java-user@lucene.apache.org; simon.willna...@gmail.com
Subject: RE: In memory Lucene configuration
Thanks for the quick input!
I ran a few more tests with your suggested configuration (-Xmx1G -Xms1G with
MMapDirectory). The third time I ran the same test I fi
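If the later runs come out faster, that is usually the OS page cache warming up as
MMapDirectory reads the index. A quick way to watch the effect (sketch; the index path
and the query are placeholders):

    import java.nio.file.Paths;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.store.MMapDirectory;

    public class WarmupTiming {
        public static void main(String[] args) throws Exception {
            // Placeholder index path; the same query is repeated to expose cache warm-up.
            try (DirectoryReader reader = DirectoryReader.open(
                    new MMapDirectory(Paths.get("/path/to/index")))) {
                IndexSearcher searcher = new IndexSearcher(reader);
                TermQuery query = new TermQuery(new Term("body", "lucene")); // placeholder query
                for (int run = 1; run <= 5; run++) {
                    long start = System.nanoTime();
                    searcher.search(query, 10);
                    long ms = (System.nanoTime() - start) / 1_000_000;
                    System.out.println("run " + run + ": " + ms + " ms");
                }
            }
        }
    }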
> Sent: 15 July 2012 11:56
> To: java-user@lucene.apache.org
> Subject: Re: In memory Lucene configuration
>
> hey there,
>
> On Sun, Jul 15, 2012 at 10:41 AM, Doron Yaacoby
> wrote:
>> Hi, I have the following situation:
>>
>> I have two pretty large indices
I didn't mention before that I'm using Lucene 3.5 and Java 1.7.
-Original Message-
From: Simon Willnauer [mailto:simon.willna...@gmail.com]
Sent: 15 July 2012 11:56
To: java-user@lucene.apache.org
Subject: Re: In memory Lucene configuration
hey there,
On Sun, Jul 15, 2012 at 10:41 AM, Doron Yaacoby
wrote:
> Hi, I have the following situation:
>
> I have two pretty large indices. One consists of about 1 billion documents
> (takes ~6GB on disk) and the other has about 2 billion documents (~10GB on
> disk). The documents are very short