Hi, interesting discussion.
Suppose my index is now 1 TB, and I split it across 16 hard disks (about 65 GB
per disk) in the same machine with 16 cores.
Is ParallelMultiSearcher a good idea for this structure? Will results be
fast? Is there a better solution for this setup?
thanks,
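For illustration, a minimal sketch of what ParallelMultiSearcher over 16
local shards could look like with the Lucene 2.9-era API. The shard paths,
field name, and query are placeholders, not from this thread:

    import java.io.File;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.*;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class ShardSearch {
        public static void main(String[] args) throws Exception {
            // One read-only searcher per disk, e.g. /disk0/index .. /disk15/index.
            Searchable[] shards = new Searchable[16];
            for (int i = 0; i < shards.length; i++) {
                shards[i] = new IndexSearcher(
                        FSDirectory.open(new File("/disk" + i + "/index")), true);
            }
            // ParallelMultiSearcher runs the query against every shard in its
            // own thread and merges the results, so 16 disks and 16 cores can
            // seek in parallel instead of serializing on one monster file.
            Searcher searcher = new ParallelMultiSearcher(shards);
            Query q = new QueryParser(Version.LUCENE_29, "body",
                    new StandardAnalyzer(Version.LUCENE_29)).parse("test");
            TopDocs top = searcher.search(q, 10);
            System.out.println("hits: " + top.totalHits);
        }
    }

The thread-per-shard design is what makes this a fit for the 16-disk,
16-core box: each disk seeks independently while one core merges.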
On Fri, 2009-10-23 at 08:49 +0200, Jake Mannix wrote:
> One of the big problems you'll run into with this index size is that
> you'll never have enough RAM to give your OS's IO cache enough room to keep
> much of this index in memory, so you're going to be seeking in this monster
> file a lot. [...]
Thanks Jake.
I have around 75 TB of data to be indexed. So even though I do the sharding,
individual index file sizes might still be pretty high. That's why I wanted
to find out whether there is any limit as such, and obviously whether such
huge index files can be searched at all.
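For what it's worth, a minimal sketch of hash-based sharding at index time,
assuming the Lucene 2.9-era API; the shard paths, shard count, and key field
are made up for illustration:

    import java.io.File;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class ShardedIndexer {
        private final IndexWriter[] writers;

        public ShardedIndexer(int shardCount) throws Exception {
            writers = new IndexWriter[shardCount];
            for (int i = 0; i < shardCount; i++) {
                writers[i] = new IndexWriter(
                        FSDirectory.open(new File("/index/shard" + i)),
                        new StandardAnalyzer(Version.LUCENE_29),
                        IndexWriter.MaxFieldLength.UNLIMITED);
            }
        }

        // Route each document to a shard by a stable hash of its key, so no
        // single index has to hold everything and shards stay roughly even.
        public void add(String key, Document doc) throws Exception {
            writers[(key.hashCode() & 0x7fffffff) % writers.length]
                    .addDocument(doc);
        }
    }

With enough shards, each individual index (and each file inside it) stays
well below the sizes being worried about here.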
On Thu, Oct 22, 2009 at 10:29 PM, Hrishikesh Agashe <hrishikesh_aga...@persistent.co.in> wrote:
> Can I create an index file with a very large size, like 1 TB or so? Is there
> any limit on how large an index file one can create? Also, will I be able to
> search this 1 TB index file at all?
>
: Subject: Maximum index file size
: References:
: In-Reply-To:
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please do not reply to
an existing message; instead, start a fresh email. Even if you change the
subject line, the mail headers above still tie your message to the old
thread. [...]
Hi,
I am running Ubuntu 9.04 on a 64-bit machine with a NAS of 100 TB capacity.
The JVM is running with 2.5 GB Xmx.
Can I create an index file with a very large size, like 1 TB or so? Is there
any limit on how large an index file one can create? Also, will I be able to
search this 1 TB index file at all?
[...] don't worry about this unless your operating system is a problem. What
are you running on?
Erick
On 8/13/07, rohit saini <[EMAIL PROTECTED]> wrote:
>
> Hi all,
>
> I have a large amount of data to be indexed, and it may exceed an index file
> size of 2 GB. The Lucene FAQ says that if the index file size increases to
> 2 GB there will be problems [...]
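On the 2 GB point: a minimal sketch of keeping the individual files an index
is made of small, against the Lucene 2.2-era API of the time; the path and
the document cap are arbitrary:

    import java.io.File;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class SmallFilesIndexer {
        public static void main(String[] args) throws Exception {
            IndexWriter writer = new IndexWriter(
                    new File("/path/to/index"), new StandardAnalyzer(), true);
            // Cap how many documents a merged segment may contain, which
            // also bounds how large that segment's files can grow.
            writer.setMaxMergeDocs(1000000);
            // Without compound files, a segment is several smaller files
            // (.frq, .prx, ...) instead of one large .cfs file.
            writer.setUseCompoundFile(false);
            writer.close();
        }
    }

Whether a 2 GB file is actually a problem depends on the filesystem, which
is why the OS question above matters.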
Hi all,
I have a large amount of data to be indexed, and it may exceed an index file
size of 2 GB. The Lucene FAQ says that if the index file size increases to
2 GB there will be problems, but it says to make an index subdirectory in
this case. I have tried to do so and made an index subdirectory in the main
index directory [...]
On 3/28/07, Scott Oshima <[EMAIL PROTECTED]> wrote:
Just adding
5% more stored data (unindexed, of course) pushes us over some sort of
threshold, causing performance to tank.
-Original Message-
From: Erik Hatcher [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 28, 2007 12:46 PM
To: java-user@lucene.apache.org
Subject: Re: index file size threshold affecting search performance?
I've just built a 9.3G index (admittedly tons of stored data in
there, 3.3M documents) and performance is amazing (through Solr).
Erik
On Mar 28, 2007, at 3:11 PM, Erick Erickson wrote:
This surprises me; I'm currently working with a 4G index, and the
improvement from when it was an 8G index was only 10% or so.
And it's plenty speedy.
Are you hitting hardware limitations and perhaps swapping like
crazy? In which case, unless you split things across several
machines, I doubt it will help.
So I assumed a linear decay of performance as an index got bigger.
For some reason, going from an index size of 1.89 to 1.95 GB
dramatically increased CPU across all of our servers.
I was thinking of splitting the 1.95 GB index into 2 separate indexes and
using a MultiSearcher on those parts.
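A minimal sketch of that split, assuming the 2.x API of the time; the index
paths and query are placeholders:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.*;

    public class SplitSearch {
        public static void main(String[] args) throws Exception {
            Searchable[] parts = new Searchable[] {
                    new IndexSearcher("/indexes/part1"),
                    new IndexSearcher("/indexes/part2")
            };
            // MultiSearcher queries each index in turn and merges the hits;
            // ParallelMultiSearcher does the same with one thread per index.
            Searcher searcher = new MultiSearcher(parts);
            Query q = new QueryParser("body", new StandardAnalyzer())
                    .parse("some query");
            Hits hits = searcher.search(q);
            System.out.println(hits.length() + " hits");
        }
    }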
On 27 Oct 2005, at 10:21, Chandramohan wrote:
In general, will index size be equal to the size of
the document? Also, does Lucene employ any index
compression schemes? I am a relatively new user of
Lucene and I just love it!
It depends on how you create Fields. The general rule of thumb I [...]
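For example, with the Field flags from the 1.9/2.x-era API (field names and
text here are placeholders): an indexed-but-unstored field contributes only
terms to the inverted index, typically a fraction of the original text size,
while a stored field additionally keeps the full text verbatim in the index:

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class FieldSizes {
        public static Document makeDoc(String title, String body) {
            Document doc = new Document();
            // Indexed only: searchable, but the text itself is not kept,
            // so it adds little to the index size.
            doc.add(new Field("body", body,
                    Field.Store.NO, Field.Index.TOKENIZED));
            // Stored and indexed: the original text is kept verbatim, so
            // the index grows by roughly the size of the stored data.
            doc.add(new Field("title", title,
                    Field.Store.YES, Field.Index.TOKENIZED));
            return doc;
        }
    }

As for the compression question: the inverted index already uses Lucene's
own compact encoding, and in that era stored fields could, if I remember
right, be compressed with Field.Store.COMPRESS.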