Hi All,
Is it now possible to release the memory after every search in Lucene, for 50 GB of records?
Hi testn,
I now search 3 folders with a total size of 1.5 GB, and it still consumes a lot of memory for every search. I close all the IndexReaders once I finish the search, and I optimized the files using Luke. When I set the IndexSearcher object as an application-level object, it's not possible for me to see the current data.
As I mentioned, IndexReader is the one that holds the memory. You should
explicitly close the underlying IndexReader to make sure that the reader
releases the memory.
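A minimal sketch of that close-after-search pattern, using a stand-in reader class rather than Lucene's real IndexReader (all class and method names below are illustrative, not Lucene API):

```java
// Illustrates: always close the reader in a finally block, so its memory
// is released even if the search throws. DemoReader is a stand-in for
// Lucene's IndexReader, not the real class.
class DemoReader {
    private boolean open = true;
    boolean isOpen() { return open; }
    void close() { open = false; }   // the real IndexReader.close() frees term data
    int search(String query) {
        if (!open) throw new IllegalStateException("reader closed");
        return query.length();       // dummy "hit count" for the sketch
    }
}

public class CloseAfterSearch {
    static int searchOnce(DemoReader reader, String query) {
        try {
            return reader.search(query);
        } finally {
            reader.close();          // guarantees a release per search
        }
    }
}
```

The point is only the try/finally shape: whatever the search does, the underlying reader is explicitly closed afterwards.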
Hi testn,
Every index folder is 1.5 GB in size. Even though I open and close the IndexSearcher, it won't release the memory for all the searches.
When I set the IndexSearcher object as an application-scope object, it's not possible for me to see the current data.
Hi testn,
Could you explain this point in detail? "You can simply create a wrapper that returns a MultiReader, which you can cache for a while, and close the oldest index once the date rolls." I am not able to follow it.
If you know that there are only 15 days of indexes you need to search on, you just need to open only the latest 15 indexes at a time, right? You can simply create a wrapper that returns a MultiReader, which you can cache for a while, and close the oldest index once the date rolls.
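The rolling window described above can be sketched as follows. Strings stand in for open IndexReader instances, and the directory names (the sep-08-2007 style used elsewhere in this thread) are illustrative; a real wrapper would open a reader per directory, hand the current set to a MultiReader, and close the reader it evicts.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of the rolling window: cache the latest N daily index
// directories, evicting the oldest when a new day arrives.
public class RollingIndexWindow {
    private final int capacity;
    private final Deque<String> days = new ArrayDeque<>();

    public RollingIndexWindow(int capacity) { this.capacity = capacity; }

    // Called when the date rolls to a new daily index directory.
    public void addDay(String dir) {
        days.addLast(dir);
        while (days.size() > capacity) {
            String oldest = days.removeFirst();
            // real code: close the IndexReader that was opened on 'oldest'
        }
    }

    // The directories a MultiReader wrapper would search right now.
    public List<String> current() { return new ArrayList<>(days); }
}
```

With a capacity of 15, each new day pushes one directory in and one out, so at most 15 indexes are ever open at a time.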
Hi testn,
Optimizing the index does give a performance improvement.
I now separate the index store on a daily basis, i.e. for every day it creates a new index store (sep-08-2007, sep-09-2007, and so on), which minimizes the size of each index store. Could you give me an idea of how to open every day
So did you see any improvement in performance?
It finally works. I use Lucene 2.2 in my application. Thanks, testn and Mike.
As Mike mentioned, what is the version of Lucene you are using? Plus can you
also post the stacktrace?
Hi testn,
i wrote the case wrongly actually the error is
java.io.ioexception file not found-segments
It sounds like there may be a Lucene version mismatch? When Luke was used
it was likely based on Lucene 2.2, but it sounds like an older version of
Lucene is now being used to open the index?
Mike
Should the file be "segments_8" and "segments.gen"? Why is it "Segment"? The
case is different.
java.io.IOException: File Not Found - Segments is the error message.
What is the error message? Probably Mike, Erick or Yonik can help you better on this, since I'm not an expert in the index area.
Hi testn,
1. I optimized a large index of size 10 GB using Luke. It optimized all the content into a single CFS file and generated the segments.gen and segments_8 files. When I search for an item, it shows an error that the segments file is not there. Could you help me with this?
1. You can close the searcher once you're done. If you want to reopen the index, you can close and reopen only the 3 updated readers, and keep and reuse the 2 old IndexReaders. That should reduce the time to reopen it.
2. Make sure that you optimize it every once in a while.
3. You might consider s
The problems in my application are as follows:
1. I am not able to see the updated records in my index store, because I instantiate the IndexReader and IndexSearcher classes only once, at the first search. Further searches use the same IndexReaders (5 directories) and IndexSearcher with di
I don't close the IndexReader after the first search. When I instantiate the IndexSearcher object, will it display the updated records in those directories?
: I set IndexSearcher as the application Object after the first search.
...
: how can i reconstruct the new IndexSearcher for every hour to see the
: updated records .
I'm confused ... my understanding based on the comments you made below
(in an earlier message) was that you already *wer
I set IndexSearcher as an application object after the first search.
Here is my code:

if (searcherOne.isOpen() == true) {
    Directory compressDir2 =
        FSDirectory.getDirectory(compressionSourceDi
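The hourly reconstruction being discussed could be sketched like this. A plain Object stands in for the real IndexSearcher, and the factory and field names are invented for illustration; the clock is passed in explicitly so the refresh logic is easy to test.

```java
import java.util.function.Supplier;

// Sketch: keep one shared searcher and rebuild it only when it is older
// than a refresh interval, instead of on every request.
public class RefreshingSearcher {
    private final Supplier<Object> factory;   // would open a fresh IndexSearcher
    private final long maxAgeMillis;
    private Object searcher;
    private long openedAt;

    public RefreshingSearcher(Supplier<Object> factory, long maxAgeMillis, long now) {
        this.factory = factory;
        this.maxAgeMillis = maxAgeMillis;
        this.searcher = factory.get();
        this.openedAt = now;
    }

    // Returns the cached searcher, rebuilding it if it is stale.
    public Object get(long now) {
        if (now - openedAt >= maxAgeMillis) {
            // real code: close the old searcher's underlying IndexReader here
            searcher = factory.get();
            openedAt = now;
        }
        return searcher;
    }
}
```

With maxAgeMillis of one hour, searches within the hour reuse the cached searcher, and the first search after the hour rolls over sees the updated records.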
I use StandardAnalyzer. The records range from 5 crore to 6 crore (50-60 million) daily, and I update my index every second. I instantiate the IndexSearcher object once for all searches. Can I see the updated records in the index store by re-instantiating the IndexSearcher object every hour? But the problem wh
A couple of things to make sure of:
1. When you open the IndexWriter, what analyzer do you use? StandardAnalyzer?
2. How many records are there?
3. Could you also check the number of terms in your indices? If there are too many terms, you could consider chopping things into smaller pieces; for example... store a
Hi testn,
Here are my index details:
Indexed fields: 5 fields
Stored fields: 10 fields
Index code:

contents = new StringBuilder().append(compCallingPartyNumber).append(" ").append(compCalledPartyNumber).appen
Can you provide more info about your index? How many documents, fields and
what is the average document length?
Hi testn,
I index the dateSc in 070904 (2007/09/04) format; I am not using any timestamp here. How can we effectively reopen the IndexSearcher every hour and save memory, given that my index gets updated every minute?
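Since dateSc is already stored at day resolution (070904), each day contributes a single term, so a 15-day range query touches at most 15 terms instead of millions of timestamp terms. A small check of that key format using plain JDK classes (Lucene's DateTools with Resolution.DAY produces a comparable day-granular key; the helper below is just an illustration):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.TimeZone;

// Builds a day-resolution date key in the yyMMdd form used in this thread.
public class DayKey {
    static String dayKey(int year, int month, int day) {
        Calendar c = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        c.clear();
        c.set(year, month - 1, day);   // Calendar months are 0-based
        SimpleDateFormat f = new SimpleDateFormat("yyMMdd");
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f.format(c.getTime());
    }
}
```

For example, 2007-09-04 maps to the single term "070904", matching the format described above.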
Check out Wiki for more information at
http://wiki.apache.org/jakarta-lucene/LargeScaleDateRangeProcessing
Sebastin wrote:
>
> Hi All,
> I search 3 Lucene index stores of sizes 6 GB, 10 GB, and 10 GB
> using the MultiReader class.
>
> Here is the code snippet:
>
I think you store dateSc with full precision, i.e. with time. You should consider indexing just the date part, or only to the resolution you really need. That should reduce the memory used when constructing the date RangeQuery, and it will improve search performance as well.