uch as "DatabasePrimaryKey" or perhaps a field
>> containing an MD5 hash of document content.
>> The DuplicateFilter ensures only one document can exist in results for
>> each
>> unique value for the choice of field.
>>
>> Cheers
>> Mark
>>
>
at search time, while querying the data
markrmiller wrote:
>
> Sebastin wrote:
>> Hi All,
>>
>> Is there any way to avoid duplicate records in Lucene 2.3.1?
>>
> I don't believe that there is a very high performance way to do this.
>
Hi All,
Is there any way to avoid duplicate records in Lucene 2.3.1?
--
View this message in context:
http://www.nabble.com/How-to-avoid-duplicate-records-in-lucene-tp18543588p18543588.html
Sent from the Lucene - Java Users mailing list archive at Nabble.com.
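For reference, a minimal sketch of the DuplicateFilter approach Mark
describes (this assumes the contrib-queries DuplicateFilter, which may
require a newer contrib jar than 2.3.1; the field name follows his example):

    import org.apache.lucene.search.DuplicateFilter; // from contrib/queries
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.Hits;

    // Keep only one hit per unique value of the key field.
    Filter dedup = new DuplicateFilter("DatabasePrimaryKey");
    Hits hits = searcher.search(query, dedup);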
---
Could you describe how you are using Lucene, and provide a full traceback?
>
> Mike
>
> Sebastin wrote:
>
>>
>> Hi All,
>>
>> I am facing this error while indexing text files. Can anyone guide me
>> on how to resolve this issue?
>> --
>> View thi
Hi All,
I am facing this error while indexing text files. Can anyone guide me on
how to resolve this issue?
--
View this message in context:
http://www.nabble.com/java.io.Ioexception-cannot-overwrite-fdt-tp18079321p18079321.html
Sent from the Lucene - Java Users mailing list archive at Nabble.com.
> Opening an IndexReader is not a cheap operation,
> esp. not with so much data. You want to keep your IndexReaders opened for
> a while. Multiple requests/threads can share them.
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
> ----- Original Message -----
>> From: Sebastin <[EMAIL PROTECTED]>
>> To: java-user@lucene.apache.org
>> Sent: Friday, June 20, 2008 2:04:12 AM
>> Subject: creating Array of IndexReaders
>>
>>
>> Hi All,
>>
>>
Hi All,
I need to create IndexReaders dynamically based on user input.
For example, if the user needs to see the records from June 17 to June 20:
Directory indexFsDir1 =
    FSDirectory.getDirectory("C:\\200806\\17\\outgoing1", false);
IndexReader indexIR1 = IndexReader.open(indexFsDir1);
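A sketch of one way to assemble the readers for a range like June 17-20 and
search them as a single index (Lucene 2.3-era API; the path layout follows
the example above):

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.MultiReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    String[] days = { "17", "18", "19", "20" };          // from user input
    IndexReader[] readers = new IndexReader[days.length];
    for (int i = 0; i < days.length; i++) {
        Directory dir = FSDirectory.getDirectory(
            "C:\\200806\\" + days[i] + "\\outgoing1", false);
        readers[i] = IndexReader.open(dir);
    }
    // MultiReader searches all sub-readers as one logical index.
    IndexSearcher searcher = new IndexSearcher(new MultiReader(readers));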
Hi All,
I need to fetch records from approximately 225 GB of index stores in a web
page. The total time to fetch the records and display them to the user is 10
minutes. Is it possible to reduce the time to milliseconds?
sample code snippet:
IndexReader[] readArray =
    { indexIR1, indexIR2, ... };
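One thing that usually helps at this scale: load only the page of documents
actually being displayed, since Hits fetches stored fields lazily (a sketch;
the paging variables are illustrative):

    Hits hits = searcher.search(query);
    int start = page * pageSize;
    int end = Math.min(start + pageSize, hits.length());
    for (int i = start; i < end; i++) {
        Document d = hits.doc(i);   // documents are loaded lazily, on demand
        // render d ...
    }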
Hi All,
Does Lucene support billions of records in a single 14 GB index store per
search? I have 3 index stores of 14 GB each; I need to search these index
stores and retrieve the results. It throws an out-of-memory error while
searching these index stores.
--
View this message in context:
Hi All,
Is there any way to store the following type of string compressed in a
Lucene index?
String str = "II0264.D05|00022745|ABCDE|03/01/2008 00:23:12|00035|
9840836588| 129382152520| 04F4243B600408|04F4243B600408|
|11919898456123|354943011025810L| "CPTBS2I"| "A
Hi All,
I try to store a string variable with Field.Store.COMPRESS. During search,
is there any built-in method to decompress these records, or can we go for
some other algorithm to retrieve them?
--
View this message in context:
http://www.nabble.com/Re%3ARetreive-Compressed-Fie
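For what it's worth, a minimal sketch: in Lucene 2.x, Field.Store.COMPRESS
compresses the stored value at index time and decompresses it transparently
when the document is loaded, so no separate decompression call should be
needed (field and variable names here are illustrative):

    // Index time: store the record compressed.
    doc.add(new Field("record", str, Field.Store.COMPRESS,
        Field.Index.UN_TOKENIZED));

    // Search time: Lucene decompresses the stored value on retrieval.
    Document hit = hits.doc(0);
    String restored = hit.get("record");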
Hi All,
What is the minimum number of records needed to create an index store? When
I try to create an index store with 5 records, it creates a segments file
only.
--
View this message in context:
http://www.nabble.com/Minimum-records-to-create-IndexStore-tp16024349p16024349.html
Sent from the Lucene - Java Users mailing list archive at Nabble.com.
folder in the following format
"/200080301-200080316/26588"
I index and store the records in that folder, so while searching I get the
modulo and search the records only in that folder.
Is this a good way of indexing?
Sebastin wrote:
>
> Hi All,
>I a
Hi All,
I am going to create a Lucene index store of 300 GB per month. I read the
Lucene index performance tips in the wiki. Can anyone suggest the steps to
follow when dealing with big indexes? My index store gets updated every
second, and I typically search approximately 15 days of records.
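As a hedged starting point for a large, constantly updated index (Lucene 2.3
API; the numbers are illustrative and should be tuned against your own
hardware and update rate):

    IndexWriter writer = new IndexWriter(indexDir, new StandardAnalyzer(),
        false);
    writer.setRAMBufferSizeMB(64);     // flush by RAM used, not doc count
    writer.setMergeFactor(10);         // moderate merge width
    writer.setUseCompoundFile(false);  // faster merges, more file handles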
Hi All,
Is there any way to kill the IndexSearcher object after every
search?
--
View this message in context:
http://www.nabble.com/how-to-kill-IndexSearcher-object-after-every-search-tf4897436.html#a14026451
Sent from the Lucene - Java Users mailing list archive at Nabble.com.
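For completeness, IndexSearcher does have a close() method, and it closes
the underlying reader when the searcher itself opened it from a path; as
noted elsewhere in this digest, though, closing after every search is
usually the wrong trade (a sketch, Lucene 2.x API):

    IndexSearcher searcher = new IndexSearcher("/path/to/index");
    try {
        Hits hits = searcher.search(query);
        // render hits ...
    } finally {
        searcher.close();   // releases the reader this searcher opened
    }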
> This would reduce the memory it uses when constructing the DateRangeQuery,
> and it will improve search performance as well.
>
>
>
> Sebastin wrote:
>>
>> Hi All,
>> I search 3 Lucene index stores of size 6 GB, 10 GB, and 10 GB of
>> records using the MultiReader class.
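One common way to cut the memory used by wide date ranges is a filter
instead of a range query, since a range query expands every matching term
into a BooleanQuery clause. A sketch (Lucene 2.x API; the "datesc" field
name follows the YYMMDD dates used elsewhere in these messages):

    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.RangeFilter;

    // Matches 070601..070630 inclusive without a huge BooleanQuery.
    Filter dateFilter = new RangeFilter("datesc", "070601", "070630",
        true, true);
    Hits hits = searcher.search(query, dateFilter);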
possible to see
the updated records.
Could you guide me on how to resolve this memory problem?
testn wrote:
>
> As I mentioned, IndexReader is the one that holds the memory. You should
> explicitly close the underlying IndexReader to make sure that the reader
> releases the memory.
>
&
wrapper that returns a MultiReader, which you can cache for a
> while and close the oldest index once the date rolls.
>
>
> Sebastin wrote:
>>
>> HI testn,
>>
>> It gives a performance improvement when optimizing the index.
>>
>> Now I separate the IndexS
> If it's only 15 days of indexes you need to search on,
> you just need to open only the latest 15 indexes at a time, right? You can
> simply create a wrapper that returns a MultiReader, which you can cache for
> a while, and close the oldest index once the date rolls.
>
>
> Sebastin wrote:
>>
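A rough sketch of the wrapper described above: cache one MultiReader over
the newest 15 day-directories and rebuild it when the date rolls (the class
and variable names are assumptions, Lucene 2.x API):

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.MultiReader;

    class RollingReaderCache {
        private IndexReader cached;
        private String cachedDay;

        // dirs: paths of the latest 15 daily indexes, oldest first.
        synchronized IndexReader get(String today, String[] dirs)
                throws IOException {
            if (cached == null || !today.equals(cachedDay)) {
                IndexReader[] subs = new IndexReader[dirs.length];
                for (int i = 0; i < dirs.length; i++) {
                    subs[i] = IndexReader.open(dirs[i]);
                }
                IndexReader fresh = new MultiReader(subs);
                if (cached != null) {
                    cached.close();   // drops the rolled-off day's reader too
                }
                cached = fresh;
                cachedDay = today;
            }
            return cached;
        }
    }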
Is there any
other better way to improve the search performance, to avoid the memory
problem as well as speed up the search?
testn wrote:
>
> So did you see any improvement in performance?
>
> Sebastin wrote:
>>
>> It finally works. I use Lucene 2.2 in my application. Thanks.
What code is being used to open the index?
>
> Mike
>
> "testn" <[EMAIL PROTECTED]> wrote:
>>
>> Should the file be "segments_8" and "segments.gen"? Why is it "Segment"?
>> The
>> case is different.
>>
>>
>> S
Hi testn,
I wrote the case wrongly; actually the error is
java.io.IOException: File Not Found - segments
testn wrote:
>
> Should the file be "segments_8" and "segments.gen"? Why is it "Segment"?
> The case is different.
>
>
> Seb
java.io.IOException: File Not Found - segments is the error message.
testn wrote:
>
> What is the error message? Probably Mike, Erick or Yonik can help you
> better on this since I'm no one in index area.
>
> Sebastin wrote:
>>
>> HI testn,
>>
while
> 3. You might consider separating indices into separate storage and using
> ParallelReader
>
>
>
> Sebastin wrote:
>>
>> The problems in my application are as follows:
>> 1. I am not able to see the updated records in my index
>> store
The problems in my application are as follows:
1. I am not able to see the updated records in my index
store, because I instantiate the
IndexReader and IndexSearcher classes once, on the first search. Further
searches use the same IndexReaders (5 directories) and IndexSearcher with
di
I don't close the IndexReader after the first search. When I instantiate the
IndexSearcher object, will it display the updated records in those
directories?
Sebastin wrote:
>
> I set IndexSearcher as the application Object after the first search.
>
> here is my code:
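A sketch of the usual fix for stale readers: check isCurrent() before
reusing the cached searcher, and open a fresh reader when the index has
changed. Lucene 2.3 predates IndexReader.reopen(), so a full open is needed
(variable names are illustrative):

    if (!reader.isCurrent()) {        // index changed since reader opened
        IndexReader fresh = IndexReader.open(dir);
        IndexSearcher freshSearcher = new IndexSearcher(fresh);
        reader.close();               // once in-flight searches are done
        reader = fresh;
        searcher = freshSearcher;
    }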
I set the IndexSearcher as an application object after the first search.
here is my code:
if (searcherOne.isOpen()) {
    Directory compressDir2 =
        FSDirectory.getDirectory(compressionSourceDi
> 2. How many records are there?
> 3. Could you also check number of terms in your indices? If there are too
> many terms, you could consider chopping things into smaller pieces, for
> example... store area code and phone number separately if the numbers are
> pretty distributed.
>
>
Query callDetailquery =
    parser.parse(searchQuery);
hits = is.search(callDetailquery);
testn wrote:
>
> http://wiki.apache.org/jakarta-lucene/LargeScaleDateRangeProcessing
>
>
>
> Sebastin wrote:
>>
>> Hi All,
>> I search 3 Lucene index stores of size 6 GB, 10 GB, and 10 GB of
>> records using the MultiReader class.
Hi Erick,
Please help me make this search time-efficient.
Erick Erickson wrote:
>
> This topic has been discussed a number of times, I suggest you
> search the mail archives as that will get you very complete answers
> more quickly. See
> http://www.gossamer-threads.com/lists/lucene/java-u
Hi All,
I search 3 Lucene index stores of size 6 GB, 10 GB, and 10 GB of
records using the MultiReader class.
Here is the code snippet:
Directory indexDir2 =
    FSDirectory.getDirectory(indexSourceDir02, false);
Hi all,
Is there any way to display indexed values?
I.e., when we want to search a field we use:
String test = "9840836588";
Document doc = new Document();
doc.add(new
    Field("test", test, Field.Store.NO, Field.Index.NO_NORMS));
indexWriter.addDocument(doc);
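Even with Field.Store.NO, the indexed terms themselves can be listed
directly from the index. A sketch using TermEnum (Lucene 2.x API; reader
setup omitted):

    TermEnum terms = reader.terms(new Term("test", ""));
    try {
        while (terms.term() != null
                && "test".equals(terms.term().field())) {
            System.out.println(terms.term().text());   // each indexed value
            if (!terms.next()) break;
        }
    } finally {
        terms.close();
    }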
outgoingRoute + " " + incomingRoute);
File indexDir = new File("/home/Mediation/Index");
IndexWriter indexWriter = new IndexWriter(indexDir, new StandardAnalyzer(),
    true);
Document doc = new Document();
doc.add(new Field("contents", contents, Field.Store.NO,
    Field.Index.TOKENIZED));
doc.add(new Field(
It's worth compressing your unstored "contents" field as well as your
> stored "records" field, as the unique terms in the "contents" field will
> effectively be stored.
>
> Also don't forget to convert the terms when you search too, otherwise they
> won't match.
Hi Erick, do you have any idea on this?
jm-27 wrote:
>
> Hi,
>
> I want to make my index as small as possible. I noticed
> field.setOmitNorms(true); I read on the list that the difference is 1 byte
> per field per doc, not huge, but hey... Is the only effect that the score
> is different? I hardly mind abo
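For reference, a minimal sketch of the option being discussed (Lucene 2.x
API; the field name is illustrative). Omitting norms drops the
1-byte-per-field-per-doc norm, at the cost of index-time boosts and length
normalization in scoring:

    Field contents = new Field("contents", text, Field.Store.NO,
        Field.Index.TOKENIZED);
    contents.setOmitNorms(true);   // no norm byte for this field
    doc.add(contents);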
When I use the StandardAnalyzer the storage size increases. How can I
minimize the index store?
Sebastin wrote:
>
>
> String outgoingNumber="9198408365809";
> String incomingNumber="9840861114";
> String datesc="070601";
> String
indexWriter.setUseCompoundFile(true);
indexWriter.addDocument(document);
}
Please help me achieve the minimum size.
Erick Erickson wrote:
>
> Show us the code
wn how you parse your query,
> anything anyone says would be a guess.
>
> But at a guess, you may be having troubles with capitalization
> in your query.
>
> Also, query.toString() will show you what the actual Lucene
> query looks like.
>
> Best
> Erick
>
>
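A sketch of the query.toString() suggestion (Lucene 2.x QueryParser; the
query text is illustrative, reusing values from elsewhere in this digest):

    QueryParser parser = new QueryParser("contents", new StandardAnalyzer());
    Query q = parser.parse("9840836588 AND bch01");  // throws ParseException
    System.out.println(q.toString("contents"));      // the query Lucene runs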
Hi, can anyone give me an idea of how to reduce the index size? Now I am
getting 42% compression in my index store; I want to reduce it up to 70%. I
use StandardAnalyzer to write the documents. When I use SimpleAnalyzer it
reduces up to 58%, but I couldn't search the documents. Please help me
achieve this. Thanks.
Could you briefly tell me how to use two analyzers for the two fields?
Paulo Silveira-3 wrote:
>
> On 5/25/07, karl wettin <[EMAIL PROTECTED]> wrote:
>>
>> PerFieldAnalyzerWrapper
>>
>
> that was fast! thanks!
>
>
>> http://lucene.zones.apache.org:8080/hudson/job/Lucene-Nightly/javadoc/
>> or
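A minimal sketch of PerFieldAnalyzerWrapper: one default analyzer plus
per-field overrides (the field names here are assumptions):

    PerFieldAnalyzerWrapper analyzer =
        new PerFieldAnalyzerWrapper(new StandardAnalyzer());
    analyzer.addAnalyzer("phoneNumber", new KeywordAnalyzer());
    IndexWriter writer = new IndexWriter(indexDir, analyzer, true);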
Hi All,
I index my documents using SimpleAnalyzer(). When I search the indexed
field in the searcher class, it doesn't give me the results. Help me sort
out this issue.
My Code:
test = "9840836598";
test1 = "bch01";
testRecords = (test + " " + test1);
document.add(new Field("testRecords", testRecords, Field.Store
Hi Hossman,
Thanks for your reply. When I index the search fields in my Lucene document,
they occupy 20% of the original size. How can I reduce the index size?
hossman_lucene wrote:
>
>
> : I need to store all the attributes of the document i index as part of
> the
> : inde