Has anyone done any benchmarking of Lucene running with the index
stored on a SSD?
Given the performance characteristics quoted for, say, the SanDisk
devices (e.g.
http://www.sandisk.com/OEM/ProductCatalog(1321)-SanDisk_SSD_SATA_5000_25.aspx:
7,000 IO/sec for 512-byte requests, 67 MB/sec sustained read
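I haven't measured an SSD myself, but a random-read micro-benchmark along these lines would let you compare a device's IO/sec against those quoted figures. The file size, block size, and read count below are arbitrary placeholders; a real run needs a file much larger than RAM so the OS cache doesn't serve the reads:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Random;

public class RandomReadBench {
    // Measures random block reads per second over a scratch file.
    public static double iops(File f, int blockSize, int reads) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(f, "r");
        FileChannel ch = raf.getChannel();
        ByteBuffer buf = ByteBuffer.allocate(blockSize);
        Random rnd = new Random(42);
        long blocks = ch.size() / blockSize;
        long start = System.nanoTime();
        for (int i = 0; i < reads; i++) {
            buf.clear();
            long pos = Math.floorMod(rnd.nextLong(), blocks) * blockSize;
            ch.read(buf, pos); // positional read, no seek state
        }
        long elapsed = System.nanoTime() - start;
        ch.close();
        raf.close();
        return reads / (elapsed / 1e9);
    }

    public static void main(String[] args) throws Exception {
        // 4 MB scratch file -- far too small to defeat the OS cache,
        // so this measures cached reads; use a multi-GB file for real numbers.
        File f = File.createTempFile("bench", ".dat");
        f.deleteOnExit();
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        raf.setLength(4 * 1024 * 1024);
        raf.close();
        System.out.println("IO/sec: " + (long) iops(f, 512, 10000));
    }
}
```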
- Using Spring Module 0.8a
- Using RAM directory
- Having about 100,000 documents
- Index all documents in one thread
- Perform the optimize only at the end of the indexing process
- Using Lucene 2.2
Dmitry-17 wrote:
What are the conditions you are following when running Lucene -
configuration, parameters? Can you describe more?
thanks,
dt,
www.ejinz.com
Search Engine News
- Original Message -
From: "testn" <[EMAIL PROTECTED]>
To:
Sent: Friday, July 27, 2007 7:50 PM
Subject: NPE in MultiReader
Every once in a while I get the following exception with Lucene 2.2. Do you
have any idea?
Thanks,
java.lang.NullPointerException
    at org.apache.lucene.index.MultiReader.getFieldNames(MultiReader.java:264)
    at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:180)
I guess this also ties in with 'getPositionIncrementGap', which is relevant
to fields with multiple occurrences.
Peter
On 7/27/07, Peter Keegan <[EMAIL PROTECTED]> wrote:
I have a question about the way fields are analyzed and inverted by the
index writer. Currently, if a field has multiple occurrences in a document,
each occurrence is analyzed separately (see DocumentsWriter.processField).
Is it safe to assume that this behavior won't change in the future? The
reas
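The way I read it (a plain-Java sketch of my understanding, not the actual DocumentsWriter code): each occurrence is analyzed separately, and positions simply continue from the previous occurrence, plus whatever the analyzer's getPositionIncrementGap returns between instances:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FieldPositions {
    // Assigns positions to tokens from several values of one field,
    // inserting `gap` extra increments between consecutive values --
    // a sketch of what getPositionIncrementGap contributes.
    public static List<Integer> positions(List<String[]> values, int gap) {
        List<Integer> out = new ArrayList<Integer>();
        int pos = -1;
        boolean first = true;
        for (String[] tokens : values) {
            if (!first) pos += gap;   // gap between field instances
            first = false;
            for (String t : tokens) {
                pos += 1;             // default position increment of 1
                out.add(pos);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> vals = Arrays.asList(
            new String[] {"termA", "termB"},
            new String[] {"termC"});
        // With a gap of 100, termC lands far from termB:
        System.out.println(positions(vals, 100)); // [0, 1, 102]
    }
}
```

With a gap of 0 the two instances are indistinguishable from one long value, which is why the gap matters for phrase and span queries.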
Hi guys,
I would like to know if there is some limit on the size of the fields of a
document.
I have the following problem:
when a term appears after a certain number of characters (approximately
87,300) in a field, the search does not find the occurrence.
If I divide my field into pages, the terms are found.
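That number smells like IndexWriter's maxFieldLength limit - if I remember right, the default is 10,000 terms, and everything past that is silently dropped at index time, which at typical word lengths lands right around the character count you're seeing. Try raising it with writer.setMaxFieldLength(...) (check the 2.2 javadoc for the exact method). A plain-Java sketch of the truncation behavior, not the Lucene code itself:

```java
import java.util.Arrays;
import java.util.List;

public class MaxFieldLengthDemo {
    // Keeps only the first `maxFieldLength` terms of a field,
    // mimicking IndexWriter's default truncation at index time.
    public static List<String> indexedTerms(String text, int maxFieldLength) {
        String[] tokens = text.split("\\s+");
        int keep = Math.min(tokens.length, maxFieldLength);
        return Arrays.asList(tokens).subList(0, keep);
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10001; i++) sb.append("w").append(i).append(' ');
        sb.append("needle");
        List<String> indexed = indexedTerms(sb.toString(), 10000);
        // "needle" is past the 10,000-term cutoff, so a search can't find it:
        System.out.println(indexed.contains("needle")); // false
    }
}
```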
Actually no,
because I'd like to retrieve terms that were computed on the same
instance of Field. Taking your example to illustrate better: I have 2
documents; in documentA I structured one field, Field("fieldA", "termA
termB", customAnalyzer). In documentB I structured 2 fields, Field("fieldA",
On 27 Jul 2007, at 13.43, miztaken wrote:
Can you use IndexWriter#deleteDocument instead?
No, I can't use this method.
I don't know the docid and I don't want to search for it - that would
only add extra
time.
I am deleting the document on the basis of a unique key field.
You can do that with IndexWriter#
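For what it's worth - and this is from memory, so check the 2.2 javadoc - IndexWriter has deleteDocuments(Term), so something like writer.deleteDocuments(new Term("uid", key)) deletes by your unique key field without any docid lookup. A plain-Java sketch of the delete-by-term semantics (not Lucene code; the field and value names are made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DeleteByTerm {
    // A toy "index": each document is a map of field name -> value.
    private final List<Map<String, String>> docs =
        new ArrayList<Map<String, String>>();

    public void add(Map<String, String> doc) { docs.add(doc); }

    // Removes every document whose `field` equals `value` --
    // the semantics of deleting by a unique-key term.
    public int deleteByTerm(String field, String value) {
        int before = docs.size();
        docs.removeIf(d -> value.equals(d.get(field)));
        return before - docs.size();
    }

    public int size() { return docs.size(); }

    public static void main(String[] args) {
        DeleteByTerm idx = new DeleteByTerm();
        Map<String, String> a = new HashMap<>();
        a.put("uid", "1"); a.put("body", "foo");
        Map<String, String> b = new HashMap<>();
        b.put("uid", "2"); b.put("body", "bar");
        idx.add(a); idx.add(b);
        System.out.println(idx.deleteByTerm("uid", "1")); // 1
        System.out.println(idx.size());                   // 1
    }
}
```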
>Can you use IndexWriter#deleteDocument instead?
No, I can't use this method.
I don't know the docid and I don't want to search for it - that would only
add extra time.
I am deleting the document on the basis of a unique key field.
>Can you please supply an isolated and working test case that
>demonstrate yo
Hello,
>
> Company AB", ...). With this I'd like to search for documents that have
> daniel and president in the same field, because in the same
> text, daniel and president can exist
> in different fields. Is this possible?
Not totally sure whether I understand your problem, because it does not s
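One approach that might work (a sketch, assuming you can use a proximity query and control the analyzer): return a large getPositionIncrementGap between instances of the field, then require the two terms within a slop smaller than the gap, so terms from different instances can never be close enough to match. In plain Java, the position arithmetic looks like:

```java
public class SameInstanceMatch {
    // True if two term positions could satisfy a proximity query with
    // the given slop (a simplified distance check, not Lucene's exact
    // slop computation). With a positionIncrementGap larger than the
    // slop, terms from different field instances are too far apart.
    public static boolean withinSlop(int posA, int posB, int slop) {
        return Math.abs(posA - posB) - 1 <= slop;
    }

    public static void main(String[] args) {
        int gap = 1000;
        // "daniel" at position 3, "president" at 7: same instance.
        System.out.println(withinSlop(3, 7, 10));        // true
        // "president" in the next instance: position 7 + gap.
        System.out.println(withinSlop(3, 7 + gap, 10));  // false
    }
}
```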
On 27 Jul 2007, at 10.50, miztaken wrote:
My application simply shuts down.
After that, when I try to open the same index using IndexReader and
fetch the document, it says "trying to access deleted document". After
getting
that error, I opened the IndexWriter, optimized it, and then closed it.