Hi,
Maybe you could consider using Compass (http://www.opensymphony.com/compass/),
which might help in your situation. They claim that operations such as
frequent index updates are handled very efficiently, thanks to a caching
layer that is not a native part of the Lucene library.
Regards,
L
On Wednesday 13 December 2006 14:10, abdul aleem wrote:
> a) Indexing a large file (more than 4 MB)
> Do I need to read the entire file as a string using
> java.io and create a Document object?
You can also use a reader:
http://lucene.apache.org/java/2_0_0/api/org/apache/lucene/document/Fiel
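A minimal sketch of the Reader-based approach, assuming Lucene 2.0 and
hypothetical paths (/var/log/app.log for the file, /tmp/index for the
index); note that a Reader-valued Field is tokenized and indexed but
never stored:

import java.io.FileReader;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class ReaderFieldExample {
    public static void main(String[] args) throws Exception {
        IndexWriter writer =
            new IndexWriter("/tmp/index", new StandardAnalyzer(), true);
        Document doc = new Document();
        // The Reader-valued constructor streams the content instead of
        // requiring the whole file as a single String in memory.
        doc.add(new Field("contents", new FileReader("/var/log/app.log")));
        writer.addDocument(doc);
        writer.close();
    }
}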
Many thanks Erick,
Your points are valid. I was thinking of the entire log file as a single
Lucene document; I was wrong, and chopping the log file into smaller
pieces may be the way to go.
My wording was bad, but yes, you got that right: the timestamp must be
added as a "Field", that is what I meant.
I really appreciate your detailed reply.
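A minimal sketch of the "one Lucene document per log entry" approach
discussed above, assuming Lucene 2.0, hypothetical paths, and a
hypothetical log format with one entry per line that starts with a
sortable timestamp token:

import java.io.BufferedReader;
import java.io.FileReader;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class LogEntryIndexer {
    public static void main(String[] args) throws Exception {
        IndexWriter writer =
            new IndexWriter("/tmp/log-index", new StandardAnalyzer(), true);
        BufferedReader in =
            new BufferedReader(new FileReader("/var/log/app.log"));
        String line;
        while ((line = in.readLine()) != null) {
            int space = line.indexOf(' ');
            if (space < 0) {
                continue; // skip lines that do not match the assumed format
            }
            Document doc = new Document();
            // Timestamp as an untokenized field so it can be sorted on
            // and used in range queries.
            doc.add(new Field("timestamp", line.substring(0, space),
                              Field.Store.YES, Field.Index.UN_TOKENIZED));
            // The rest of the line is tokenized for full-text search.
            doc.add(new Field("message", line.substring(space + 1),
                              Field.Store.YES, Field.Index.TOKENIZED));
            writer.addDocument(doc);
        }
        in.close();
        writer.close();
    }
}

Each document then stays small, and hits can be returned per log entry
rather than per multi-megabyte file.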
Let me take a crack at it. See below...
On 12/13/06, abdul aleem <[EMAIL PROTECTED]> wrote:
Hello All,
Apologies if this is a naive question.
a) Indexing a large file (more than 4 MB)
Do I need to read the entire file as a string using
java.io and create a Document object?
Essentially yes.
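A minimal sketch of the "read the whole file into a String" approach,
assuming Lucene 2.0 and hypothetical paths; for a file of a few MB this
works, though the Reader-based field shown earlier avoids holding the
whole content in memory:

import java.io.BufferedReader;
import java.io.FileReader;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class WholeFileIndexer {
    public static void main(String[] args) throws Exception {
        StringBuffer contents = new StringBuffer();
        BufferedReader in =
            new BufferedReader(new FileReader("/var/log/app.log"));
        String line;
        while ((line = in.readLine()) != null) {
            contents.append(line).append('\n');
        }
        in.close();

        IndexWriter writer =
            new IndexWriter("/tmp/index", new StandardAnalyzer(), true);
        Document doc = new Document();
        // Store and tokenize the entire file content as a single field.
        doc.add(new Field("contents", contents.toString(),
                          Field.Store.YES, Field.Index.TOKENIZED));
        writer.addDocument(doc);
        writer.close();
    }
}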
Hello All,
Apologies if this is a naive question.
a) Indexing a large file (more than 4 MB)
Do I need to read the entire file as a string using
java.io and create a Document object?
The file contains a timestamp; if I need to index on
the timestamp, is parsing the entire file manually
(tokeni