Re: Re: Re: Lucene search problem

2008-12-23 Thread tom
AUTOMATIC REPLY LUX is closed until 5th January 2009 - To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org For additional commands, e-mail: java-user-h...@lucene.apache.org

Re: Lucene search problem

2008-12-23 Thread amar . sannaik
Hi Erick, I agree that Lucene does not index the object. In the example I quoted below, fields are indexed as chain.chainName. I am able to retrieve Recipe objects using FullTextQuery with "chain.chainName:something" ... The question is that in some cases chain itself is null. How can I achieve the require

Re: Optimize and Out Of Memory Errors

2008-12-23 Thread Mark Miller
Mark Miller wrote: Lebiram wrote: Also, what are norms? Norms are a byte value per field stored in the index that is factored into the score. It's used for length normalization (shorter documents = more important) and index-time boosting. If you want either of those, you need norms. When norms

Re: Optimize and Out Of Memory Errors

2008-12-23 Thread Mark Miller
Lebiram wrote: Also, what are norms? Norms are a byte value per field stored in the index that is factored into the score. It's used for length normalization (shorter documents = more important) and index-time boosting. If you want either of those, you need norms. When norms are loaded up into a

Re: Optimize and Out Of Memory Errors

2008-12-23 Thread mark harwood
>>how do I turn off norms and where is it set? doc.add(new Field("field2", "sender" + i, Field.Store.NO, Field.Index.ANALYZED_NO_NORMS)); - Original Message From: Lebiram To: java-user@lucene.apache.org Sent: Tuesday, 23 December, 2008 17:03:07 Subject: Re: Opt
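The one-liner above can be hard to read out of context. Below is a minimal sketch of indexing with norms disabled, assuming the Lucene 2.x API discussed in this thread; the directory, analyzer choice, and field values are illustrative, not taken from the poster's setup.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.RAMDirectory;

public class NoNormsExample {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(),
                IndexWriter.MaxFieldLength.UNLIMITED);
        Document doc = new Document();
        // ANALYZED_NO_NORMS tokenizes the text but skips writing the
        // per-document norm byte, trading length normalization and
        // index-time boosts for lower memory use at search time.
        doc.add(new Field("field2", "sender text",
                Field.Store.NO, Field.Index.ANALYZED_NO_NORMS));
        writer.addDocument(doc);
        writer.close();
    }
}
```

Note that norms are an all-or-nothing property per field name: if any document in the index has norms for a field, they are kept for all documents.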

Re: Optimize and Out Of Memory Errors

2008-12-23 Thread Lebiram
Hi All, Thanks for the replies. I've just managed to reproduce the error on my test machine. What we did was generate about 100,000,000 documents with about 7 fields each, with terms from 1 to 10. After indexing about 20GB, we did an optimize and it was able to make 1 big index of th

Re: lucene explanation

2008-12-23 Thread Chris Salem
That worked perfectly. Thanks a lot! Sincerely, Chris Salem - Original Message - To: java-user@lucene.apache.org From: Erick Erickson Sent: 12/22/2008 5:00:51 PM Subject: Re: lucene explanation Warning! I'm really reaching on this, but it seems you could use TermDocs/TermEnum to
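For readers landing on this thread, the TermDocs/TermEnum idea Erick mentions could look roughly like the sketch below, assuming the Lucene 2.x API of the time; the field name "contents" and the directory variable are stand-ins, not from the original exchange.

```java
// Walk all terms of one field with TermEnum, and for each term walk the
// documents containing it with TermDocs (Lucene 2.x API assumed).
IndexReader reader = IndexReader.open(dir, true); // readOnly reader
TermEnum terms = reader.terms(new Term("contents", ""));
try {
    do {
        Term t = terms.term();
        // TermEnum is positioned across all fields; stop when we leave ours.
        if (t == null || !"contents".equals(t.field())) break;
        TermDocs docs = reader.termDocs(t);
        while (docs.next()) {
            System.out.println(t.text() + " -> doc " + docs.doc());
        }
        docs.close();
    } while (terms.next());
} finally {
    terms.close();
    reader.close();
}
```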

Re: Optimize and Out Of Memory Errors

2008-12-23 Thread mark harwood
I've had reports of OOM exceptions during optimize on a couple of large deployments recently (based on Lucene 2.4.0). I've given the usual advice of turning off norms, providing plenty of RAM, and also suggested setting IndexWriter.setTermIndexInterval(). I don't have access to these deployment en

Re: QueryWrapperFilter

2008-12-23 Thread Erick Erickson
My first bit of advice would be to step back and take a deep breath and "take off your DB hat". Lucene is a *text* search application, not an RDBMS. The usual solution is to flatten your data representation when you index so you can use simpler searches. Others have posted that it's hard to use Lu

Re: Lucene search problem

2008-12-23 Thread Erick Erickson
How do you intend to index these? Lucene will not index objects for you. You have to break the object down into a series of fields. At that point you can substitute whatever you want. Best Erick On Tue, Dec 23, 2008 at 3:36 AM, wrote: > Hi Aaron Schon/EricK, > > That really make sense to me but
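Erick's point, "break the object down into a series of fields", might be sketched as below. This is an illustrative fragment only, assuming the Recipe/Chain classes shown later in the thread and a Lucene 2.x IndexWriter named writer; the field names and null guard are the author of this sketch's assumptions, not Erick's code.

```java
// Lucene indexes Documents made of Fields, not arbitrary objects,
// so a domain object must be flattened by hand at index time.
Document doc = new Document();
doc.add(new Field("id", String.valueOf(recipe.getId()),
        Field.Store.YES, Field.Index.NOT_ANALYZED));
// Guard a nullable association before indexing its fields; documents
// that lack the field simply won't match queries on it.
if (recipe.getChain() != null) {
    doc.add(new Field("chain.chainName", recipe.getChain().getChainName(),
            Field.Store.NO, Field.Index.ANALYZED));
}
writer.addDocument(doc);
```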

Re: Combining results of multiple indexes

2008-12-23 Thread Erick Erickson
You're kind of in uncharted territory. I've been watching this list for quite a while and you're the first person I remember who's said "indexing speed is more important than querying speed". Mostly I'll leave responses to folks who understand the guts of indexing, except to say that for point (e

Re: Optimize and Out Of Memory Errors

2008-12-23 Thread Michael McCandless
How many indexed fields do you have, overall, in the index? If you have a very large number of fields that are "sparse" (meaning any given document would only have a small subset of the fields), then norms could explain what you are seeing. Norms are not stored sparsely, so when segments g
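Because norms are stored non-sparsely, the earlier figures in this thread (about 100,000,000 documents, about 7 fields) make the memory cost easy to estimate: one norm byte per document per indexed field, whether or not a given document has the field. A small back-of-the-envelope helper:

```java
public class NormsMemory {
    // Norms cost 1 byte per document per field that has norms enabled,
    // regardless of whether a given document actually contains the field.
    static long normsBytes(long numDocs, int numFieldsWithNorms) {
        return numDocs * numFieldsWithNorms;
    }

    public static void main(String[] args) {
        // Figures quoted in this thread: ~100M docs, ~7 fields.
        long bytes = normsBytes(100_000_000L, 7);
        System.out.println(bytes); // 700000000 bytes, roughly 668 MB of heap
    }
}
```

Seven hundred megabytes of norms alone would plausibly explain OOM during optimize on a modestly sized heap, which is why turning norms off (or reducing the number of fields carrying them) is the standard advice here.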

Re: Multiple IndexReaders from the same Index Directory - issues with Locks / performance

2008-12-23 Thread Michael McCandless
Locking is completely unused from IndexReader unless you do deletes or change norms, so sharing a remote mounted index is just fine (except for performance concerns). If you're using 2.4, you should open your readers with readOnly=true. Mike Tomer Gabel wrote: Ultimately it depends on

Re: Multiple IndexReaders from the same Index Directory - issues with Locks / performance

2008-12-23 Thread Tomer Gabel
Ultimately it depends on your specific usage patterns. Generally speaking, if you have IndexReaders (and do not use their delete functionality) you don't need locking at all; you can use a no-op lock factory, in which case you'll pretty much only be constrained by your storage subsystem. Kay Kay
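Combining the two suggestions above (Mike's readOnly=true reader and Tomer's no-op lock factory), a search-only setup could look like this sketch; the Lucene 2.4 API is assumed, and the index path is a placeholder.

```java
// Search-only access: no write lock is ever taken, so a no-op lock
// factory is safe, and a read-only reader skips locking entirely.
FSDirectory dir = FSDirectory.getDirectory(new File("/path/to/index"),
        NoLockFactory.getNoLockFactory());
IndexReader reader = IndexReader.open(dir, true); // readOnly = true
IndexSearcher searcher = new IndexSearcher(reader);
```

This only holds as long as no process deletes documents or changes norms through these readers; a writer elsewhere still needs real locking.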

QueryWrapperFilter

2008-12-23 Thread csantos
Hello, I need to filter a FullTextSearch against a query. That means I search a term in an indexed entity "A"; A contains an embedded index "B"; entity B has an m:1 bidirectional relationship with entity "C", and the foreign key in "B" is "c_id". My filter condition would be like "filter the fulltext

Re: Lucene search problem

2008-12-23 Thread amar . sannaik
Hi Aaron Schon/Erick, That really makes sense to me, but it would be easy if it were a String object. See the object structure I have below; hopefully that gives you some idea: class Recipe { @DocumentId Integer id; @IndexedEmbedded Chain chain; //getter and setter } class Chain { @DocumentId I