Bug on doc parameter in CustomScoreQuery.customScore()

2009-12-28 Thread Paul chez Jamespot
Hello, I'm trying to use the doc parameter to build a customScore, but the 'doc' value seems to be different from the global 'docId' when the index is not optimized. Basically, I create a DateScoreQuery, passing the IndexReader and the field containing the timestamp (as a long), and I use t
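A likely explanation (an assumption about this report, consistent with the per-segment search introduced in Lucene 2.9): custom scoring is invoked per segment, so the `doc` it receives is relative to the current segment's reader, while an unoptimized index has several segments. The top-level id is the segment-relative id plus that segment's doc base. A minimal plain-Java sketch of that arithmetic (illustrative names, not the Lucene API):

```java
// Illustrative sketch (NOT Lucene API): each segment numbers its documents
// from 0; the top-level doc id is the segment's docBase plus the
// segment-relative id passed to per-segment scoring code.
public class DocIdMapping {
    /** Top-level doc id = docBase of the segment + segment-relative id. */
    static int toTopLevel(int docBase, int segmentDoc) {
        return docBase + segmentDoc;
    }

    /** Given each segment's document count, compute each segment's docBase. */
    static int[] docBases(int[] segmentSizes) {
        int[] bases = new int[segmentSizes.length];
        int base = 0;
        for (int i = 0; i < segmentSizes.length; i++) {
            bases[i] = base;
            base += segmentSizes[i];
        }
        return bases;
    }
}
```

On an optimized (single-segment) index docBase is 0 for every document, which is why the discrepancy only shows up when the index is not optimized.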

Re: relation among Terms included in Query

2009-12-28 Thread Smith G
Hi, thank you very much, that should help. Meanwhile I'll run cases and write again if I get stuck once more in the same context. Thanks. 2009/12/28 AHMET ARSLAN : >> Hello All, >> I have observed >> extractTerms() in the class >> org.apache.lucene.search.Query which returns set

Re: Multi-value (complex) field indexing

2009-12-28 Thread Erick Erickson
I'm not following entirely here, but multi-valued fields are supported. Something like (bad pseudo-code here): doc = new Document; doc.add(new Field("rows", … wrote: > *Problem description* > - I have a complex multi-value field. So, each value consists of several rows. > - Each row consi
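Multi-valued fields alone do not preserve which cells belong to the same row. A common workaround (an assumption here, not necessarily what this truncated reply goes on to show) is to index one combined token per row, joining the cell values; a search for the joined value then only hits documents where both cells occur in the same row. A self-contained sketch of the token construction (illustrative names, separator assumed absent from cell values):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "combined token per row" technique: join each row's cells
// into one token, so matching the token implies both cells co-occur in
// the same row. Each token would be added as one value of a multi-valued
// field (the Lucene Field calls themselves are omitted).
public class RowTokens {
    static String rowToken(String cellA, String cellB) {
        return cellA + "|" + cellB;   // assumes '|' never appears in cell values
    }

    static List<String> tokensFor(String[][] rows) {
        List<String> tokens = new ArrayList<String>();
        for (String[] row : rows) {
            tokens.add(rowToken(row[0], row[1]));
        }
        return tokens;
    }
}
```

With rows {AAA,BBB} and {AAA,CCC}, a query for the token "AAA|BBB" matches, while "AAA" and "BBB" drawn from different rows cannot produce a false hit.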

RE: Compressing field content with Lucene 3.0

2009-12-28 Thread Uwe Schindler
If it is a 2.4 index, you can read it without any problems. It is only no longer possible to add fields with Field.Store.COMPRESS. Nothing more changed. If you want to add a field with some compression, you have to compress it yourself, e.g. to a byte[]. You can then add this byte[] as a binary stored f
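The "compress yourself to a byte[]" step can be done with the JDK's own java.util.zip, which is also what Field.Store.COMPRESS used internally. A self-contained round-trip sketch (the Lucene binary-field call itself is omitted; class and method names are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Compress a field value to a byte[] (suitable for storing as a binary
// field) and decompress it again at retrieval time.
public class FieldCompression {
    static byte[] compress(String value) {
        byte[] input = value.getBytes(StandardCharsets.UTF_8);
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    static String decompress(byte[] data) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }
}
```

The compressed byte[] would then be added as a binary stored field; note that such hand-compressed fields must also be hand-decompressed when reading the document back.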

Compressing field content with Lucene 3.0

2009-12-28 Thread Ivan Vasilev
Hi Guys, Could you give me advice on how to deal in Lucene 3.0 with 2.4 indexes that contain compressed data? Our case is the following - we have code like this: Field.Store fieldStored = storedFieldsSet.contains(fieldName) ? (fieldValue.length() >= COMPRESS_THRESHOLD ? Field.Store.COMPRESS : Fi

Multi-value (complex) field indexing

2009-12-28 Thread Leonid M.
*Problem description* - I have a complex multi-value field. So, each value consists of several rows. - Each row consists of several cells/items. I want to be able to match those issues which have a *row* with cellA="AAA" and cellB="BBB". Having a search by all the table (meaning - a

Re: relation among Terms included in Query

2009-12-28 Thread AHMET ARSLAN
> Hello All, > I have observed extractTerms() in the class > org.apache.lucene.search.Query which returns set of terms extracted > from user input query. Is there any chance of getting the > connecting-operator between all those terms. for example.. > Term1 OR Term2 AND Term3 ..
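For context on why extractTerms() cannot answer this: it deliberately returns a flat Set of terms, so any operator structure is gone. The operators live on the query tree itself; in Lucene, a BooleanQuery's clauses each carry an occur flag (MUST / SHOULD / MUST_NOT, roughly AND / OR / NOT), so one must walk the clauses to recover them. A tiny self-contained model illustrating that walk (NOT the Lucene API; all names here are illustrative):

```java
import java.util.List;

// Tiny illustrative model (NOT the Lucene API): boolean query clauses carry
// an occur flag, and recovering the connecting operators means walking the
// clauses of the query tree, not calling extractTerms().
public class QueryWalk {
    enum Occur { MUST, SHOULD, MUST_NOT }

    static class Clause {
        final String term;
        final Occur occur;
        Clause(String term, Occur occur) { this.term = term; this.occur = occur; }
    }

    /** Render clauses in query-parser style, e.g. "+term1 term2 -term3". */
    static String render(List<Clause> clauses) {
        StringBuilder sb = new StringBuilder();
        for (Clause c : clauses) {
            if (sb.length() > 0) sb.append(' ');
            if (c.occur == Occur.MUST) sb.append('+');
            else if (c.occur == Occur.MUST_NOT) sb.append('-');
            sb.append(c.term);
        }
        return sb.toString();
    }
}
```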

relation among Terms included in Query

2009-12-28 Thread gudumba l
Hello All, I have observed extractTerms() in the class org.apache.lucene.search.Query, which returns the set of terms extracted from the user input query. Is there any chance of getting the connecting operator between all those terms? For example: Term1 OR Term2 AND Term3, or Term1 AND Te

RE: Using the new tokenizer API from a jar file

2009-12-28 Thread Uwe Schindler
I opened https://issues.apache.org/jira/browse/LUCENE-2182 about this problem and already have a fix. This is really a bug. The solution is simple: you have to load the impl class using the same classloader as the passed-in interface. The default for Class.forName is the classloader of Attr
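The fix described can be sketched with plain JDK classloading (the helper below is illustrative, not the actual LUCENE-2182 patch): resolve the implementation class through the classloader of the interface that was passed in, instead of relying on Class.forName's one-argument form, which uses the caller's classloader and cannot see classes that live in a child loader such as a plugin's:

```java
// Illustrative sketch of the classloader fix: load the implementation via
// the classloader that defined the interface, so an impl living in the same
// (possibly child) loader as the interface is always found.
public class ImplLoader {
    static Class<?> loadImpl(Class<?> iface, String implName)
            throws ClassNotFoundException {
        // iface.getClassLoader() may be null for bootstrap classes;
        // Class.forName accepts null and uses the bootstrap loader.
        return Class.forName(implName, true, iface.getClassLoader());
    }
}
```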

Re: CANNOT use a * or ? symbol as the first character of a search.

2009-12-28 Thread Anshum
Hi, Don't worry! There always are ways! Is a prefix query what you are trying to run? It would run, but would be highly unoptimized, as Lucene stores terms in a lexically sorted manner in its index. Wildcard query terms are allowed by the parser/searcher though. A possible solution for this would be:
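One common technique for making leading wildcards cheap (stated here on its own merits; it is an assumption, not necessarily what this truncated reply goes on to describe) is to also index each term reversed in a parallel field, so that "*test" becomes the efficient prefix query "tset*" against the reversed field. A self-contained sketch of the rewrite step (illustrative, not a Lucene API):

```java
// Sketch of the reversed-term trick: index each term reversed in a parallel
// field, then rewrite a leading-wildcard pattern "*suffix" into the cheap
// prefix pattern "reversed(suffix)*" against that field.
public class ReversedWildcard {
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    /** Rewrite "*suffix" to "reversed(suffix)*". */
    static String rewriteLeadingWildcard(String pattern) {
        if (!pattern.startsWith("*")) {
            throw new IllegalArgumentException("expected leading *");
        }
        return reverse(pattern.substring(1)) + "*";
    }
}
```

The prefix form can then use the lexically sorted term dictionary directly, which is exactly what a leading wildcard cannot do.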

Re: CANNOT use a * or ? symbol as the first character of a search.

2009-12-28 Thread Shashi Kant
You can enable that by QueryParser.setAllowLeadingWildcard(true). On Mon, Dec 28, 2009 at 2:46 AM, liujb wrote: > > oh, my god, > > Query Parser Syntax > > CANNOT use a * or ? symbol as the first character of a search. > > That means I can't write a search string like '*test'. This will be ca

RE: Using the new tokenizer API from a jar file

2009-12-28 Thread Uwe Schindler
The question on this list was ok, as it shows a minor problem of using the new TokenStream API with Solr. His plugin was loaded correctly, because if Lucene says that it cannot find the *Impl class, it was able to load the interface class before -> the JAR file is "visible" to the JVM. The proble