Re: Lucene nrt

2015-07-20 Thread Yonik Seeley
Yes, if you do a commit with waitSearcher=true (and it succeeds) then any adds before that point will be visible. -Yonik On Mon, Jul 20, 2015 at 8:25 PM, Bhawna Asnani wrote: > Hi, > I am using solr to update a document and read it back immediately through > search. > > > I do softCommit my cha
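Yonik's point can be sketched with SolrJ: an explicit commit with waitSearcher=true blocks until the reopened searcher is registered, so a query issued afterwards sees the add. This is a minimal sketch, not the poster's code; the URL, core name, and field names are assumptions, and the Builder API is from newer SolrJ (in 5.x you would use `new HttpSolrClient(url)` directly):

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class CommitVisibilityDemo {
    public static void main(String[] args) throws Exception {
        // URL and core name are hypothetical, for illustration only
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "42");
            doc.addField("title_s", "hello");
            client.add(doc);
            // waitFlush=true, waitSearcher=true, softCommit=true:
            // blocks until the new searcher is live
            client.commit(true, true, true);
            // the add above should now be visible to this search
            long hits = client.query(new SolrQuery("id:42"))
                              .getResults().getNumFound();
            System.out.println(hits);
        }
    }
}
```

If the commit returns successfully, the document added before it is guaranteed visible; stale reads usually mean the commit and the query raced, or the query hit a replica that had not yet applied the soft commit.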

Lucene nrt

2015-07-20 Thread Bhawna Asnani
Hi, I am using Solr to update a document and read it back immediately through search. I do a softCommit of my changes, which claims to open Lucene's IndexReader from the IndexWriter that was used to write the document. But there are times when I get a stale document back, even with waitSearcher=true. D

Lucene 5.2.0 global ordinal based query time join on multiple indexes

2015-07-20 Thread Alex Pang
Hi, Does the Global Ordinal based query time join support joining on multiple indexes? From my testing on 2 indexes with a common join field, the document ids I get back from the ScoreDoc[] when searching are incorrect, though the number of results is the same as if I use the older join quer
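The global-ordinal join expects a single top-level reader, so joining two physical indexes usually means wrapping both in a MultiReader and building the OrdinalMap across all of its leaves. The sketch below is an assumption-laden illustration (field name `join_field`, `fromQuery`/`toQuery`, and the exact `JoinUtil`/`OrdinalMap` signatures should be verified against your Lucene 5.2 javadocs), not a confirmed fix. One possible explanation for the "incorrect" ids is that the returned ScoreDoc docs are relative to the top-level MultiReader, not to either sub-index:

```java
import java.util.List;
import org.apache.lucene.index.*;
import org.apache.lucene.search.*;
import org.apache.lucene.search.join.JoinUtil;
import org.apache.lucene.search.join.ScoreMode;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.packed.PackedInts;

public class GlobalOrdJoinSketch {
    // dir1/dir2, fromQuery/toQuery are placeholders supplied by the caller
    static TopDocs join(Directory dir1, Directory dir2,
                        Query fromQuery, Query toQuery) throws Exception {
        // one top-level reader over both indexes
        MultiReader top = new MultiReader(
            DirectoryReader.open(dir1), DirectoryReader.open(dir2));
        IndexSearcher searcher = new IndexSearcher(top);

        // build the global ordinal map over every leaf of the MultiReader
        List<LeafReaderContext> leaves = top.leaves();
        SortedDocValues[] values = new SortedDocValues[leaves.size()];
        for (int i = 0; i < leaves.size(); i++) {
            values[i] = DocValues.getSorted(leaves.get(i).reader(), "join_field");
        }
        MultiDocValues.OrdinalMap ordinalMap = MultiDocValues.OrdinalMap.build(
            top.getCoreCacheKey(), values, PackedInts.DEFAULT);

        Query joinQuery = JoinUtil.createJoinQuery(
            "join_field", fromQuery, toQuery, searcher, ScoreMode.Max, ordinalMap);
        // note: returned doc ids are top-level (MultiReader) doc ids
        return searcher.search(joinQuery, 10);
    }
}
```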

Re: StandardTokenizer#setMaxTokenLength

2015-07-20 Thread Steve Rowe
Hi Piotr, The behavior you mention is an intentional change from the behavior in Lucene 4.9.0 and earlier, when tokens longer than maxTokenLength were silently ignored: see LUCENE-5897[1] and LUCENE-5400[2]. The new behavior is as follows: Token matching rules are no longer allowed to match aga
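The change Steve describes (over-long tokens are now split rather than silently dropped) can be observed with a small sketch. This uses the Lucene 5.x no-arg constructor plus setReader; the input string and length limit are illustrative, and the exact token boundaries depend on the tokenizer's rules, so no output is asserted here:

```java
import java.io.StringReader;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class MaxTokenLengthDemo {
    public static void main(String[] args) throws Exception {
        StandardTokenizer tok = new StandardTokenizer();
        tok.setMaxTokenLength(5);  // tokens longer than 5 chars are split, not dropped
        tok.setReader(new StringReader("extraordinary day"));
        CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
        tok.reset();
        while (tok.incrementToken()) {
            // prints each emitted token; "extraordinary" comes back in pieces
            System.out.println(term.toString());
        }
        tok.end();
        tok.close();
    }
}
```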

Re: StandardTokenizer#setMaxTokenLength

2015-07-20 Thread Piotr Idzikowski
Hello. Btw, I think ClassicAnalyzer has the same problem. Regards On Fri, Jul 17, 2015 at 4:40 PM, Steve Rowe wrote: > Hi Piotr, > > Thanks for reporting! > > See https://issues.apache.org/jira/browse/LUCENE-6682 > > Steve > www.lucidworks.com > > > On Jul 16, 2015, at 4:47 AM, Piotr Idzikowski

Re: StandardTokenizer#setMaxTokenLength

2015-07-20 Thread Piotr Idzikowski
I should add that this is Lucene 4.10.4, but I have checked it on the 5.2.1 version and I got the same result. Regards Piotr On Mon, Jul 20, 2015 at 9:44 AM, Piotr Idzikowski wrote: > Hello Steve, > It is always a pleasure to help you develop such a great lib. > Talking about StandardTokenize

Re: StandardTokenizer#setMaxTokenLength

2015-07-20 Thread Piotr Idzikowski
Hello Steve, It is always a pleasure to help you develop such a great lib. Talking about StandardTokenizer and setMaxTokenLength, I think I have found another problem. It looks like when the word is longer than the max length, the analyzer adds two tokens -> word.substring(0,maxLength) and word.substring(maxL
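The splitting Piotr reports can be stated as a tiny plain-Java sketch. This is a hypothetical helper mirroring the behavior described in the email, not Lucene code, and the method name is invented for illustration:

```java
public class MaxLengthSplitDemo {
    // Mirrors the reported behavior: a token longer than maxLength
    // comes back as two tokens, the prefix and the remainder.
    public static String[] split(String word, int maxLength) {
        if (word.length() <= maxLength) {
            return new String[] { word };
        }
        return new String[] {
            word.substring(0, maxLength),  // first token: the prefix
            word.substring(maxLength)      // second token: the remainder
        };
    }

    public static void main(String[] args) {
        for (String t : split("extraordinary", 5)) {
            System.out.println(t);  // prints "extra" then "ordinary"
        }
    }
}
```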