While using the Lucene WordNet package, we found that the Syns2Index program
indexes the synsets incorrectly. For example, looking up the synsets for the
word "king", we get:
java SynLookup wnindex king
baron
magnate
mogul
power
queen
rex
scrofula
struma
tycoon
Here, "scrofula" and "struma" are extra
Hi, it will work because it will also decompound "Rindfleisch" into Rind and
fleisch, with posIncr=0.
So if you index Rindfleischüberwachungsgesetz and then query with "Rindfleisch",
it matches because Rindfleisch also gets decompounded into Rind and fleisch.
On Tue, Oct 20, 2009 at 8:35 PM, Benjamin Do wrote:
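A small sketch of the stacking behavior described in the reply above, using
contrib's DictionaryCompoundWordTokenFilter; the four-entry dictionary is made
up for illustration, and a real setup would load a proper word list.

import java.io.StringReader;

import org.apache.lucene.analysis.LowerCaseTokenizer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

public class DecompoundDemo {
  public static void main(String[] args) throws Exception {
    // Hypothetical mini-dictionary; a real one would come from a word list.
    String[] dict = { "rind", "fleisch", "überwachung", "gesetz" };
    TokenStream ts = new DictionaryCompoundWordTokenFilter(
        new LowerCaseTokenizer(new StringReader("Rindfleischüberwachungsgesetz")),
        dict);
    TermAttribute term = ts.addAttribute(TermAttribute.class);
    PositionIncrementAttribute posIncr =
        ts.addAttribute(PositionIncrementAttribute.class);
    while (ts.incrementToken()) {
      // Subword tokens are stacked on the original token with posIncr=0,
      // which is why query-side decompounding lines up with the index side.
      System.out.println(term.term()
          + " (posIncr=" + posIncr.getPositionIncrement() + ")");
    }
    ts.close();
  }
}

Because the subwords carry posIncr=0, they occupy the same position as the
compound token, so a decompounded query term lands on the same position as the
indexed subwords.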
Hello,
I've found a number of posts in different places talking about how to perform
decompounding, but I haven't found too many discussing how to use the results
of decompounding. If anyone can answer this question or point me to an existing
discussion it would be very helpful.
In the descrip
> From: Uwe Schindler [mailto:u...@thetaphi.de]
> TokenStream.close() is called (and was always called
> before, too) when the tokenization is done, to close the
> Reader. The call to reset(Reader) is the same as creating a
> new instance (only that the cost of creating a new instance
>
Aha, my bad - I looked on ViewVC at the 2.9.0 *tag*, not the 2.9 *branch*, and
LUCENE-1955 emails went in one speaker and out the other.
Steve
> -----Original Message-----
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Tuesday, October 20, 2009 6:20 PM
> To: java-user@lucene.apache.org
That update to the Hits javadoc didn't make 2.9.0, but will be in
2.9.1 (it's committed to the 2.9.x branch now).
Mike
On Tue, Oct 20, 2009 at 6:00 PM, Steven A Rowe wrote:
> Hi Yonik,
>
> Hmm, in what version of Hits do you see this updated javadoc? In the 2.9.0
> version, the only change in the Hits javadoc from the 2.4.1 version in this
> section is that it refers to TopScoreDocCollector instead of TopDocCollector:
Hi Yonik,
Hmm, in what version of Hits do you see this updated javadoc? In the 2.9.0
version, the only change in the Hits javadoc from the 2.4.1 version in this
section is that it refers to TopScoreDocCollector instead of TopDocCollector:
http://lucene.apache.org/java/2_9_0/api/core/org/apache
Hmm, yes, I should have thought of quoting the javadoc :-)
The Hits javadoc has been updated though... we shouldn't be pushing
people toward collectors unless they really need them:
* TopDocs topDocs = searcher.search(query, numHits);
* ScoreDoc[] hits = topDocs.scoreDocs;
* for (int i = 0; i < hits.length; i++) {
*   int docId = hits[i].doc;
*   Document d = searcher.doc(docId);
*   // do something with current hit
* }
Hi Nathan,
On 10/20/2009 at 5:03 PM, Nathan Howard wrote:
> This is sort of related to the above question, but I'm trying to update
> some (now deprecated) Java/Lucene code that I became aware of once we
> started using 2.4.1 (we were previously using 2.3.2):
>
> Hits results = multiSearcher.s
On Tue, Oct 20, 2009 at 5:03 PM, Nathan Howard wrote:
> This is sort of related to the above question, but I'm trying to update some
> (now deprecated) Java/Lucene code that I became aware of once we started
> using 2.4.1 (we were previously using 2.3.2):
>
> Hits results = multiSearcher.search
This is sort of related to the above question, but I'm trying to update some
(now deprecated) Java/Lucene code that I became aware of once we started
using 2.4.1 (we were previously using 2.3.2):
Hits results = multiSearcher.search(query);
int start = currentPage * resultsPerPage;
int stop = start + resultsPerPage;
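Since Hits is deprecated in 2.9, the usual replacement for the pagination
pattern above is TopDocs. Here is a sketch under the assumption that
currentPage and resultsPerPage mean what the snippet suggests (printPage is a
hypothetical helper; MultiSearcher extends Searcher, so it can be passed in
directly).

import org.apache.lucene.document.Document;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Searcher;
import org.apache.lucene.search.TopDocs;

public class PageThroughResults {
  // Fetch enough hits to cover the requested page, then walk only
  // that page's slice of scoreDocs.
  static void printPage(Searcher searcher, Query query,
                        int currentPage, int resultsPerPage) throws Exception {
    int start = currentPage * resultsPerPage;
    TopDocs topDocs = searcher.search(query, start + resultsPerPage);
    int stop = Math.min(start + resultsPerPage, topDocs.totalHits);
    for (int i = start; i < stop; i++) {
      ScoreDoc sd = topDocs.scoreDocs[i];
      Document doc = searcher.doc(sd.doc);
      System.out.println(doc); // render the hit however the app needs
    }
  }
}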
TokenStream.close() is called (and was always called before, too) when the
tokenization is done, to close the Reader. The call to reset(Reader) is the
same as creating a new instance (only that the cost of creating a new
instance is not needed).
The change in Solr 1.4 is that now TokenStreams are reused.
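In practice, a Tokenizer that opts into this reuse must restore all of its
per-document state in reset(Reader) and tolerate close() being called between
documents. A rough skeleton, with a made-up "done" flag standing in for real
state:

import java.io.IOException;
import java.io.Reader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

// Sketch of a reuse-friendly Tokenizer: close() may now be called after
// every document, and reset(Reader) must make the instance as good as new.
public class MyTokenizer extends Tokenizer {
  private final TermAttribute termAtt = addAttribute(TermAttribute.class);
  private boolean done = false; // stand-in for real per-document state

  public MyTokenizer(Reader input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (done) {
      return false;
    }
    done = true;
    // ... real code would read from this.input and fill termAtt ...
    return false;
  }

  @Override
  public void reset(Reader input) throws IOException {
    super.reset(input); // installs the new Reader
    done = false;       // re-initialize every bit of per-document state here
  }
}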
2009/10/20 Teruhiko Kurosaka :
> My Tokenizer started showing an error when I switched
> to Solr 1.4 dev version. I am not too confident but
> it seems that Solr 1.4 calls close() on my Tokenizer
> before calling reset(Reader) in order to reuse
> the Tokenizer. That is, close() is called more than once.
Hi,
My Tokenizer started showing an error when I switched
to Solr 1.4 dev version. I am not too confident but
it seems that Solr 1.4 calls close() on my Tokenizer
before calling reset(Reader) in order to reuse
the Tokenizer. That is, close() is called more than
once.
The API doc of close() reads "Releases resources associated with this stream."
Hi,
We are using Lucene 1.4.3; sometimes we encounter an error when creating a
Searcher object, with IOException: "Already closed".
I searched the Lucene message archive but did not see a conclusive answer;
any help would be much appreciated.
Best regards, Lisheng
---
You have to reindex everything; the field cannot be converted. Updating
single fields is not possible (maybe in the future, see the wiki).
If you use the field cache (sorting against numeric fields), it is very
important not to have any legacy terms in the index, so it is best to start
with an empty index.
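A sketch of what the rebuilt index might look like on both sides, assuming a
hypothetical long-valued "price" field; the point is that, after the full
reindex, range queries and sorting both run against the numeric terms.

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TopDocs;

public class NumericMigration {
  // Rebuild every document with the numeric form of the field
  // ("price" is a hypothetical field name).
  static void addDoc(IndexWriter writer, long price) throws Exception {
    Document doc = new Document();
    doc.add(new NumericField("price", Field.Store.NO, true).setLongValue(price));
    writer.addDocument(doc);
  }

  // Once the whole index is rebuilt, range search and sort both use the
  // numeric terms; mixing in old string terms would break the field cache.
  static TopDocs findInRange(IndexSearcher searcher, long min, long max)
      throws Exception {
    Query q = NumericRangeQuery.newLongRange("price", min, max, true, true);
    Sort byPrice = new Sort(new SortField("price", SortField.LONG));
    return searcher.search(q, null, 10, byPrice);
  }
}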
We have an index with a number field indexed as a String field. We do
range searches as well as sorting on this field. Now we want to take
advantage of NumericField. The question is, will I have to re-index
all the documents, or will just adding new documents with NumericField
be enough to keep range searches and sorting working?