Hi,
I am not able to add synonyms to the Lucene index.
I condensed my problem into the following class, which is based on a Hello World
example.
The idea behind the code was to add a document containing 'universität' together with
the synonym 'Hochschule' (a German term for a university-level institution),
so that Lucene finds 'universität' when I query for 'Hochschule'.
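For reference, a minimal sketch of one way to wire such a mapping into an analyzer
with SynonymMap and SynonymFilter (the class name, the 4.10-style createComponents
signature, and the choice to expand in both directions are assumptions, not taken
from the original class):

import java.io.IOException;
import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.synonym.SynonymFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.util.CharsRef;

// Hypothetical analyzer that treats "universität" and "hochschule" as synonyms.
public class SynonymDemoAnalyzer extends Analyzer {

    private final SynonymMap synonyms;

    public SynonymDemoAnalyzer() throws IOException {
        SynonymMap.Builder builder = new SynonymMap.Builder(true);
        // includeOrig = true keeps the original token next to the injected synonym.
        builder.add(new CharsRef("universität"), new CharsRef("hochschule"), true);
        builder.add(new CharsRef("hochschule"), new CharsRef("universität"), true);
        this.synonyms = builder.build();
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        WhitespaceTokenizer source = new WhitespaceTokenizer(reader);
        // ignoreCase = true so "Hochschule" still matches the lowercase map entry.
        TokenStream result = new SynonymFilter(source, synonyms, true);
        return new TokenStreamComponents(source, result);
    }
}

The crucial detail is that the same analyzer (or at least the same synonym expansion)
is applied on the indexing side and on the query side; otherwise the query term never
meets the injected synonym.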
The Points data structures are completely different and distinct
from the Term Index structures used by LegacyNumeric fields -- just having
the backwards codec (or using merges to convert indexes to the new index
format) isn't enough -- you have to reindex.
-Hoss
http://www.lucidworks.com/
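To make the reindexing concrete, a rough sketch using the Point classes from Lucene 6
(the field name "price", the int type, and the helper method names are illustrative
assumptions):

import java.io.IOException;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.Query;

public class PointsMigrationSketch {

    // Index the numeric value as an IntPoint instead of a LegacyIntField.
    static void addPriceDoc(IndexWriter writer, int price) throws IOException {
        Document doc = new Document();
        doc.add(new IntPoint("price", price));     // Points structure, used for range/exact queries
        doc.add(new StoredField("price", price));  // only needed if the value must be retrievable
        writer.addDocument(doc);
    }

    // Query-side replacement for a LegacyNumericRangeQuery over an int field.
    static Query priceRange(int min, int max) {
        return IntPoint.newRangeQuery("price", min, max);
    }
}

In other words, the documents have to be added again with the new field types;
upgrading the index format alone does not create any Points data.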
Hi, I hope someone can help me.
I have a project which uses Lucene, and I have been upgrading it from
version 4.10.4 to 6.6.0, so I upgraded my indexes, which were created with
4.10.0, to version 5.0.0 from the terminal, as the migration guide
instructs.
Due to the upgrade I changed LegacyNumericRange
hariram,
Until Lucene 6.2, there was no way for the classic query parser to *not* first
split on whitespace before sending text to the analyzer. As a result, filters
like ShingleFilter that operate on multiple tokens will only see one token at a
time; in your example: first “cup” as the full t
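Since Lucene 6.2, the classic QueryParser can be told not to pre-split on whitespace
via setSplitOnWhitespace(false), so a multi-token filter such as ShingleFilter receives
the whole query text in one pass. A minimal sketch (the field name "content" is an
illustrative assumption):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class ShingleQueryParsing {

    // Parse the query text without splitting it on whitespace first,
    // so the shingle-producing analyzer sees the full text at once.
    static Query parse(Analyzer shingleAnalyzer, String queryText) throws ParseException {
        QueryParser parser = new QueryParser("content", shingleAnalyzer);
        parser.setSplitOnWhitespace(false);  // available since Lucene 6.2
        return parser.parse(queryText);
    }
}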
Hi Steve,
I'm sorry. That's also CustomAnalyzer.
> public class CustomAnalyzer extends Analyzer {
> @Override
> protected Analyzer.TokenStreamComponents createComponents(final String
> fieldName, final Reader reader) {
> final WhitespaceTokenizer src = new WhitespaceTokenizer(get
Hi hariram,
There may be other problems, but at a minimum you have two different analysis
classes here. You’re printing the output stream from one
(CustomSynynymAnalyzer, the source of which is not shown in your email), but
constructing a query from a different one (CustomAnalyzer).
--
Steve
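A minimal sketch of keeping the two sides consistent, written against the Lucene 6.x
constructors (directory choice, field name, and setup are illustrative):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class ConsistentAnalysis {

    // Use the same Analyzer instance for indexing and for query parsing,
    // so both sides produce identical token streams.
    static Query indexAndParse(Analyzer analyzer, String queryText) throws Exception {
        Directory dir = new RAMDirectory();
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            // ... add documents here with writer.addDocument(...) ...
        }
        QueryParser parser = new QueryParser("content", analyzer);  // same analyzer again
        return parser.parse(queryText);
    }
}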
I'm using Lucene 4.10.4 and trying to construct shingles (combinations of
tokens).
Code:
public class CustomAnalyzer extends Analyzer {
@Override
protected Analyzer.TokenStreamComponents createComponents(final String
fieldName, final Reader reader) {
final WhitespaceTokenizer src
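In case it helps, one plausible way to finish such an analyzer for shingling on 4.10.x
(the class name, the shingle sizes, and the use of ShingleFilter are assumptions, since
the rest of the original class is cut off above):

import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.shingle.ShingleFilter;

public class ShingleAnalyzerSketch extends Analyzer {

    @Override
    protected TokenStreamComponents createComponents(final String fieldName, final Reader reader) {
        final WhitespaceTokenizer source = new WhitespaceTokenizer(reader);
        // Emit 2- and 3-word shingles in addition to the single tokens.
        final TokenStream shingles = new ShingleFilter(source, 2, 3);
        return new TokenStreamComponents(source, shingles);
    }
}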