What kind of failures do you get? And I'm confused by the code. Are
you creating a new IndexWriter every time? Do you ever close it?
It'd help to see the surrounding code...
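For what it's worth, the usual pattern is to open one IndexWriter up front,
reuse it for every document, and close it exactly once when indexing is
finished. Here is a minimal sketch of that lifecycle (assuming a Lucene
2.4-style API; the class name, index path, and "docs" list are made up):

import java.io.IOException;
import java.util.List;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class IndexerSketch {
    public static void buildIndex(List<Document> docs) throws IOException {
        Directory dir = FSDirectory.getDirectory("/path/to/index");  // made-up path
        // open the writer once, up front
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true,
                new IndexWriter.MaxFieldLength(100));
        try {
            // reuse the same writer for every document
            for (Document doc : docs) {
                writer.addDocument(doc);
            }
        } finally {
            // close it exactly once, after all documents have been added
            writer.close();
        }
    }
}

If a new writer is created per document and never closed, the previous
writer's write.lock can block the next one, which is a common source of
intermittent failures.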
Best
Erick
On Sat, Mar 28, 2009 at 1:36 PM, Raymond Balmès wrote:
> Hi guys,
>
> I'm using a SinkTokenizer to collect some terms of the documents while
> doing the main document indexing.
Hi guys,
I'm using a SinkTokenizer to collect some terms of the documents while doing
the main document indexing. I attached it to a specific field (tokenized,
indexed).

writer = new IndexWriter(index, my_analyzer, create,
                         new IndexWriter.MaxFieldLength(100));
doc.add(new Field("cont
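The snippet above is cut off in the digest, so for context here is only a
rough sketch of the Tee/Sink pattern as it existed around Lucene 2.4; the
class name, field names ("content", "content_sink"), and analyzer are
placeholders and may not match the original code.

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.SinkTokenizer;
import org.apache.lucene.analysis.TeeTokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class TeeSinkSketch {
    // adds one document whose tokens are also collected by a sink field
    public static void addWithSink(IndexWriter writer, String text) throws IOException {
        // the sink collects every token that flows through the tee
        SinkTokenizer sink = new SinkTokenizer();
        TokenStream main = new TeeTokenFilter(
                new StandardAnalyzer().tokenStream("content", new StringReader(text)),
                sink);

        Document doc = new Document();
        // the tee'd stream has to be consumed before the sink; per the
        // TeeTokenFilter javadoc, give the main field a name that sorts
        // before the sink field's name
        doc.add(new Field("content", main));
        doc.add(new Field("content_sink", sink));

        writer.addDocument(doc);
    }
}

Each call builds a fresh sink, so tokens from one document don't bleed into
the next.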
Hallo Timon,
Lucene 1.4.3 is now many years old and you should really use the new
version. The German analyzers, stemmers, and so on are in a contrib
package (see the contrib subdirectory of the binary download); choose the
correct JAR files and add them to your classpath. It is also recommended
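For illustration (a sketch only; the version, JAR names, and index path
below are just examples, assuming a 2.4.x download): once lucene-core and
the contrib analyzers JAR are on the classpath, the German analyzer can be
used like any other analyzer.

// compile/run with, e.g., lucene-core-2.4.1.jar and lucene-analyzers-2.4.1.jar
// (from contrib/analyzers) on the classpath
import java.io.IOException;
import org.apache.lucene.analysis.de.GermanAnalyzer;   // lives in the contrib analyzers JAR
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class GermanIndexingSketch {
    public static void main(String[] args) throws IOException {
        Directory dir = FSDirectory.getDirectory("/tmp/de-index");   // example path
        IndexWriter writer = new IndexWriter(dir, new GermanAnalyzer(), true,
                IndexWriter.MaxFieldLength.LIMITED);

        Document doc = new Document();
        doc.add(new Field("text", "Die Häuser wurden gebaut",
                Field.Store.YES, Field.Index.ANALYZED));   // tokens get German-stemmed
        writer.addDocument(doc);
        writer.close();
    }
}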
On Mar 28, 2009, at 5:01 AM, Timon Roth wrote:
hello luceners
i have installed lucene on my debian linux (testing) system, so there is the
jarfile
lucene-1.4.3.jar under /usr/share/java.
so far so good. there is a german stemmer and a german analyzer in it
under
org.apache.lucene.analysis.de which work pretty well.
hello luceners
i have installed lucene on my debian linux (testing) system, so there is the
jarfile lucene-1.4.3.jar under /usr/share/java.
so far so good. there is a german stemmer and a german analyzer in it under
org.apache.lucene.analysis.de which work pretty well.
but the official release, e.g. from