On Dec 29, 2008, at 11:25 AM, Girish Naik wrote:
Thanks Grant I will check this out.
BTW, as far as Lucene version is concerned I had checked out the svn
of lucene and created a build its version says as 2.9 :) . And Luke
is of version 0.9.1
You will need to plug in your own Lucene jar
Regards,
On Dec 29, 2008, at 9:59 AM, Girish Naik wrote:
FIELD_BODY is defined as

    public static final String FIELD_BODY = "AVS_FIELD_BODY";

and it is indexed as

    ParsedDoc webdoc = ParsedDoc.getDoc(page);
    ...
    document.add(new Field(Constants.FIELD_BODY, webdoc.getContents(),
        Field.Store.NO, Field.Index.[rest of line truncated]
What does the FIELD_BODY look like? Your search is apparently going
against that Field, but you don't show how it is indexed.
Have you looked at your index in Luke yet? http://www.getopt.org/luke?
On Dec 29, 2008, at 8:19 AM, Girish Naik wrote:
Sorry for that,
Here is how the Analyzer is Selected:
    public static Analyzer getAnalyzerInstance(String localeKey) {
        Analyzer analyzer = null;
        if (localeKey == null || localeKey.trim().equals("")) {
            localeKey = AppContext.getSetting("defaultLocale");
            System.out.println[rest of message truncated]
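The snippet above falls back to a configured default locale when no locale key is supplied, then (presumably) looks up an analyzer for that key. A minimal, self-contained sketch of that fallback pattern follows; the locale keys, the DEFAULT_LOCALE constant, and the registry itself are hypothetical stand-ins, not taken from the original code, and it returns analyzer class names as strings so the sketch compiles without the Lucene jars:

```java
import java.util.HashMap;
import java.util.Map;

public class AnalyzerRegistry {
    // Hypothetical mapping from locale key to analyzer class name;
    // the real code presumably instantiates Lucene Analyzer objects instead.
    private static final Map<String, String> ANALYZERS = new HashMap<String, String>();
    private static final String DEFAULT_LOCALE = "en"; // stand-in for AppContext.getSetting("defaultLocale")

    static {
        ANALYZERS.put("en", "org.apache.lucene.analysis.standard.StandardAnalyzer");
        ANALYZERS.put("ar", "org.apache.lucene.analysis.ar.ArabicAnalyzer");
    }

    // Mirrors the fallback in getAnalyzerInstance: a null or blank key
    // is replaced by the configured default locale before the lookup.
    public static String getAnalyzerClass(String localeKey) {
        if (localeKey == null || localeKey.trim().equals("")) {
            localeKey = DEFAULT_LOCALE;
        }
        String className = ANALYZERS.get(localeKey);
        // Unknown locale keys also fall back to the default analyzer.
        return className != null ? className : ANALYZERS.get(DEFAULT_LOCALE);
    }

    public static void main(String[] args) {
        System.out.println(getAnalyzerClass("ar"));
        System.out.println(getAnalyzerClass(null));
    }
}
```

One design point worth checking in the real code: if the lookup for an unknown locale returns null rather than falling back, searches for that locale would silently use no analyzer at all, which could explain mismatched tokens between index and query time.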
Hi Girish,
Can you provide some sample code and info about what isn't working?
All you have said so far is that the Arabic Analyzer doesn't work for
you, but you have said nothing about how you are actually using it.
Are you getting exceptions? Do the tokens not look right? Are no
results returned?
Hi,
I am having a hard time indexing Arabic content and searching it via
Lucene. I have also used the Arabic Analyzer from the Lucene package
but had no luck. I also used a Snowball jar, but it doesn't contain an
Arabic stemmer, so I put the Lucene Arabic stemmer in snowb[rest of
message truncated]
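For context on the Arabic analysis chain being discussed: before any stemming, Lucene's contrib ArabicAnalyzer normalizes the text, unifying alef variants and stripping short-vowel diacritics. The sketch below is a simplified, self-contained illustration of that kind of normalization; it is not the actual Lucene ArabicNormalizer code:

```java
public class ArabicNormalizeSketch {
    // Simplified normalization in the spirit of Lucene contrib's
    // ArabicNormalizer: unify alef variants and drop tashkeel
    // (short-vowel diacritics). Not the actual Lucene implementation.
    public static String normalize(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (c == '\u0622' || c == '\u0623' || c == '\u0625') {
                // alef with madda / hamza above / hamza below -> bare alef
                out.append('\u0627');
            } else if (c >= '\u064B' && c <= '\u0652') {
                // fathatan .. sukun: skip the diacritic entirely
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A name written with hamza and diacritics reduces to its bare letters.
        System.out.println(normalize("\u0623\u064E\u062D\u0652\u0645\u064E\u062F"));
    }
}
```

This is why analyzing with one analyzer at index time and another (or none) at query time usually yields zero hits for Arabic: the normalized index terms never match the un-normalized query terms.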