No idea about the webapp demo app, but are you sure you have all the
required files, like the jar, in the right place?
On Sat, Jun 27, 2009 at 9:50 PM, mayank juneja wrote:
> Hi
>
> I am a new user to Lucene.
>
> I tried running the Lucene web application demo provided with the source. I
> am able
On 1 Jul 2009, at 17:39, k.sayama wrote:
I could verify Token byte offsets
The system outputs
aaa:0:3
bbb:0:3
ccc:4:7
That explains the highlighter behaviour. Clearly BBB is not at
position 0-3 in the String you supplied
String CONTENTS = "AAA :BBB CCC";
Looks like the Tokenizer needs
Hi there,
On Wed, Jul 1, 2009 at 7:52 PM, John Seer wrote:
>
> Hi,
>
> I have docs in my index like:
>
> name: open & close
> name: water fall\
> name: play-end-go
>
> I am using KeywordAnalyzer to index docs and for querying
>
> term: play-end-go
>
> Query qp= new QueryParser("name", new KeywordAn
Hi,
I have docs in my index like:
name: open & close
name: water fall\
name: play-end-go
I am using KeywordAnalyzer to index docs and for querying
term: play-end-go
Query qp = new QueryParser("name", new KeywordAnalyzer()).parse(term);
After doing this I am getting an error about - and if m
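A sketch of two ways around that parser error (Lucene 2.4-era API): since KeywordAnalyzer indexes the whole field value as one token, you can bypass QueryParser with a TermQuery, or escape the parser's special characters (- & \ / and friends) before parsing.

import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

String term = "play-end-go";

// Option 1: skip the parser and match the indexed token directly.
Query q1 = new TermQuery(new Term("name", term));

// Option 2: keep the parser but neutralize its special characters
// first (parse() declares ParseException).
Query q2 = new QueryParser("name", new KeywordAnalyzer())
        .parse(QueryParser.escape(term));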
On Wed, Jul 1, 2009 at 7:27 PM, John Seer wrote:
>
> Hello,
> I am using KeywordAnalyzer for one of the fields and have a problem with it
> when my original term has non-English characters as well as - & \ /.
What are your problems? Can you elaborate a little? :)
> Is there any alternative for
Hello,
I am using KeywordAnalyzer for one of the fields and have a problem with it
when my original term has non-English characters as well as - & \ /.
Is there any alternative to this, or how can I solve the issue with these
characters?
Thanks
I could verify Token byte offsets
The system outputs
aaa:0:3
bbb:0:3
ccc:4:7
the offset is initialized
Is this a problem with the Analyzer, or with the Tokenizer?
- Original Message -
From: "mark harwood"
To:
Sent: Thursday, July 02, 2009 12:55 AM
Subject: Re: Highlighter fails using JapaneseAnaly
>>How should I verify it?
Make sure the Token.startOffset and endOffset properties of Tokens produced by
your TokenStream correctly define the location of Token.termBuffer in the
original text.
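For reference, a minimal sketch of such a check (Lucene 2.4-era TokenStream API; "contents" and CONTENTS are taken from the earlier message, and analyzer stands in for the analyzer under test):

import java.io.StringReader;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;

TokenStream ts = analyzer.tokenStream("contents", new StringReader(CONTENTS));
Token token = new Token();
while ((token = ts.next(token)) != null) {
    // Each line should locate the term inside the original text, e.g.
    // for "AAA :BBB CCC" expect bbb:5:8, not the overlapping bbb:0:3.
    System.out.println(token.term() + ":" + token.startOffset() + ":" + token.endOffset());
}
ts.close();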
- Original Message
From: k.sayama
To: java-user@lucene.apache.org
Sent: Wednesday, 1 Ju
Sorry
I cannot verify the Token byte offsets produced by JapaneseAnalyzer
How should I verify it?
- Original Message -
From: "mark harwood"
To:
Sent: Wednesday, July 01, 2009 11:31 PM
Subject: Re: Highlighter fails using JapaneseAnalyzer
Can you verify the Token byte offsets pro
Hi, I agree that faceting might be the thing that defines this app. The app is
mostly snappy during the daytime since we optimize the index around 7:00 GMT.
However, faceting is never snappy.
We sped things up a whole bunch by creating various "less cardinal"
fields from the originating publishedDate w
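A sketch of that indexing-time trick (Lucene 2.4-era API; field names are hypothetical): store coarse-grained, untokenized terms derived from publishedDate so facet counting iterates over a handful of distinct values instead of raw timestamps.

import java.text.SimpleDateFormat;
import java.util.Date;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

void addDateFacetFields(Document doc, Date publishedDate) {
    // One low-cardinality term per granularity,
    // e.g. "2009", "200907", "20090701".
    doc.add(new Field("publishedYear", new SimpleDateFormat("yyyy").format(publishedDate),
            Field.Store.NO, Field.Index.NOT_ANALYZED));
    doc.add(new Field("publishedMonth", new SimpleDateFormat("yyyyMM").format(publishedDate),
            Field.Store.NO, Field.Index.NOT_ANALYZED));
    doc.add(new Field("publishedDay", new SimpleDateFormat("yyyyMMdd").format(publishedDate),
            Field.Store.NO, Field.Index.NOT_ANALYZED));
}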
Can you verify the Token byte offsets produced by this particular analyzer are
correct?
- Original Message
From: k.sayama
To: java-user@lucene.apache.org
Sent: Wednesday, 1 July, 2009 15:22:37
Subject: Re: Highlighter fails using JapaneseAnalyzer
hi
I verified it by using SimpleAn
hi
I verified it by using SimpleAnalyzer, StandardAnalyzer, and CJKAnalyzer,
but the problem did not happen.
I think the problem is in JapaneseAnalyzer.
Can this problem be solved?
Does the same thing happen when you use SimpleAnalyzer or
StandardAnalyzer?
I have a sneaking suspicion that the
Have you looked at the Hibernate Search declarative filter feature, which
adds some bells and whistles on top of the Lucene filter feature?
Typically you would keep the credential levels in the document and
filter by the user's credential.
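A minimal sketch of that pattern in plain Lucene (2.4-era API; the "credential" field name is hypothetical, and searcher, query, and userLevel stand in for your own objects):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.CachingWrapperFilter;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;

// Restrict every search to documents carrying the user's credential level;
// the caching wrapper keeps the filter's bit set per index reader.
Filter credentialFilter = new CachingWrapperFilter(
        new QueryWrapperFilter(new TermQuery(new Term("credential", userLevel))));
TopDocs hits = searcher.search(query, credentialFilter, 10);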
On Wed, 2009-07-01 at 07:55 -0400, Bryan Doherty wrote:
> C
Hi all
Thanks for your response. I guess when I add or update using the
IndexWriter I need to do the following:
} finally {
    if (IndexReader.isLocked(directory)) {
        IndexReader.unlock(directory);
    }
}
Cheers
Amin
On Wed, Jul 1, 2009 at 11:47 AM, Simon Willnauer <
simon.willna...@goog
Currently I am using Sybase with Hibernate for my database needs. I've been
implementing Hibernate Search (HS) and it works very well. I use Sybase
because of the Row Level Security package, but that protection no longer applies
when using HS because it blindly indexes the data. Is there a way to
simu
On Tue, 2009-06-30 at 22:59 +0200, Marcus Herou wrote:
> The number of concurrent users today is insignificant but once we push
> for the service we will get into trouble... I know that since even one
> simple faceting query (which we will use to display trend graphs) can
> take forever (talking abo
You might want to take care of the write.lock file in the index
directory if your application breaks down. If you do not close the
writer and restart your app you might get a LockObtainFailedException.
simon
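A sketch of the shutdown side of that advice (assumes writer is the app's single, effectively final IndexWriter): close it on normal JVM shutdown so no stale write.lock is left behind; a hard crash still needs the isLocked/unlock check shown earlier in this digest.

// Register once at startup; runs on normal JVM shutdown, not on kill -9.
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        try {
            writer.close();  // flushes pending changes and releases write.lock
        } catch (Exception e) {
            // nothing sensible to do this late; at worst the next start
            // has to clear the stale lock
        }
    }
});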
On Wed, Jul 1, 2009 at 12:39 PM, Ganesh wrote:
> Yes, a single IndexWriter can be maintai
Yes, a single IndexWriter can be maintained in an app, and it can be closed
when the app is shut down.
Regards
Ganesh
- Original Message -
From: "Amin Mohammed-Coleman"
To:
Sent: Wednesday, July 01, 2009 1:27 PM
Subject: IndexWriter
> Hi
>
> This question has probably been asked be
Sorry, yes, this was my fault with the indexing speedups in 2.3
(LUCENE-843): as of 2.3, if any fields have term vectors enabled, the
fields are sorted lexicographically. As of 2.4 (LUCENE-1301,
refactoring the indexing core), that sort happens even without term
vectors.
Hoss, I see you've opened
Hi
This question has probably been asked before, so apologies for asking it
again. Just to confirm: is it OK to use a single IndexWriter in a web
application and only close that single instance on application shutdown? As
the IndexWriter is thread-safe there is no need for any external
synch
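A minimal sketch of that lifecycle (Lucene 2.4-era and Servlet API; the index path and attribute name are hypothetical): open one IndexWriter at startup, share it across request threads, and close it exactly once at shutdown.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class IndexWriterLifecycle implements ServletContextListener {
    private IndexWriter writer;

    public void contextInitialized(ServletContextEvent e) {
        try {
            writer = new IndexWriter(FSDirectory.getDirectory("/path/to/index"),
                    new StandardAnalyzer(), IndexWriter.MaxFieldLength.UNLIMITED);
            // Request threads fetch the shared, thread-safe writer from here.
            e.getServletContext().setAttribute("indexWriter", writer);
        } catch (Exception ex) {
            throw new RuntimeException("could not open index", ex);
        }
    }

    public void contextDestroyed(ServletContextEvent e) {
        try {
            writer.close();  // flushes pending changes and releases write.lock
        } catch (Exception ex) {
            // log and continue; shutdown should not fail here
        }
    }
}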