Hi Steven.
When I access this address, this message appears:
Forbidden
You don't have permission to access /servlets/ProjectHome on this server.
What's the problem?
Thanks.
Steven Rowe wrote:
>
> Mahdi Rahimi wrote:
>> Hi.
>>
>> How can I access JavaCC??
>>
>> Thanks
>
> https://javacc
I see.
I guess those Filters (e.g. PorterStemFilter) that make up the analyzer
are not thread safe or cannot be shared.
Thanks for your quick response!
Jay
Yonik Seeley wrote:
On 6/22/07, Jiye Yu <[EMAIL PROTECTED]> wrote:
I guess an Analyzer (built-in ones such as StandardAnalyzer,
PorterStemAnalyzer, etc.) is not thread safe.
Analyzers *are* thread-safe.
Multiple threads can all call analyzer.tokenStream() without any
synchronization.
-Yonik
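A minimal sketch of the sharing Yonik describes (the class name, field name, and text are made up for illustration): one Analyzer instance, many threads, no locks.

import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class SharedAnalyzerDemo {
    // One shared instance; no synchronization around tokenStream().
    private static final Analyzer ANALYZER = new StandardAnalyzer();

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        // Each call returns a fresh TokenStream, so threads don't interfere.
                        TokenStream ts = ANALYZER.tokenStream("body",
                            new StringReader("Analyzers are thread safe"));
                        for (Token t = ts.next(); t != null; t = ts.next()) {
                            System.out.println(t.termText());
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }
    }
}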
Hi,
I guess an Analyzer (built-in ones such as StandardAnalyzer,
PorterStemAnalyzer, etc.) is not thread safe. But I wonder if it's OK
to share the same analyzer object within a thread. For example, if I
want to create a PerFieldAnalyzer for 5 fields, can I use the same
Analyzer object for a
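A sketch of the sharing being asked about, using the stock PerFieldAnalyzerWrapper (the field names are made up); reusing one Analyzer object for several fields is fine:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class SharedPerFieldSetup {
    public static Analyzer build() {
        // One analyzer instance shared by all fields.
        Analyzer shared = new StandardAnalyzer();
        PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(shared);
        wrapper.addAnalyzer("title", shared);
        wrapper.addAnalyzer("body", shared);
        wrapper.addAnalyzer("keywords", shared);
        // Unlisted fields fall back to the default, which is also 'shared'.
        return wrapper;
    }
}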
Actually this issue should be closed. I will close it.
As far as I know, Lucene should now work over NFS, except you will
have to make a custom deletion policy that works for your application.
Lucene had issues with NFS in three areas: locking, stale client-side
file caches, and how NFS handles deletion of files that are still open.
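A minimal sketch of such a custom deletion policy (the class name and the keep-count are illustrative, not from this thread): instead of removing a commit as soon as it is replaced, keep the last few commits around so readers on other NFS clients have time to finish with them.

import java.util.List;
import org.apache.lucene.index.IndexCommitPoint;
import org.apache.lucene.index.IndexDeletionPolicy;

// Keeps the newest N commits instead of only the latest one.
public class KeepLastNDeletionPolicy implements IndexDeletionPolicy {
    private final int numToKeep;

    public KeepLastNDeletionPolicy(int numToKeep) {
        this.numToKeep = numToKeep;
    }

    public void onInit(List commits) {
        onCommit(commits);
    }

    public void onCommit(List commits) {
        // Commits are passed oldest first; delete all but the newest numToKeep.
        for (int i = 0; i < commits.size() - numToKeep; i++) {
            ((IndexCommitPoint) commits.get(i)).delete();
        }
    }
}

An instance would be passed to one of the IndexWriter constructors that accepts an IndexDeletionPolicy.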
http://issues.apache.org/jira/browse/LUCENE-673
This says the NFS mount problem is still open, is this the case?
Has anyone been able to deal with this adequately?
Hi Rob,
Robert Walpole wrote:
> At the moment I am attempting to do this as follows...
>
> analyzer = new PorterStemAnalyzer();
> parser = new QueryParser("content", analyzer);
> Query query = parser.parse("keywords: relaxing");
> Hits hits = idxSearcher.search(query);
>
> ...but this is not ret
Yes, you should also stem the query terms. Otherwise, you'll have
indexed "working" as "work", but your search for "working" will look
for "working" and won't match. Which is not what you want, I'm sure.
Query.toString() will tell you a lot about how queries are
processed, BTW
In general, un
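A sketch of the suggested fix: reuse the same stemming analyzer at query time that was used at index time, and print Query.toString() to see what the parser produced. PorterStemAnalyzer and idxSearcher are assumed to be the poster's objects from the snippet above.

import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.Query;

// Parse with the stemming analyzer so "relaxing" becomes the stem
// that was actually indexed.
QueryParser parser = new QueryParser("content", new PorterStemAnalyzer());
Query query = parser.parse("keywords:relaxing");
// With stemming applied at parse time this prints something like: keywords:relax
System.out.println(query.toString("content"));
Hits hits = idxSearcher.search(query);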
Hi,
I am using the PorterStemAnalyzer class (attached) to provide stemming
for a Lucene index.
To stem the terms in the index we use the following...
//open an index writer in append mode
IndexWriter idxWriter = new IndexWriter(LUCENE_INDEX_PATH, new PorterStemAnalyzer(), false);
//add the luce
Steve,
I used your idea and it works great for me; once again, thanks. But when
I use Index.NO_NORMS it increases the index size, while Index.TOKENIZED
reduces it. I used the code you gave:
BigInteger _bi = new java
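For reference, a sketch of the two options being compared (field names and values are made up). Index.NO_NORMS indexes the whole value as a single untokenized term and skips the norms, while Index.TOKENIZED runs the value through the analyzer, so which one yields a smaller index depends on the data:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

Document doc = new Document();
// The whole value becomes one term; no norms are stored for this field.
doc.add(new Field("id", "some-value", Field.Store.YES, Field.Index.NO_NORMS));
// The value is analyzed into individual terms.
doc.add(new Field("body", "some longer text", Field.Store.YES, Field.Index.TOKENIZED));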
Hi Li,
Sorry for taking so long to answer your questions.
We came up with splitting our index into smaller units after we realized
that we would have to deal with an index many GB in size. Updating and
optimizing such large files becomes a bottleneck. We partitioned our
index based on when the
Hi,
I am using your tool a lot and it helps me tremendously in analyzing our
different indexes. I highly appreciate the work and effort you put
into this tool.
This is an enhancement suggestion.
It would be great if Luke could remember the following:
1) which Analyzer was last selected. I
Outstanding!
Erick
On 6/22/07, Andrzej Bialecki <[EMAIL PROTECTED]> wrote:
Hi all,
I just released Luke 0.7.1, the Lucene Index Toolbox. As usual, you
can get it here:
http://www.getopt.org/luke/
This minor release is mostly an upgrade to the official Lucene 2.2.0
release JARs.
Hi all,
I just released Luke 0.7.1, the Lucene Index Toolbox. As usual, you
can get it here:
http://www.getopt.org/luke/
This minor release is mostly an upgrade to the official Lucene 2.2.0
release JARs.
The following changes have been made in this release:
* Added a term distributio
Look at Solr's snapshooter script. It uses hard links (via cp -lr ...) to
create index snapshots. You could use those for backups.
Otis
--
Lucene Consulting -- http://lucene-consulting.com/
- Original Message
From: "Rajendranath, Divya" <[EMAIL PROTECTED]>
To: java-user@lucene.apach
Hi,
- Original Message
From: Chris Hostetter <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Saturday, June 16, 2007 3:10:08 AM
Subject: Re: How to Use ParallelReader
: My question is: If I just want to update the small fields in one index
: and do not want to update the large
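For context, a minimal ParallelReader setup matching the thread's subject (the paths are made up): small, frequently rebuilt fields go in one index, large static fields in another, and the two indexes must contain the same documents in the same order.

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.ParallelReader;
import org.apache.lucene.search.IndexSearcher;

// Both indexes must stay in lockstep by document number.
ParallelReader reader = new ParallelReader();
reader.add(IndexReader.open("/indexes/small-fields"));  // rebuilt often
reader.add(IndexReader.open("/indexes/large-fields"));  // rebuilt rarely
IndexSearcher searcher = new IndexSearcher(reader);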
- Original Message
From: Lee Li Bin <[EMAIL PROTECTED]>
I would like to know what indexing and searching performance
on large index files would be like.
OG: It depends ;)
- on your hardware (fast disk? lots of RAM? multi-CPU? multi-core?)
- on the size of data
Regarding point #2, in case none of those work for you for some reason, you
could always try using this:
$ ll analyzers/src/java/org/apache/lucene/analysis/ngram/
total 48
-rw-rw-r-- 1 otis otis 4934 Mar 2 16:32 EdgeNGramTokenFilter.java
-rw-rw-r-- 1 otis otis 4617 Feb 21 15:33 EdgeNGramTokeni
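Usage would look roughly like this sketch; I'm assuming the (TokenStream, Side, minGram, maxGram) constructor, so check it against the source files listed above:

import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter;

// Emits leading-edge n-grams of each token, e.g. "lucene" -> "l", "lu", "luc".
TokenStream ts = new EdgeNGramTokenFilter(
    new WhitespaceTokenizer(new StringReader("lucene")),
    EdgeNGramTokenFilter.Side.FRONT, 1, 3);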
Thanks for the good summary, Mark!
Otis
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simpy -- http://www.simpy.com/ - Tag - Search - Share
- Original Message
From: Mark Miller <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Thursday, June 21, 2007 11:19:22
Hi Steve,
thanks a lot for your reply. It now compresses to 50% of the original
size. Is there any other possibility, using this code, to compress to 80%?
Steve Liles wrote:
>
> Compression aside you could index the "contents" as terms in separate
> fields instead of tokenized text, and disable