// Parse the raw mail file and store its id as a single, untokenized term.
MimeMessage mime = new MimeMessage(null,
        new FileInputStream(item.getMailFile()));
document.add(new Field(FIELD_MAILID,
        item.getMailId().toString(),
        Field.Store.YES, Field.Index.UN_TOKENIZED));
docum
Roger,
Why can't you have one document for every combination of dimension and level?
Add the cube name, id and description as fields to all documents too. It would
be redundant information, but you can live with that, I suppose.
I think you are developing an application to search a cube?
what do
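A minimal sketch of that layout, assuming the Lucene 2.x field API (the field
names and the method parameters here are invented for illustration):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

// One document per (dimension, level) combination; the cube-wide fields are
// repeated on every document even though that is redundant.
void indexLevel(IndexWriter writer, String cubeName, String cubeId,
                String cubeDescription, String dimension, String level)
        throws java.io.IOException {
    Document doc = new Document();
    doc.add(new Field("cubeName", cubeName, Field.Store.YES, Field.Index.UN_TOKENIZED));
    doc.add(new Field("cubeId", cubeId, Field.Store.YES, Field.Index.UN_TOKENIZED));
    doc.add(new Field("cubeDescription", cubeDescription, Field.Store.YES, Field.Index.TOKENIZED));
    doc.add(new Field("dimension", dimension, Field.Store.YES, Field.Index.UN_TOKENIZED));
    doc.add(new Field("level", level, Field.Store.YES, Field.Index.UN_TOKENIZED));
    writer.addDocument(doc);
}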
I'm pretty sure that UN_TOKENIZED really bypasses analysis
entirely. So yes, it's a little confusing that you can specify an
analyzer but then pass a flag that says, in effect, "ignore the
analyzer I *said* I wanted to use".
So, in your example, you *are* running your query through
SimpleAnalyzer,
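For illustration, a tiny sketch of the difference, assuming the 2.x field
constants (field names made up):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

Document doc = new Document();
// TOKENIZED: the text is run through whatever Analyzer the IndexWriter was given.
doc.add(new Field("body", "Quick BROWN Fox", Field.Store.YES, Field.Index.TOKENIZED));
// UN_TOKENIZED: indexed as the single literal term "Quick BROWN Fox";
// the analyzer is never consulted for this field.
doc.add(new Field("id", "Quick BROWN Fox", Field.Store.YES, Field.Index.UN_TOKENIZED));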
As far as I know, the secondary sort really kicks in only when there is a tie
caused by the primary sort, so the secondary sort should not be affecting the
primary sort.
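Concretely, with the 2.x Sort API (index path, field names and the query term
are placeholders; exception handling omitted):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TermQuery;

IndexSearcher searcher = new IndexSearcher("/path/to/index");
Query query = new TermQuery(new Term("contents", "lucene"));
// "lastName" is the primary sort; "firstName" is consulted only to break ties
// between documents that share the same lastName value.
Sort sort = new Sort(new SortField[] {
        new SortField("lastName", SortField.STRING),
        new SortField("firstName", SortField.STRING)
});
Hits hits = searcher.search(query, sort);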
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Tobias Lohr <[EMAIL PROTECTED]>
Hi,
- Original Message
From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Sunday, January 13, 2008 12:41:18 PM
Subject: Max size of index (FSDirectory )
Hi,
is there any maximum size for an index?
OG: There is: doc IDs are currently integers, so the max is Integer.MAX_VALUE (about 2 billion) documents per index.
See IndexWriter.addIndexes().
See org.apache.lucene.misc.IndexMergeTool
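Roughly, the programmatic route could look like this (2.x API; analyzer choice
and paths are placeholders). The contrib IndexMergeTool does much the same from
the command line, taking the target index as its first argument and the source
indexes after it, if I recall its usage correctly.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Create a new index, merge two existing ones into it, then optimize.
IndexWriter writer = new IndexWriter("/path/to/merged", new StandardAnalyzer(), true);
writer.addIndexes(new Directory[] {
        FSDirectory.getDirectory("/path/to/index1"),
        FSDirectory.getDirectory("/path/to/index2")
});
writer.optimize();
writer.close();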
Erick
On Jan 13, 2008 12:10 PM, <[EMAIL PROTECTED]> wrote:
> Hi,
>
> are there any ready to use tools out there which I can use for merging and
> optimizing?
>
> I have seen that Luke can optimize, but not merge?
>
> Or do I have to write my own utility?
Hi,
is there any maximum size for an index?
Are there any recommendations for a useful max size?
I want to index in parallel, so I have to create multiple indexes.
Should I merge them together, or should I leave them as they are and search
them with (Parallel)MultiSearcher?
Thank you.
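If the indexes stay separate, searching them together could look roughly like
this (2.x API; paths, field name and query term are placeholders).
ParallelMultiSearcher takes the same Searchable[] and queries the sub-indexes
in parallel.

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MultiSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Searchable;
import org.apache.lucene.search.TermQuery;

// One IndexSearcher per index, combined behind a single MultiSearcher.
Searchable[] searchables = new Searchable[] {
        new IndexSearcher("/path/to/index1"),
        new IndexSearcher("/path/to/index2")
};
MultiSearcher searcher = new MultiSearcher(searchables);
Query query = new TermQuery(new Term("contents", "lucene"));
Hits hits = searcher.search(query);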
---
On Jan 13, 2008, at 12:08 PM, <[EMAIL PROTECTED]> wrote:
I have some doubts about Analyzer usage. I read that one should always use
the same analyzer for searching and indexing.
Why? How does the Analyzer affect the search process? What is analyzed here
again?
As you surmised, it is becaus
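A quick way to see why the analyzers need to match, assuming SimpleAnalyzer at
index time (field name and query text are made up; exception handling omitted).
SimpleAnalyzer lower-cases terms when indexing, so a query that is not analyzed
the same way searches for a term that was never written to the index:

import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// Analyzed the same way as at index time: "Lucene" becomes the term "lucene" -- this matches.
Query matches = new QueryParser("contents", new SimpleAnalyzer()).parse("Lucene");
// Bypasses analysis: looks for the literal term "Lucene", which was never indexed -- no hits.
Query misses = new TermQuery(new Term("contents", "Lucene"));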
> I think that method was renamed somewhere along the way to
> setMaxBufferedDocs.
>
> However, in 2.3 (to be released in a few weeks), it's better to use
> setRAMBufferSizeMB instead.
>
> For more ideas on speeding up indexing, look here:
>
> http://wiki.apache.org/lucene-java/ImproveIndexingSpeed
Hi,
are there any ready-to-use tools out there which I can use for merging and
optimizing?
I have seen that Luke can optimize, but not merge?
Or do I have to write my own utility?
Thank you
I think that method was renamed somewhere along the way to
setMaxBufferedDocs.
However, in 2.3 (to be released in a few weeks), it's better to use
setRAMBufferSizeMB instead.
For more ideas on speeding up indexing, look here:
http://wiki.apache.org/lucene-java/ImproveIndexingSpeed
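For example (2.3 API; the path, analyzer and buffer sizes are just placeholders):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

IndexWriter writer = new IndexWriter("/path/to/index", new StandardAnalyzer(), true);
// Lucene 2.3 and later: flush the in-memory buffer once it reaches ~48 MB of RAM.
writer.setRAMBufferSizeMB(48.0);
// On 2.2 the closest knob is a document-count threshold instead:
// writer.setMaxBufferedDocs(1000);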
M
Hi,
I have some doubts about Analyzer usage. I read that one should always use
the same analyzer for searching and indexing.
Why? How does the Analyzer affect the search process? What is analyzed here
again?
I have tried this out. I used a SimpleAnalyzer for indexing with
Field.Store.YES and Field.Index.UN_TOKENIZED.
Hi,
http://wiki.apache.org/lucene-java/PainlessIndexing says that I shall use
setMinMergeDocs.
But I cannot find this method in Lucene 2.2.
What is wrong here?
Thank you.
OK, thank you! I will try this out.