Maybe this code can be useful for you:
http://issues.apache.org/bugzilla/show_bug.cgi?id=34629
nicolas
On 8/11/05, John Smith <[EMAIL PROTECTED]> wrote:
> Thank you. That does look like what I want
>
> JS
>
> Eyal <[EMAIL PROTECTED]> wrote:
> Run a search on "Lucene ParallelReader" in google
Thank you. That does look like what I want
JS
Eyal <[EMAIL PROTECTED]> wrote:
Run a search on "Lucene ParallelReader" in google - You'll find something
Doug Cutting wrote that I believe is what you're looking for.
Eyal
> -Original Message-
> From: John Smith [mailto:[EMAIL PROTECTED]
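For reference, a minimal sketch of the ParallelReader approach Eyal points to (this is the class from the Bugzilla entry above; directory paths and the field split are hypothetical, and it assumes a Lucene build that includes ParallelReader on the classpath):

```java
// Sketch: treating two parallel indexes as one logical index with
// Doug Cutting's ParallelReader. Both indexes must contain the same
// documents in the same order; paths here are made up for illustration.
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.ParallelReader;
import org.apache.lucene.search.IndexSearcher;

public class ParallelSketch {
    public static void main(String[] args) throws Exception {
        ParallelReader parallel = new ParallelReader();
        parallel.add(IndexReader.open("/index/static-fields"));   // rarely-changing fields
        parallel.add(IndexReader.open("/index/dynamic-fields"));  // frequently-reindexed fields
        IndexSearcher searcher = new IndexSearcher(parallel);
        // queries against "searcher" now see the union of both
        // indexes' fields for each document
        searcher.close();
    }
}
```

The point of the split is that only the small, frequently-changing index has to be rebuilt when those fields change.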
On 8/11/05, Otis Gospodnetic <[EMAIL PROTECTED]> wrote:
>
> > So I've decided I'm going to simply have empty fields, and that
> > brought up several other questions.
> >
> > First, is there a limit on the number of fields per document?
>
> I don't think so.
>
> > Secondly why are fields in Document implemented with a Vector instead
> > of a HashSet or similar? Wouldn't
Run a search on "Lucene ParallelReader" in google - You'll find something
Doug Cutting wrote that I believe is what you're looking for.
Eyal
> -Original Message-
> From: John Smith [mailto:[EMAIL PROTECTED]
> Sent: Thursday, August 11, 2005 21:12
> To: java-user@lucene.apache.org
>
So I've decided I'm going to simply have empty fields, and that
brought up several other questions.
First, is there a limit on the number of fields per document?
Secondly why are fields in Document implemented with a Vector instead
of a HashSet or similar? Wouldn't retrieval be faster without
iterating?
Hi all
This is a slightly long email. Pardon me.
As Lucene does not allow updating an existing document in the index, the
only option is to delete and reindex the message. When you have too many
updates, this gets a little cumbersome. In our case, the actual content
of the do
Hi everybody,
I have spent several weeks fighting with MultiSearcher and now I have a
question: when do I have to call multisearcher.close()? Do I have to
close all the IndexSearchers included in the MultiSearcher first?
This question comes up because I modify one index (my multisearcher
You can't really _force_ the JVM to perform the GC and clean up the
heap, although you can _suggest_ it via System.gc().
You can try playing with this:
private static long gc()
{
    long freeMemBefore = Runtime.getRuntime().freeMemory();
    System.gc(); // only a suggestion; the JVM may ignore it
    long freeMemAfter = Runtime.getRuntime().freeMemory();
    System.out.println("Free Memory Before: " + freeMemBefore + ", after: " + freeMemAfter);
    return freeMemAfter - freeMemBefore;
}
Please forgive my jumping on this thread, but I have a similar issue. I
have a server process on Linux that creates the java process (java
-Xms256m -Xmx512m -jar Suchmaschine.jar). The problem is that after the
processing is done, the memory is retained. Is there a collection
argument that would
> > Is -Xmx case sensitive? Should it be 1000m instead of 1000M? Not
> > sure.
> >
>
> I'm starting with:
> java -Xms256M -Xmx512M -jar Suchmaschine.jar
And if you look at the size of your JVM, does it really use all 512 MB?
If it does not, maybe you can try this:
java -Xms256m -Xmx512m -jar Suchmaschine.jar
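For the record, the case question only applies to the option name, not the size suffix. In HotSpot, -Xmx itself is case-sensitive, but the k/K, m/M, g/G size suffixes are interchangeable, so both of these command lines request the same heap:

```shell
# Equivalent: the size suffix is case-insensitive in HotSpot
java -Xms256m -Xmx512m -jar Suchmaschine.jar
java -Xms256M -Xmx512M -jar Suchmaschine.jar
```

So the suffix case is unlikely to be the cause of the memory behaviour described here.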
Otis Gospodnetic schrieb:
> Is -Xmx case sensitive? Should it be 1000m instead of 1000M? Not
> sure.
>
I'm starting with:
java -Xms256M -Xmx512M -jar Suchmaschine.jar
--
The analytical engine (the computer) can only carry out what we are
capable of programming. (Ada Lovelace)
Thx stefan:)
On 8/7/05, Stefan Groschupf <[EMAIL PROTECTED]> wrote:
> Hi,
> I run in the same problem some weeks ago as well.
> You can find following in the java doc:
>
> "Note: this value is not stored directly with the document in the
> index. Documents returned from IndexReader.document(int)