Does it work for numeric fields too? I am working with 2.9.0 and the
following code gives extra values:
@Test
public void distinct() throws Exception {
RAMDirectory directory = new RAMDirectory();
IndexWriter writer = new IndexWriter(directory, new WhitespaceAnalyzer(), tr
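For what it's worth, the extra values are expected with NumericField: its trie encoding indexes every value at several precisions, so a raw term walk also returns the lower-precision helper terms. A minimal sketch of filtering them out under 2.9, assuming a long-valued field hypothetically named "value" (int fields would use SHIFT_START_INT instead):

IndexReader reader = IndexReader.open(directory, true);  // read-only
TermEnum terms = reader.terms(new Term("value", ""));
try {
  do {
    Term t = terms.term();
    if (t == null || !"value".equals(t.field())) break;
    // full-precision terms have shift == 0 and start with SHIFT_START_LONG;
    // everything else is a trie helper term, i.e. the "extra" values
    if (t.text().charAt(0) == NumericUtils.SHIFT_START_LONG) {
      System.out.println(NumericUtils.prefixCodedToLong(t.text()));
    }
  } while (terms.next());
} finally {
  terms.close();
  reader.close();
}

Indexing with precisionStep Integer.MAX_VALUE avoids the helper terms entirely, at the cost of slower NumericRangeQuery performance.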
Hi, the following JUnit test fails on 3 of the 6 searches:
@Test
public void indexXML() throws Exception {
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_CURRENT);
RAMDirectory dir = new RAMDirectory();
IndexWriter writer = new IndexWriter(dir, analyz
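Hard to say which three fail without the full test, but the classic culprit is an analysis mismatch: StandardAnalyzer lower-cases and tokenizes at index time, and a hand-built TermQuery bypasses analysis entirely. A sketch of the mismatch, reusing the analyzer above (field name and text are made up):

// the token actually indexed for "Title" is "title"
Query raw = new TermQuery(new Term("name", "Title"));            // 0 hits
QueryParser parser = new QueryParser(Version.LUCENE_CURRENT, "name", analyzer);
Query parsed = parser.parse("Title");                            // analyzed to "title", matches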
Hi, I am jumping into the thread because I have a similar issue.
My index is 30 GB and contains 21M docs.
I was able to stay within 1 GB of RAM on the server for a while. Recently I
started to simulate parallel searches. Just 2 parallel searches would crash
the server with an out-of-memory error
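One thing to rule out before adding RAM: opening a separate IndexSearcher or IndexReader per request. A reader over a 21M-doc index carries large norms and FieldCache arrays, and two private copies can double the footprint. IndexSearcher is thread-safe, so a single shared instance usually suffices; a sketch (names illustrative):

public class SearchService {
  private final IndexSearcher searcher;      // one shared, read-only searcher
  public SearchService(Directory dir) throws IOException {
    this.searcher = new IndexSearcher(dir, true);
  }
  public TopDocs search(Query query, int n) throws IOException {
    return searcher.search(query, n);        // safe to call from many threads
  }
}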
Hi,
I am indexing log4j/logback/JUL logging events. My documents include
regular fields (e.g. logger, message, date, ...) and custom fields that
applications choose to use (e.g. MDC).
I would like to do full text searches on those fields just as I do on
regular fields, I just need to know about th
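A pattern that may fit here (a sketch, all field names hypothetical, assuming loggerName, message, mdc and writer are in scope): give each custom entry its own field, and also fold its text into a catch-all field so full-text queries cover custom fields without knowing their names up front:

Document doc = new Document();
doc.add(new Field("logger", loggerName, Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.add(new Field("message", message, Field.Store.YES, Field.Index.ANALYZED));
StringBuilder all = new StringBuilder(message);
for (Map.Entry<String, String> e : mdc.entrySet()) {       // custom MDC fields
  doc.add(new Field("mdc." + e.getKey(), e.getValue(),
                    Field.Store.YES, Field.Index.ANALYZED));
  all.append(' ').append(e.getValue());
}
// catch-all field: searchable across everything, not stored twice
doc.add(new Field("contents", all.toString(), Field.Store.NO, Field.Index.ANALYZED));
writer.addDocument(doc);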
Hi Otis,
this is 3 GB of heap (-Xmx). I am running on a multi-core 32-bit machine and
I am concerned about the 4 GB limit. CPU is not a problem, however I am
wondering about memory requirements as I will be scaling up. I mostly use
term queries on multiple fields (about 30 fields); so no fuzzy or s
Hi,
is it a good idea (or even possible) to continue writing events to an index
while optimizing it, in two different threads, in the same process, using
the same writer?
thanks,
vince
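For what it's worth, IndexWriter is thread-safe, so this is legal: optimize() can run in one thread while another keeps adding documents through the same writer; documents added while the optimize runs are simply not guaranteed to end up in the merged segment. A minimal sketch:

void optimizeWhileIndexing(final IndexWriter writer) throws Exception {
  Thread optimizer = new Thread(new Runnable() {
    public void run() {
      try {
        writer.optimize();          // runs concurrently with the adds below
        writer.commit();
      } catch (IOException e) {
        e.printStackTrace();
      }
    }
  });
  optimizer.start();
  for (int i = 0; i < 1000; i++) {  // indexing keeps going meanwhile
    Document doc = new Document();
    doc.add(new Field("message", "event " + i, Field.Store.YES, Field.Index.ANALYZED));
    writer.addDocument(doc);
  }
  writer.commit();
  optimizer.join();
}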
Hi, I am using Lucene 2.9.1 to index a continuous flow of events. My server
keeps an index writer open at all times and writes events in groups of a few
hundred followed by a commit. While writing, users invoke my server to
perform searches. Once a day I optimize the index, while writes happen and
> Reopening readers during optimize is fine, if you close the old reader
> each time. It will possibly tie up more transient disk usage than had
> you reopened at the end of optimize, but if you have plenty of disk
> space it shouldn't be a problem.
>
> Mike
>
> On Mon, Nov 23, 2009 at 3
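The reopen-and-close pattern Mike describes, as a sketch; closing the superseded reader is what lets Lucene actually delete the old segment files:

IndexReader refresh(IndexReader reader) throws IOException {
  IndexReader newReader = reader.reopen();   // cheap if nothing changed
  if (newReader != reader) {
    reader.close();       // releases the old segments so their files can go
    reader = newReader;
  }
  return reader;
}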
If a user searches using too many wildcards, the search could take a long
time. Rather than restricting what they can do, I would rather let them
cancel the search gracefully. Would that be feasible?
Thanks,
vincent
Michael McCandless-2 wrote:
>
> On Tue, Nov 24, 2009 at 1:44 AM, vsevel wrote:
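Lucene 2.9 has no user-driven cancel, but TimeLimitingCollector bounds a runaway search by aborting collection once a time budget is spent, which may be close enough to a graceful cancel. A sketch, assuming searcher and query are in scope:

TopScoreDocCollector collector = TopScoreDocCollector.create(10, true);
Collector limited = new TimeLimitingCollector(collector, 5000);  // 5s budget
try {
  searcher.search(query, limited);
} catch (TimeLimitingCollector.TimeExceededException e) {
  // search aborted; whatever was collected so far is still usable
}
TopDocs hits = collector.topDocs();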
So an optimize needs to be followed by a commit or close, like any other write.
Thanks for the help,
vincent
Michael McCandless-2 wrote:
>
> On Tue, Nov 24, 2009 at 9:08 AM, vsevel wrote:
>> Hi, just to make sure I understand correctly... After an optimize,
>> without
>> any reader
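Restated as a sketch (dir is assumed to be the index Directory):

writer.optimize();
writer.commit();     // the optimize is published here, like any other write
IndexReader reader = IndexReader.open(dir, true);  // sees the single merged segment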
Hi, I have done some testing that I would like to share with you.
I am starting my tests with an unoptimized 40 MB index. I have 3 test cases:
1) open a writer, optimize, commit, close
2) open a writer, open a reader from the writer, optimize, commit, close
(sketched below)
3) same as 2) except the reader is opened
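For reference, case 2 looks roughly like this under 2.9 (construction details assumed); the reader taken from the writer pins the pre-optimize segments, which is what makes its disk numbers differ from case 1:

IndexWriter writer = new IndexWriter(dir, analyzer, false, IndexWriter.MaxFieldLength.UNLIMITED);
IndexReader reader = writer.getReader();  // NRT reader pins current segments
writer.optimize();
writer.commit();
writer.close();
// disk space held by the old segments is not freed until:
reader.close();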
> merge, etc.).
>
> Could you try closing your reader, then calling writer.commit() (which
> is a no-op, since you had already committed, but it may tickle the
> writer into attempting the deletions), and see if that frees up disk
> space w/o closing?
>
> Mike
>
> On Fr
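Mike's suggestion, spelled out:

reader.close();
writer.commit();   // no new changes, but it may trigger the pending file deletions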