memory mappings aren't being "released" after being deleted?
Justin
On Fri, May 9, 2025 at 7:03 AM Uwe Schindler wrote:
> Hi,
>
> Did the sharedArenaMaxPermits=64 setting help?
>
> Actually, sorry for my earlier answer; I did not realize that you were talking
> about doc value
r--s 08:10 78838113 /usr/share/opensearch/data/nodes/0/indices/Ci3MyIbNTceUmC67d1IlwQ/119/index/_912j_m9_Lucene90_0.dvd (deleted)
7ed8a275c000-7ed8a275f000 r--s 08:10 78830146 /usr/share/opensearch/data/nodes/0/indices/Ci3MyIbNTceUmC67d1IlwQ/126/index/_9buk_4e_Lucene90_0.dvd (dele
question for the OS
community or the Lucene developer list.
Justin Borromeo
Are there mechanisms to explicitly prevent them from being cached? Is it even
possible with Java?
Thanks,
Justin Borromeo
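There is no supported way to force the kernel to drop page-cache pages from Java. What Lucene can do is unmap a file's buffers once every IndexInput over it is closed; with grouped (shared) arenas, as tuned by the sharedArenaMaxPermits setting above, unmapping additionally waits until the whole group is closed. A minimal sketch, assuming the MMapDirectory API as of Lucene 9.x and a hypothetical index path:

import java.io.IOException;
import java.nio.file.Paths;
import org.apache.lucene.store.MMapDirectory;

public class UnmapCheck {
  public static void main(String[] args) throws IOException {
    // With unmapping enabled (the default where supported), Lucene releases a
    // file's mappings when its last open input is closed; until then the
    // kernel keeps the pages and lists the deleted file in /proc/<pid>/maps.
    MMapDirectory dir = new MMapDirectory(Paths.get("/tmp/index")); // hypothetical path
    System.out.println("unmap supported: " + MMapDirectory.UNMAP_SUPPORTED);
    dir.setUseUnmap(MMapDirectory.UNMAP_SUPPORTED);
    dir.close();
  }
}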
wiki?
Thanks for any help!
Justin
http://wiki.apache.org/lucene-java/LuceneFAQ#Does_Lucene_allow_searching_and_indexing_simultaneously.3F
> -Original Message-
> From: Justin [mailto:cry...@yahoo.com]
> Sent: Monday, October 04, 2010 2:03 PM
> To: java-user@lucene.apache.org
> Subject: Updating documents with fields that aren't stored
>
> Hi all,
Readers, however, do continue to find such terms after updateDocument
has been called. At best, this is confusing. Is this a defect?
Thanks,
Justin
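For anyone following the thread: updateDocument() is an atomic delete-plus-add, and an already-open reader keeps its point-in-time view of the index until it is reopened, which is why the old terms remain visible to it. A sketch of the reopen step, using the 3.x-era API current at the time; the "id" field and the surrounding setup are assumed:

import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

IndexReader refreshAfterUpdate(IndexWriter writer, IndexReader reader, Document doc)
    throws IOException {
  writer.updateDocument(new Term("id", "42"), doc); // delete old version, add new
  writer.commit();
  IndexReader newReader = reader.reopen(); // `reader` itself still sees the old terms
  if (newReader != reader) {
    reader.close();
  }
  return newReader; // only the reopened reader reflects the update
}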
and never get merged through the normal
indexing process.
-Original Message-
From: Anshum [mailto:ansh...@gmail.com]
Sent: Tuesday, August 24, 2010 12:11 AM
To: java-user@lucene.apache.org
Subject: Re: Wanting batch update to avoid high disk usage
Hi Justin,
Lucene does not reclaim space; each up
call it once at the end, though; you are calling the optimize method at the end
anyway, so it should take care of itself. There shouldn't be any difference
(except degradation in performance) in adding a call to expungeDeletes().
--
Anshum Gupta
http://ai-cafe.blogspot.com
On Tue, Aug 24, 2010 at 4:38
for (int i=0; i
http://wiki.apache.org/lucene-java/LuceneFAQ#If_I_decide_not_to_optimize_the_index.2C_when_will_the_deleted_documents_actually_get_deleted.3F
Thanks for your help,
Justin
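A sketch of the calls under discussion, with the 3.x-era API used in this thread (later renamed forceMergeDeletes()/forceMerge(1)); the delete term is hypothetical:

import java.io.IOException;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

void reclaimSpace(IndexWriter writer) throws IOException {
  writer.deleteDocuments(new Term("id", "stale")); // only marks deletes; no space reclaimed yet
  writer.expungeDeletes(); // merges just the segments that carry deletes
  // writer.optimize();    // heavier: rewrites the entire index down to one segment
}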
> make both a stemmed field and an unstemmed field
While this approach is easy and would work, it means increasing the size of the
index and reindexing every document. However, the information is already
available in the existing field, and runtime analysis is certainly faster than
more disk I/O.
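For concreteness, the two-field approach weighed above is usually wired up with a per-field analyzer, roughly like this (3.x-era API; the field names are made up and SnowballAnalyzer lives in contrib):

import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.analysis.snowball.SnowballAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.util.Version;

PerFieldAnalyzerWrapper analyzer = new PerFieldAnalyzerWrapper(new WhitespaceAnalyzer());
analyzer.addAnalyzer("body_stemmed", new SnowballAnalyzer(Version.LUCENE_30, "English"));
// Pass `analyzer` to the IndexWriter, then index the same text into both fields:
Document doc = new Document();
String text = "searching and stemming";
doc.add(new Field("body", text, Field.Store.YES, Field.Index.ANALYZED));
doc.add(new Field("body_stemmed", text, Field.Store.NO, Field.Index.ANALYZED));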
]" may appear any number of
times where PREFIX comes from the set { A, B, C, D, E, ... }.
This complexity is really a tangent to my question, which is about avoiding poor
performance from WildcardQuery.
- Original Message
From: Steven A Rowe
To: "java-user@lucene.apache.org
time may be crazy, I don't know.
- Original Message
From: Steven A Rowe
To: "java-user@lucene.apache.org"
Sent: Fri, July 30, 2010 12:04:58 PM
Subject: RE: InverseWildcardQuery
Hi Justin,
> Unfortunately the suffix requires a wildcard as well in our case. There
>
Sent: Fri, July 30, 2010 11:14:17 AM
Subject: RE: InverseWildcardQuery
Hi Justin,
> [...] "*:* AND -myfield:foo*".
>
> If my document contains "myfield:foobar" and "myfield:dog", the document
> would be thrown out because of the first field. I wan
> indexing your terms in reverse
Unfortunately the suffix requires a wildcard as well in our case. There are a
limited number of prefixes though (10ish), so perhaps we could combine them all
into one query. We'd still need some sort of InverseWildcardQuery
implementation.
> use another analyzer
s that is a way
of inverting the second query.
--
Ian.
On Fri, Jul 30, 2010 at 3:29 PM, Justin wrote:
> Any hints on making something like an InverseWildcardQuery?
>
> We're trying to find all documents that have at least one field that doesn
Any hints on making something like an InverseWildcardQuery?
We're trying to find all documents that have at least one field that doesn't
match the wildcard query.
Or is there a way to inverse any particular query?
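The closest whole-document inversion, the "*:* AND -myfield:foo*" form quoted further up the page, can be built programmatically like this (3.x-era API). Note the caveat from this thread: it drops a document if any value of the field matches, which is not the per-value test being asked for:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.WildcardQuery;

Query inverseOf(String field, String pattern) {
  BooleanQuery q = new BooleanQuery();
  q.add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);   // match everything...
  q.add(new WildcardQuery(new Term(field, pattern)), BooleanClause.Occur.MUST_NOT); // ...minus matches
  return q;
}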
Nevermind, it is blocking...
public void optimize()
    throws CorruptIndexException, IOException {
  optimize(true);
}
- Original Message
From: Justin
To: java-user@lucene.apache.org
Sent: Thu, June 24, 2010 3:56:17 PM
Subject: Re: Problems with homebrew ParallelWriter
So is
From: Justin
To: java-user@lucene.apache.org
Sent: Thu, June 24, 2010 12:12:57 PM
Subject: Re: Problems with homebrew ParallelWriter
Hi Shai,
> Is it synchronized
public synchronized void addDocument(Document document)
    throws CorruptIndexException, IOException {
  Document docume
Hi Mike,
We did use IndexWriter::setInfoStream. Apparently there is a lot to sift
through.
I'll let you know if we make any discoveries useful for others.
Thanks!
Justin
- Original Message
From: Michael McCandless
To: java-user@lucene.apache.org
Sent: Thu, June 24, 2010 4:
return reader;
}
As you can see above, my colleague optimizes the indexes to account for merges
that have occurred out-of-sync.
> if you've made progress, upload another patch?
If we make a revelation with regards to ParallelWriter, I'll be happy to share.
Thanks for giving us s
ter breakpoints around Lucene source (we've got
tr...@926791 to take advantage of latest NRT readers).
Does anyone have ideas on how the indexes would get out of sync? Process
close, committing, optimizing,... they all should work o
ache/lucene/analysis/StopFilter.html>,
using a list of English stop words.
So your case issue is entirely explained. You have no reason at all to expect
stemming to work unless you put a stemmer in the chain....
HTH
Erick
On Thu, Apr 29, 2010 at 5:10 PM, Justin wrote:
> I'm u
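To make Erick's point concrete, "putting a stemmer in the chain" looks roughly like this (a sketch against the 3.x-era Analyzer API; the class name is made up):

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.PorterStemFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

class StemmingAnalyzer extends Analyzer {
  @Override
  public TokenStream tokenStream(String field, Reader reader) {
    TokenStream ts = new WhitespaceTokenizer(reader); // split on whitespace
    ts = new LowerCaseFilter(ts);                     // normalize case
    return new PorterStemFilter(ts);                  // "running" -> "run"
  }
}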
like WhitespaceAnalyzer that doesn't stem or change case. Try
a different analyzer; SimpleAnalyzer comes to mind.
HTH
Erick
On Thu, Apr 29, 2010 at 4:21 PM, Justin wrote:
> I'm trying to use Highlighter with QueryScorer after reading:
>
> https://issues.apache.org/jira/browse/LUCENE
Highlighter highlighter = new Highlighter(new QueryScorer(query, field));
highlighter.setMaxDocCharsToAnalyze(50);
TokenStream ts = htmlStripAnalyzer.tokenStream(field, new StringReader(content));
ts = new CachingTokenFilter(ts);
System.out.println(highlighter.getBestFragment(ts, content));
Thanks for any feedback,
Justin
Wrap the return value of Analyzer.tokenStream() with the filter. Look into
Highlighter's source to see how this is done there.
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From: Justin [mailto:cry...@yahoo.com]
> Sent
HTMLStripReader, HTMLStripCharFilter
To reset this token stream you have to wrap it with a CachingTokenFilter.
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From: Justin [mailto:cry...@yahoo.com]
> Sent: Tuesday, April
, StringReader
}
- Original Message
From: Robert Muir
To: java-user@lucene.apache.org
Sent: Sat, April 24, 2010 9:03:02 AM
Subject: Re: HTMLStripReader, HTMLStripCharFilter
On Fri, Apr 23, 2010 at 4:48 PM, Justin wrote:
> Just out of curiosity, why does LUCENE-1377 have a mi
I'm not sure I follow the
discussion in JIRA, as Solr developers can choose whether or not to use any
class added to Lucene at any time after its addition.
Thanks for any feedback,
Justin
java-user@lucene.apache.org
Sent: Thu, April 8, 2010 4:53:08 PM
Subject: Re: ClosedChannelException from IndexWriter.getReader()
Argh! One more running into this issue.
It still bugs me that NIOFSDirectory struggles so badly if interrupt is used.
simon
On Thu, Apr 8, 2010 at 11:19 PM, Justin wrote:
Warming a searcher in a separate thread is common... but why does
Thread.interrupt() come into play in your app for warming?
Mike
On Thu, Apr 8, 2010 at 4:38 PM, Justin wrote:
> In fact, we are using Thread.interrupt() to warm up a searcher in a separate
> thread (not really that uncommon, is it?). We may
https://issues.apache.org/jira/browse/LUCENE-2239
Try temporarily using a Directory impl other than NIOFSDirectory and
see if the problem still happens?
Mike
On Thu, Apr 8, 2010 at 2:14 PM, Justin wrote:
> I'm getting a ClosedChannelException from IndexWriter.getReader(). I don't
> think the writer has been c
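Background on why interrupt bites here: a thread interrupted while blocked on a java.nio FileChannel gets ClosedByInterruptException, and the channel is closed as a side effect, so other threads sharing it then see ClosedChannelException. Mike's experiment, sketched with the 3.x-era API and a hypothetical path:

import java.io.File;
import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.SimpleFSDirectory;

Directory openIndexDir() throws IOException {
  // RandomAccessFile-based, so an interrupt doesn't close the file
  // out from under the other threads the way FileChannel does.
  return new SimpleFSDirectory(new File("/path/to/index"));
}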
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:423)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:387)
Thanks,
Justin
Should these be explicitly initialized to false?
private boolean fieldSortDoTrackScores;
private boolean fieldSortDoMaxScore;
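For what it's worth, the JLS gives fields default values: a boolean field starts out false whether or not it is initialized explicitly, so the initialization would be redundant (though some style guides prefer it for readability). A standalone illustration:

public class Defaults {
  private boolean fieldSortDoTrackScores; // implicitly false (JLS 4.12.5)

  public static void main(String[] args) {
    System.out.println(new Defaults().fieldSortDoTrackScores); // prints: false
  }
}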
t the following
references, although there is mention that reopen() will forward back to
getReader().
http://lucene.apache.org/java/3_0_1/api/core/org/apache/lucene/index/IndexWriter.html#getReader%28%29
http://wiki.apache.org/lucene-java/NearRealtimeSearch
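The pattern those two links describe, sketched with the era's API; per the wiki, reopen() on a near-real-time reader forwards back to the writer's getReader():

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;

IndexReader refresh(IndexWriter writer, IndexReader reader) throws IOException {
  if (reader == null) {
    return writer.getReader(); // initial near-real-time reader
  }
  IndexReader newReader = reader.reopen(); // cheap; unchanged segments are shared
  if (newReader != reader) {
    reader.close();
  }
  return newReader;
}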
- Original Message
From: J
50,000
documents are added (still under our increased limit). Hopefully our problems
didn't extend beyond leaking file descriptors from omitting the explicit close.
- Original Message
From: Justin
To: java-user@lucene.apache.org
Sent: Thu, March 4, 2010 6:29:25 PM
Subject: Re:
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
> > -Original Message-
> > From: Justin [mailto:cry...@yahoo.com]
> > Sent: Friday, March 05, 2010 12:52 AM
> > To: java-user@lucene.apache.org
> > Subject: File descriptor leak in ParallelReader.reopen()
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From: Justin [mailto:cry...@yahoo.com]
> Sent: Friday, March 05, 2010 1:17 AM
> To: java-user@lucene.apache.org
> Subject: Re: File descriptor leak in ParallelReader.reopen()
>
> Has this chan
Subject: Re: File descriptor leak in ParallelReader.reopen()
On 03/04/2010 06:52 PM, Justin wrote:
> Hi Mike and others,
>
> I have a test case for you (attached) that exhibits a file descriptor leak in
> ParallelReader.reopen(). I listed the OS, JDK, and snapshot of Lucene that
> I'm using in the source code.
>
Let me know if you need help reproducing the problem or can help identify it.
Thanks!
Justin
import java.io.File;
import java.io.IOException;
import java.io.Reader;
import java.util.LinkedList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
There is no clone method and no get methods for the necessary constructor
parameters (e.g., number of documents to collect).
Does anyone have a suggestion for "2-pass scoring" of top docs?
Thanks,
Justin
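One workaround when a collector can't be cloned or re-created: run the cheap query once, then rescore only the returned hits. A hypothetical sketch; expensiveScore() is a placeholder for the second-pass scoring, not a Lucene API:

import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

TopDocs twoPass(IndexSearcher searcher, Query query) throws IOException {
  TopDocs first = searcher.search(query, 100);       // pass 1: cheap scoring
  for (ScoreDoc sd : first.scoreDocs) {              // pass 2: rescore the survivors
    sd.score = expensiveScore(searcher.doc(sd.doc)); // expensiveScore is hypothetical
  }
  // re-sort first.scoreDocs by the new scores if order matters
  return first;
}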
r.close();
> + }
> +
>  public void testEquals() {
> Query q1 = new MatchAllDocsQuery();
> Query q2 = new MatchAllDocsQuery();
>
> Mike
>
> On Fri, Feb 26, 2010 at 4:54 PM, Justin wrote:
> > Is this a bug in Lucene Java as of tr...@915399?
>
collector.setScorer(this);
int doc;
while ((doc = nextDoc()) != NO_MORE_DOCS) { // doc = 0 (infinite)
  collector.collect(doc);
}
}
Thanks for any feedback,
Justin
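For anyone hitting the same hang: the iterator contract requires nextDoc() to advance past the current document and eventually return NO_MORE_DOCS; a scorer that keeps returning 0 never lets the loop above terminate. A hypothetical skeleton of a conforming nextDoc() (a `doc` field starting at -1 and a `maxDoc` bound are assumed):

@Override
public int nextDoc() throws IOException {
  doc++;
  if (doc >= maxDoc) {
    doc = NO_MORE_DOCS; // required terminal value; never revisit a doc
  }
  return doc;
}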
10:56 PM
Subject: Re: Lucene debug logging?
On Thursday, 4 September 2008, Justin Grunau wrote:
> Is there a way to turn on debug logging / trace logging for Lucene?
You can use IndexWriter's setInfoStream(). Besides that, Lucene doesn't do
any logging AFAIK. Are you experienci
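Concretely, with the 2.x/3.x-era call (newer versions take an InfoStream object instead of a PrintStream):

import org.apache.lucene.index.IndexWriter;

void enableDiagnostics(IndexWriter writer) {
  writer.setInfoStream(System.out); // flush/merge/commit diagnostics to stdout
}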
Maybe someone else accesses the
IndexReader and modifies the index? See reader.maxDoc() to be confident.
On Fri, Sep 5, 2008 at 12:19 AM, Justin Grunau <[EMAIL PROTECTED]> wrote:
> We have some code that uses lucene which has been working perfectly well
> for several months.
>
> Recently, a
so far for several months
because we've never had an index this large.
We're using Lucene version 2.2.0.
Thanks!
Justin Grunau
Is there a way to turn on debug logging / trace logging for Lucene?