PM, Ian Lea wrote:
> Are you using StandardAnalyzer in 3.1+? You may want to use
> ClassicAnalyzer instead. I can't see where in the 3.5 javadocs it
> says that email addresses are recognized, but it does sound vaguely
> familiar.
>
>
> --
> Ian.
>
>
> On T
This is a pretty simple question to answer, but I have customers asking me
how this is supposed to work and I'm having trouble explaining it. I have
an app that indexes emails so there are plenty of email addresses in there.
Reading the StandardAnalyzer javadoc it says it "recognizes" email
addres
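
(For reference: as of 3.1, StandardAnalyzer follows the Unicode UAX#29 word-break
rules and no longer keeps an email address as a single token, while ClassicAnalyzer
preserves the pre-3.1 behavior that Ian mentions. A quick way to see the difference
is to dump the tokens each analyzer produces; a minimal sketch assuming Lucene 3.5,
with an arbitrary field name and sample text:

import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.ClassicAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class AnalyzerCheck {
    // Print the tokens an analyzer produces for the given text.
    static void dump(Analyzer analyzer, String text) throws Exception {
        TokenStream ts = analyzer.tokenStream("body", new StringReader(text));
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
            System.out.println(term.toString());
        }
        ts.end();
        ts.close();
    }

    public static void main(String[] args) throws Exception {
        String text = "Contact joe.user@example.com for details";
        dump(new StandardAnalyzer(Version.LUCENE_35), text); // splits the address into pieces
        dump(new ClassicAnalyzer(Version.LUCENE_35), text);  // keeps joe.user@example.com as one token
    }
}
)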
n't see how you can get leaked files when you do call close and
> not when you don't. Can you narrow it down to a simple standalone
> program?
>
>
> --
> Ian.
>
>
> On Mon, Jan 9, 2012 at 3:10 PM, Charlie Hubbard
> wrote:
> > Ian,
> >
> >
; oal.search.NRTManager and oal.search.SearcherManager, now part of the
> core, previously available via an LIA download. I'm not sure they
> work with multi readers but could certainly be mined for ideas.
>
>
> --
> Ian.
>
>
> On Sat, Jan 7, 2012 at 11:56 PM, Charlie Hubbard
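
(For anyone digging into the SearcherManager suggestion: the usual pattern is
acquire / search / release, plus a refresh call after the writer commits. The
sketch below is written against the later 4.x-style API (SearcherFactory,
maybeRefresh()); the 3.5 class follows the same pattern but takes a SearcherWarmer
and calls the refresh method maybeReopen(), so treat this as an outline, not
drop-in code:

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SearcherFactory;
import org.apache.lucene.search.SearcherManager;

public class SearcherManagerSketch {
    private final SearcherManager manager;

    public SearcherManagerSketch(IndexWriter writer) throws Exception {
        // 4.x-style constructor; 3.5 takes a SearcherWarmer instead of a SearcherFactory.
        this.manager = new SearcherManager(writer, true, new SearcherFactory());
    }

    public int count(Query query) throws Exception {
        IndexSearcher searcher = manager.acquire();     // borrow a current searcher
        try {
            return searcher.search(query, 1).totalHits; // any search call works here
        } finally {
            manager.release(searcher);                  // always give it back
        }
    }

    public void afterCommit() throws Exception {
        manager.maybeRefresh(); // pick up committed changes (maybeReopen() in 3.5)
    }
}
)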
one expects
> > in 3.1 absent some programming error on your
> > part, so it's hard to know what to say without
> > more information.
> >
> > 3.1 has other problems if you use spellcheck.collate,
> > you might want to upgrade if you use that feature
> > to
n as long as you can and commit if needed.
> Even optimize is somewhat overrated and should be used with care or
> not at all... (here is another writeup regarding optimize:
>
> http://www.searchworkings.org/blog/-/blogs/simon-says%3A-optimize-is-bad-for-you
> )
>
>
> hope
ks
Charlie
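
(To make the keep-it-open-and-commit advice above concrete, here is a minimal
sketch against the Lucene 3.x API; the Indexer class and its batch method are
hypothetical:

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

// Hypothetical long-lived indexer: one IndexWriter for the life of the process,
// commit() after each batch, no optimize() and no close() per iteration.
public class Indexer {
    private final IndexWriter writer;

    public Indexer(File indexDir) throws Exception {
        IndexWriterConfig conf =
            new IndexWriterConfig(Version.LUCENE_35, new StandardAnalyzer(Version.LUCENE_35));
        writer = new IndexWriter(FSDirectory.open(indexDir), conf);
    }

    public void indexBatch(Iterable<Document> docs) throws Exception {
        for (Document doc : docs) {
            writer.addDocument(doc);
        }
        writer.commit();   // make the batch durable and visible to new readers
    }

    public void shutdown() throws Exception {
        writer.close();    // only on application shutdown
    }
}
)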
On Sat, Dec 31, 2011 at 1:01 AM, Charlie Hubbard
wrote:
> I have a program I recently converted from a pull scheme to a push scheme.
> So previously I was pulling down the documents I was indexing, and when I
> was done I'd close the IndexWriter at the end of each itera
You can always index into RAMDirectory for speed then synchronize those
changes to the disk by adding the RAMDirectory to an FSDirectory at some
point. Here is a simple example of how to do that:
public void save( RAMDirectory ram, File dir ) {
FSDirectory fs = FSDirectory.open( dir );
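
(The quoted snippet is cut off in the archive; a fuller version of the same idea
might look like the sketch below, assuming Lucene 3.x, where
IndexWriter.addIndexes(Directory...) merges the RAMDirectory's segments into the
on-disk index. The class name and analyzer choice are arbitrary:

import java.io.File;
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class RamToDisk {
    public void save(RAMDirectory ram, File dir) throws IOException {
        FSDirectory fs = FSDirectory.open(dir);
        IndexWriterConfig conf =
            new IndexWriterConfig(Version.LUCENE_35, new StandardAnalyzer(Version.LUCENE_35));
        IndexWriter writer = new IndexWriter(fs, conf);
        try {
            writer.addIndexes(ram);  // merge the in-memory segments into the on-disk index
            writer.commit();
        } finally {
            writer.close();
            fs.close();
        }
    }
}
)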
I have a program I recently converted from a pull scheme to a push scheme.
So previously I was pulling down the documents I was indexing, and when I
was done I'd close the IndexWriter at the end of each iteration. Now that
I've converted to a push scheme I'm sent the documents to index, and I
wri
Hi,
I have an existing index from v.2.2 that I populated with documents that had
Fields before NumericField was created. I'm upgrading my program to v3.0,
and I'd like to change my RangeQuery to use the NumericRangeQuery or
NumericRangeFilter. Here is how I stored my fields:
document.ad
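
(For the NumericField part of that question: in 3.x a numeric value has to be
indexed with NumericField before NumericRangeQuery can match it, and old
string-based fields are not automatically range-searchable as numbers, so the
documents generally need reindexing. A hedged sketch, using a hypothetical
"created" field holding a timestamp:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericField;
import org.apache.lucene.search.NumericRangeQuery;

public class NumericFieldExample {
    // Index the timestamp as a trie-encoded long so NumericRangeQuery can use it.
    public static void addCreated(Document document, long createdMillis) {
        document.add(new NumericField("created", Field.Store.YES, true)
                .setLongValue(createdMillis));
    }

    // The 3.x replacement for the old RangeQuery over that field.
    public static NumericRangeQuery<Long> createdBetween(long from, long to) {
        return NumericRangeQuery.newLongRange("created", from, to, true, true);
    }
}
)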
Here was the prior API I was calling:
Hits hits = getSearcher().search( query, filter, sort );
The new API:
TopDocs hits = getSearcher().search( query, filter, startDoc +
length, sort );
So the question is what new API can I use that allows me to extract all
documents matching t
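
(One replacement that keeps the TopDocs-based code above mostly intact is a
two-pass search: count the matches first with TotalHitCountCollector, available
since 3.1, then rerun the search with that count as the limit. A sketch, assuming
the same query, filter and sort objects as in the snippet above:

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TotalHitCountCollector;

public class FetchAll {
    // Return every document matching the query, in sorted order.
    public static TopDocs searchAll(IndexSearcher searcher, Query query, Filter filter, Sort sort)
            throws Exception {
        TotalHitCountCollector counter = new TotalHitCountCollector();
        searcher.search(query, filter, counter);            // pass 1: just count
        int total = Math.max(1, counter.getTotalHits());    // search() rejects a 0 limit
        return searcher.search(query, filter, total, sort); // pass 2: fetch them all
    }
}
)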
to a
file. I thought that's essentially what a Collector is, being an interface
that is called back whenever it encounters a Document that matches a query.
Any elaboration on that?
Charlie
On Fri, Sep 16, 2011 at 2:30 PM, Eddie Drapkin wrote:
> On 9/16/2011 11:30 AM, Charlie Hubbard wr
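
(On the Collector question above: that is indeed its role; the searcher calls it
back for every matching document and imposes no limit. A minimal 3.x-style
Collector that just records matching doc ids might look like this; the class name
is made up:

import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Collects the global id of every document that matches, with no upper bound.
public class AllDocIdsCollector extends Collector {
    private final List<Integer> docIds = new ArrayList<Integer>();
    private int docBase;

    @Override
    public void setScorer(Scorer scorer) {
        // Scores aren't needed for an export, so the scorer is ignored.
    }

    @Override
    public void setNextReader(IndexReader reader, int docBase) {
        this.docBase = docBase;   // per-segment ids are offset by this base
    }

    @Override
    public void collect(int doc) {
        docIds.add(docBase + doc);
    }

    @Override
    public boolean acceptsDocsOutOfOrder() {
        return true;              // order doesn't matter when exporting everything
    }

    public List<Integer> getDocIds() {
        return docIds;
    }
}

It would be passed to getSearcher().search(query, filter, collector), after which
each collected id can be loaded with searcher.doc(id) and written into the zip.)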
I'm trying to reimplement a feature I had under 2.x in 3.x. I have a
feature where a zip file of all the documents returned by a search can be
exported. Now with the newer APIs you have to put an upper limit on the
search so it won't return more than X documents. I'd like to extract all of
t
Hi,
I posted some questions to stackoverflow regarding how to upgrade from 2.2.x
to 3.1.x. Hadn't gotten a response so I thought I'd try here. Would repost
the full question here, but it looks prettier over there:
http://stackoverflow.com/questions/7383428/questions-on-upgrading-lucene-from-2-2