Hi All,
I have a multivalued field in the index. Can I use FieldCache to cache
that field?
Thanks,
Vipin
Timo - correct and correct.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
----- Original Message -----
From: Timo Nentwig <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Tuesday, April 15, 2008 3:22:15 AM
Subject: Re: Sorting consumes hundreds of MBytes RAM
What do y
Hi Glenn,
I am not too clear about it, but isn't there a limit on the memory
available to the JVM? The limit being about 1.3 GB of resident memory
and 2 GB in total? You just mentioned the memory settings:
-Xms4000m -Xmx6000m.
Could someone please help me with this.
--
Anshum
O
: But then the FieldCache is just starting to feel a lot like column-stride
: fields
: (LUCENE-1231).
that's what i've been thinking ... my goal with LUCENE-831 was to make it
easier to manage FieldCache and hopefully the norms[] as well, particularly
in the case of reopen ... but with column-str
Doc IDs are NOT permanent. If you don't change your index at all
(deletes especially, but adding documents and optimizing can also change IDs)
then you can re-use them. Otherwise not.
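To make that concrete, here is a minimal sketch (against the Lucene
2.3-era API; the index path and field/term names are made up) of
collecting doc IDs with a HitCollector and why they must not outlive the
reader they came from:

import java.io.IOException;
import java.util.BitSet;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.HitCollector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;

public class DocIdSnapshot {
    public static void main(String[] args) throws IOException {
        IndexReader reader = IndexReader.open("/path/to/index");
        IndexSearcher searcher = new IndexSearcher(reader);

        final BitSet matches = new BitSet(reader.maxDoc());
        searcher.search(new TermQuery(new Term("type", "article")),
            new HitCollector() {
                public void collect(int doc, float score) {
                    matches.set(doc); // IDs are relative to this reader
                }
            });

        // Safe: the same reader instance is still open, so these IDs
        // still identify the same documents.
        System.out.println("hits: " + matches.cardinality());

        // Unsafe: after deletes/merges, a reopened reader may assign
        // the same integers to different documents (or to none).
        searcher.close();
        reader.close();
    }
}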
On Thu, Apr 17, 2008 at 1:45 PM, Shailendra Mudgal <
[EMAIL PROTECTED]> wrote:
> Hi All,
>
> I have a small confusion regar
Max Metral wrote:
> Lululemon Athletica
> I'd like any of these search terms to work for this:
> Lulu lemon
> Lu Lu Lemon
> Lululemon
> What strategy would be optimal for this kind of thing (of course keeping
> in mind negative matches are also bad)?
How large is your corpus? I suggest you look at NGramTokenizer.
karl
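For illustration, here is a rough sketch of Karl's suggestion using the
contrib NGramTokenizer (Lucene 2.3-era token API; the field name and
gram sizes below are made up, not recommendations):

import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.ngram.NGramTokenizer;

public class NGramDemo {
    // Analyzer that splits the raw input into 3- and 4-character grams.
    static final Analyzer NGRAM = new Analyzer() {
        public TokenStream tokenStream(String fieldName, Reader reader) {
            return new NGramTokenizer(reader, 3, 4);
        }
    };

    public static void main(String[] args) throws IOException {
        TokenStream ts = NGRAM.tokenStream("name",
            new StringReader("Lululemon"));
        for (Token t = ts.next(); t != null; t = ts.next()) {
            System.out.println(t.termText()); // Lul, ulu, lul, ule, ...
        }
    }
}

Index the business name and the query text with the same analyzer and
the variants share most of their grams; lowercasing and stripping
whitespace first helps "Lulu lemon" line up with "Lululemon". Gram size
trades recall against index size and false matches.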
--
Ahh whoops, sorry, I didn't look at the latest patch on LUCENE-831 just
yet. Thanks! That's great.
Mike
Mark Miller wrote:
> Right...that is what the latest patch I put up does (Hoss basically
> stubbed it all out to be ready for this).
> Each SegmentReader has its own cache. Each MultiReader can
Right...that is what the latest patch I put up does (Hoss basically
stubbed it all out to be ready for this).
Each SegmentReader has its own cache. Each MultiReader can have its own
cache as well (in the case that you want a primitive array), but if you
can take an ObjectArray object instead, the
Mark Miller wrote:
> I think your 2 readers question is interesting and I will certainly
> think about it. Right now though, each IndexReader instance holds its
> own cache. I'll have to dig back into the code and see about possibly
> keying on the directory or something?
> I think, with how IndexR
Yeah, yeah, you are def right...if you have field caches larger than
your RAM, you can def spill off to HD. I just wonder if you're going to
get performance that is acceptable if you are actually using all of
those fieldcaches and have to go to disk a lot. It would be awesome to
know how that works t
In our app, we search for businesses. So here's an example:
Lululemon Athletica
I'd like any of these search terms to work for this:
Lulu lemon
Lu Lu Lemon
Lululemon
What strategy would be optimal for this kind of thing (of course keeping
in mind negative matches are also bad)?
Hi All,
I have a small confusion regarding the document ids which we collect using
HitCollector.collect() method. Here is the description of the confusion:
First I created a FieldCache of type > using a
query which collects all the articles which are only a month old. I am
storing them into a ma
The obstacle I'm seeing is that I have a lot of fields which use sorting.
Sooner or later this will give an OutOfMemoryError since the field cache grows
too large. Am I correct in assuming that implementing, for instance, an EHCache
with flush-to-disk would solve this issue? (With a tradeoff for perfo
It would be great if you did. Please reply in LUCENE-1265.
Jake Mannix wrote:
We started doing the same thing (pooling 1 searcher per core) at my
work when profiling showed a lot of time hitting synchronized blocks
deep inside the SegmentTermReader (? Might be messing the class name up)
under high lo
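For reference, a minimal sketch of the pooling idea (round-robin
checkout; the class and method names are illustrative, not from this
thread; Lucene 2.3-era API):

import java.io.IOException;

import org.apache.lucene.search.IndexSearcher;

public class SearcherPool {
    private final IndexSearcher[] searchers;
    private int next = 0;

    public SearcherPool(String indexDir, int size) throws IOException {
        searchers = new IndexSearcher[size];
        for (int i = 0; i < size; i++) {
            // Each searcher opens its own reader, so the synchronized
            // blocks inside one reader don't serialize the others.
            searchers[i] = new IndexSearcher(indexDir);
        }
    }

    public synchronized IndexSearcher get() {
        IndexSearcher s = searchers[next];
        next = (next + 1) % searchers.length;
        return s;
    }
}

Keep in mind each reader carries its own FieldCache and norms, so N
pooled searchers multiply that memory; the win is reduced lock
contention, not a smaller footprint.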
Great, Mike!!!
I found an old version of it in the lib dir of Tomcat (not the one of the
actual webapp).
Now it's working!
Thanks
-----Original Message-----
From: Michael McCandless [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 17, 2008 16:05
To: java-user@lucene.apache.org
Subject
Hello all,
The email address for the group owner of the LinkedIn Lucene Interest Group
doesn't seem to work. Is this group still alive?
Kind regards,
Wilfred Beijer
It seems likely you are using an older version of Lucene to access an
index created by a newer version of Lucene?
Mike
Hoelzl, Thomas wrote:
> Hi all!
> I have some problems running my Lucene application on Linux (SuSE).
> "lucene can't find segments file" is the error message I get in the
> browser.
It does not specifically incorporate caching to disk, but what it does
do is easily allow you to provide a new Cache implementation. The
default implementation is just a simple in-memory Map, but it's trivial
to provide a new implementation using something like EHCache to back the
Cache implementati
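As an illustration of the idea only, here is a rough sketch with EHCache
1.x behind a simplified put/get interface. This is NOT the actual Cache
API from the LUCENE-831 patch; the cache name and sizing are made up:

import java.io.Serializable;

import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class EhcacheBackedCache {
    private final net.sf.ehcache.Cache store;

    public EhcacheBackedCache() {
        CacheManager manager = CacheManager.create();
        // Keep at most 100 entries in memory, spill the rest to disk
        // instead of exhausting the heap; entries never expire.
        store = new net.sf.ehcache.Cache("fieldCache", 100, true, true,
                                         0, 0);
        manager.addCache(store);
    }

    public void put(Serializable key, Serializable value) {
        store.put(new Element(key, value));
    }

    public Serializable get(Serializable key) {
        Element e = store.get(key);
        return e == null ? null : e.getValue();
    }
}

The tradeoff the earlier mail anticipated still applies: a sort that
touches most documents will fault many entries back in from disk, so
this bounds memory rather than making large caches fast.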
Hi all!
I have some problems running my Lucene application on Linux (SuSE).
"lucene can't find segments file" is the error message I get in the
browser.
I don't understand why it is trying to find a "segments" file. My
index-dir doesn't contain that particular file.
It contains the following
I've seen some recent activity on LUCENE-831 "Complete overhaul of FieldCache
API" and read that it must be able to cleanly patch to trunk (haven't tried
yet).
What I'd like to know from people involved is whether this patch incorporates
offloading of fieldcache to disk, or if this hasn't yet been ta
On Thursday 17 April 2008 06:37:18, Michael Stoppelman wrote:
> Actually, I screwed up the timing info. I wasn't including the time
> for the QueryWrapperFilter#bits(IndexReader) call. Sadly,
> it actually takes longer than the original query that had both terms
> included. Bummer. I had really co
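For anyone wanting to reproduce that comparison, a minimal sketch
(Lucene 2.3-era API; the index path, fields, and terms are made up)
timing the filter's bits() call against a single query with both terms:

import java.io.IOException;
import java.util.BitSet;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;

public class FilterTiming {
    public static void main(String[] args) throws IOException {
        IndexReader reader = IndexReader.open("/path/to/index");
        IndexSearcher searcher = new IndexSearcher(reader);

        // Variant 1: pre-compute a filter for one of the terms.
        QueryWrapperFilter filter = new QueryWrapperFilter(
            new TermQuery(new Term("site", "example.com")));
        long t0 = System.currentTimeMillis();
        BitSet bits = filter.bits(reader);
        long filterMs = System.currentTimeMillis() - t0;

        // Variant 2: one query containing both terms.
        BooleanQuery both = new BooleanQuery();
        both.add(new TermQuery(new Term("site", "example.com")),
                 BooleanClause.Occur.MUST);
        both.add(new TermQuery(new Term("body", "lucene")),
                 BooleanClause.Occur.MUST);
        long t1 = System.currentTimeMillis();
        Hits hits = searcher.search(both);
        long queryMs = System.currentTimeMillis() - t1;

        System.out.println("filter: " + filterMs + " ms, " +
            bits.cardinality() + " docs; query: " + queryMs + " ms, " +
            hits.length() + " hits");
    }
}

The filter only pays off if its BitSet is cached and reused across many
queries; computed once per search, it walks the term's postings just
like the query would.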