On Fri, Jan 4, 2013 at 6:27 PM, Erick Erickson wrote:
> BTW, if all you're interested in is the compiled code, you can always get
> the latest build from:
> http://wiki.apache.org/solr/NightlyBuilds (4x-SNAPSHOT). That code will
> be compiled from the link Shai pointed out
> except for any commits
Oh. My bad! Sorry, I misread your JSON.
BTW, I see that you solved your problem yourself on Stack Overflow.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On Jan 4, 2013, at 23:21, "C. Benson Manica" wrote:
Do I have to do it that way, i.e. POST a separate settings payload t
Hello Mike.
Thanks for your reply.
It's not an important issue.
I'll wait for the next release version that includes this patch.
Thanks.
2013/1/4 Michael McCandless
> The problem is that the TermVectorsFormat for the default codec
> (Lucene40TermVectorsFormat) does not store this statistic
> per-do
Do I have to do it that way, i.e. POST a separate settings payload to that
url? You can see that I attempted - per various bits of documentation - to
do it in the mappings section above.
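For what it's worth, the 0.20-era create-index API accepted settings and mappings together in a single request, so a separate settings POST shouldn't be required. A minimal sketch (the index name `myindex`, type `doc`, field `title`, and analyzer name `my_analyzer` are all placeholders, not names from this thread):

```shell
# Create the index with analysis settings and mappings in ONE request;
# the custom analyzer defined under "settings" is referenced by name
# from the field mapping under "mappings".
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "title": { "type": "string", "analyzer": "my_analyzer" }
      }
    }
  }
}'
```

Note the ordering constraint this implies: an analyzer must exist in the index settings before a mapping can refer to it, which is why defining both at creation time is the simplest path.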
On Fri, Jan 4, 2013 at 2:01 PM, David Pilato wrote:
> Did you define mappings for your docs and fields to u
On Sat, Jan 5, 2013 at 4:06 AM, Klaus Nesbigall wrote:
> The actual behavior doesn't work either.
> The English word "families" will not be found if the user types the query
> familie*
> So why solve the problem by postulating one opinion as right and another as
> wrong?
> A simple flag which
Did you define mappings for your docs and fields to use that analyzer?
See:
http://www.elasticsearch.org/guide/reference/api/admin-indices-put-mapping.html
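A sketch of what such a put-mapping call might look like against an existing index (index, type, field, and analyzer names are made up for illustration; `index_analyzer`/`search_analyzer` assume the ES API of that era, which let the two differ so that ngrams are indexed but the query is not ngrammed):

```shell
# Tell ES to run the custom analyzer on this field at index time,
# while still analyzing queries with the standard analyzer.
curl -XPUT 'http://localhost:9200/myindex/doc/_mapping' -d '{
  "doc": {
    "properties": {
      "title": {
        "type": "string",
        "index_analyzer": "my_ngram_analyzer",
        "search_analyzer": "standard"
      }
    }
  }
}'
```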
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On Jan 4, 2013, at 22:30, "C. Benson Manica" wrote:
I have been Googling for an hour with no success whatsoever about how to
configure Lucene (Elasticsearch, actually, but presumably the same deal) to
index edge ngrams for typeahead. I don't really know how filters,
analyzers, and tokenizers work together, and the documentation isn't helpful
on that count
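The pieces fit together like a pipeline: a tokenizer splits text into tokens, token filters then transform each token, and an analyzer is just a named tokenizer-plus-filter chain that a field mapping can reference. A hedged sketch of edge-ngram settings under those assumptions (all names here are invented; `edgeNGram` is the filter type name used by ES of that vintage):

```shell
# Define an edge-ngram token filter, then an analyzer that applies it.
# "hello" would be indexed as h, he, hel, hell, hello (up to max_gram),
# which is what makes prefix typeahead matches cheap at query time.
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "settings": {
    "analysis": {
      "filter": {
        "my_edge_filter": {
          "type": "edgeNGram",
          "min_gram": 1,
          "max_gram": 20
        }
      },
      "analyzer": {
        "typeahead_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "my_edge_filter"]
        }
      }
    }
  }
}'
```

The analyzer still has to be attached to a field via the mapping, as David's link describes; defining it in settings alone does nothing.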
I've encountered the same problem and tried to use your workaround, but
overriding the parser hasn't done the job.
I do not understand why the stemming is done anyway.
Uwe wrote
> This is a well-known problem: Wildcards cannot be analyzed by the query
> parser, because the analysis would destr
The problem is that the TermVectorsFormat for the default codec
(Lucene40TermVectorsFormat) does not store this statistic
per-document, currently. We could in theory fix this ... maybe open
an issue / make a patch if it's important?
A -1 return value is actually "valid": it means this statistic is
Thanks for bringing closure Alon.
In 3.0.3 each thread had a private terms dict cache, so that explains
the high per-thread RAM usage. This was fixed at some 3.x release ...
so we won't be fixing 3.0.3 at this point.
Mike McCandless
http://blog.mikemccandless.com
On Fri, Jan 4, 2013 at 8:29 AM
Robert, you were absolutely right: we had a problem with our deployment
script, and the Lucene JARs were not updated to 3.6.2; they remained at 3.0.3
...
After deploying the application again with the updated JARs, the memory
leak is gone!
(although I do see a lot more GC activity on the YoungGen
BTW, if all you're interested in is the compiled code, you can always get
the latest build from:
http://wiki.apache.org/solr/NightlyBuilds (4x-SNAPSHOT). That code will
be compiled from the link Shai pointed out
except for any commits since the build...
FWIW,
Erick
On Wed, Jan 2, 2013 at 2:01 PM,
I have an indexer that already collapses field values into a Map of
(value, count) before indexing, and I would like to specify an increment
to frequency (docFreq?) when adding a field value to a Lucene Document.
Should I just add the same value multiple times?
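One caveat worth noting: adding the same value N times affects that term's frequency (tf) within the one document, not docFreq, which counts how many documents contain the term. Under that assumption, expanding the (value, count) map back into repeated field additions is a plausible approach. A stdlib-only sketch (class and method names are invented; the Lucene call in the comment is the 3.x-style Field API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FreqExpand {
    // Expand a (value -> count) map into a flat list so each value can be
    // added to the Document `count` times; Lucene then records a term
    // frequency of `count` for that term in the field.
    static List<String> expand(Map<String, Integer> counts) {
        List<String> out = new ArrayList<String>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            for (int i = 0; i < e.getValue(); i++) {
                out.add(e.getKey());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<String, Integer>();
        counts.put("lucene", 3);
        counts.put("solr", 1);
        // For each v in the expanded list, you would then do something like:
        //   doc.add(new Field("tags", v, Field.Store.NO, Field.Index.ANALYZED));
        System.out.println(expand(counts)); // prints [lucene, lucene, lucene, solr]
    }
}
```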
-Mike