On Thu, Jan 12, 2017 at 1:02 PM, Kumaran Ramasubramanian
wrote:
> I always use Filter when I need to add more than 1024 clauses (for
> non-scoring cases). If Filter is removed in Lucene 6, what will happen to
> the maxBooleanClauses limit? Am I missing anything?
That sounds like a BooleanQuery with FILTER
I always use Filter when I need to add more than 1024 clauses (for
non-scoring cases). If Filter is removed in Lucene 6, what will happen to
the maxBooleanClauses limit? Am I missing anything?
-
Kumaran R
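For context, a sketch of the pattern being described, against the Lucene 6.x BooleanQuery API (class and method names as in lucene-core 6.x): FILTER clauses match like MUST but contribute nothing to the score, yet they still count toward the clause limit, which is why the question matters.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

public class FilterClauses {
    public static void main(String[] args) {
        // FILTER clauses constrain matching like MUST, but are not scored.
        // Every clause, FILTER included, counts against
        // BooleanQuery.getMaxClauseCount() (1024 by default), so a query
        // with more than 1024 filter clauses still trips TooManyClauses.
        BooleanQuery.Builder builder = new BooleanQuery.Builder();
        for (int i = 0; i < 10; i++) {
            builder.add(new TermQuery(new Term("tag", "t" + i)), Occur.FILTER);
        }
        BooleanQuery query = builder.build();
        System.out.println(query.clauses().size());
    }
}
```

The limit can be raised globally with the static `BooleanQuery.setMaxClauseCount(int)` if needed.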
On Jan 12, 2017 5:01 AM, "Trejkaz" wrote:
On Thu, Jan 21, 2016 at 4:25 AM, Adrien Grand wrote:
>
Hi
I want to know the purpose of having final in analyzers.
For example, ClassicAnalyzer: it would be easy to add an ASCIIFoldingFilter
on top of ClassicAnalyzer.
-
Kumaran R
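Since ClassicAnalyzer is final, the usual route is not to subclass it but to declare your own Analyzer whose token chain mirrors ClassicAnalyzer's and appends the extra filter. A minimal sketch against the Lucene 6.x analyzers-common API (package locations as of 6.x; verify against your version):

```java
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.standard.ClassicAnalyzer;
import org.apache.lucene.analysis.standard.ClassicFilter;
import org.apache.lucene.analysis.standard.ClassicTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class FoldingClassicAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        // Same chain ClassicAnalyzer builds internally...
        Tokenizer source = new ClassicTokenizer();
        TokenStream sink = new ClassicFilter(source);
        sink = new LowerCaseFilter(sink);
        sink = new StopFilter(sink, ClassicAnalyzer.STOP_WORDS_SET);
        // ...plus the extra folding step.
        sink = new ASCIIFoldingFilter(sink);
        return new TokenStreamComponents(source, sink);
    }

    public static void main(String[] args) throws IOException {
        try (Analyzer a = new FoldingClassicAnalyzer();
             TokenStream ts = a.tokenStream("f", "Déjà Vu")) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                System.out.println(term.toString());
            }
            ts.end();
        }
    }
}
```

This is composition rather than inheritance, which is exactly what the final keyword is steering users toward.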
On Jan 12, 2017 5:41 AM, "Michael McCandless"
wrote:
I don't think it's about efficiency but rather about not exposing
possibly tr
Hi.
I don't know why, but we have some kind of esoteric logic in our own
code to simplify a circle on the Earth to a bounding box, clearly
something to do with computing geo queries.
double lonMin = -180.0, lonMax = 180.0;
if (!closeToPole(latMin, latMax)) {
double D = SloppyMath.e
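The usual derivation behind this kind of simplification (not necessarily the poster's exact code, which is cut off above): treat the radius as an angular distance delta = r/R; the latitude bounds are lat +/- delta, and away from the poles the longitude half-width is asin(sin delta / cos lat), which is correctly wider than the naive r/(R*cos lat) at high latitudes. A self-contained sketch using a mean Earth radius:

```java
public class CircleBBox {
    static final double EARTH_RADIUS_M = 6_371_000.0; // mean radius, meters

    // Returns {latMin, latMax, lonMin, lonMax} in degrees for a circle of
    // radiusMeters centered at (latDeg, lonDeg) on a spherical Earth.
    static double[] boundingBox(double latDeg, double lonDeg, double radiusMeters) {
        double lat = Math.toRadians(latDeg);
        double lon = Math.toRadians(lonDeg);
        double delta = radiusMeters / EARTH_RADIUS_M; // angular radius

        double latMin = lat - delta, latMax = lat + delta;
        double lonMin, lonMax;
        if (latMax >= Math.PI / 2 || latMin <= -Math.PI / 2) {
            // Circle reaches a pole: longitude is unconstrained.
            latMin = Math.max(latMin, -Math.PI / 2);
            latMax = Math.min(latMax, Math.PI / 2);
            lonMin = -Math.PI;
            lonMax = Math.PI;
        } else {
            // Widest longitude extent of the circle, not at the center latitude.
            double dLon = Math.asin(Math.sin(delta) / Math.cos(lat));
            lonMin = lon - dLon;
            lonMax = lon + dLon;
        }
        return new double[] { Math.toDegrees(latMin), Math.toDegrees(latMax),
                              Math.toDegrees(lonMin), Math.toDegrees(lonMax) };
    }

    public static void main(String[] args) {
        double[] box = boundingBox(48.8566, 2.3522, 10_000); // ~10 km circle
        System.out.printf("lat [%.4f, %.4f] lon [%.4f, %.4f]%n",
                box[0], box[1], box[2], box[3]);
    }
}
```

The pole check mirrors the `closeToPole(latMin, latMax)` guard in the snippet above: near a pole the asin argument would exceed 1, so the box must fall back to the full longitude range.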
I don't think it's about efficiency but rather about not exposing
possibly trappy APIs / usage ...
Do you have a particular class/method that you'd want to remove final from?
Mike McCandless
http://blog.mikemccandless.com
On Wed, Jan 11, 2017 at 4:15 PM, Michael Wilkowski wrote:
> Hi,
> I som
On Thu, Jan 21, 2016 at 4:25 AM, Adrien Grand wrote:
> Uwe, maybe we could promote ConstantScoreWeight to an experimental API and
> document how to build simple queries based on it?
Here in the future, looking at the Lucene 6.3 Javadocs, Filter is now
gone, and it seems that ConstantScoreWeight
Hi,
I sometimes wonder what the purpose is of such heavy use of "final" methods
and classes in Lucene. It makes my life much harder when overriding
standard classes with a custom implementation.
What first comes to my mind is runtime efficiency (the compiler "knows" that
this class/method will not be o
Hi All,
Background:
I have a mainframe file that I want to upload, and the data is pipe-delimited.
However, some of the records have a few fields fewer than others within the same
file, and when I try to import the file, Solr has an issue with the number of
columns vs. the number of values, which is
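One common workaround is to normalize every record to a fixed column count in a preprocessing step before handing the file to Solr. A minimal sketch (the column count and sample values are made up for illustration):

```java
import java.util.Arrays;

public class PadPipeRecords {
    // Split a pipe-delimited record into exactly `columns` fields,
    // padding short records with empty strings so every row lines up.
    static String[] splitPadded(String line, int columns) {
        // limit -1 keeps trailing empty fields instead of dropping them
        String[] raw = line.split("\\|", -1);
        String[] out = new String[columns];
        Arrays.fill(out, "");
        System.arraycopy(raw, 0, out, 0, Math.min(raw.length, columns));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(splitPadded("ACME|2017|42", 5)));
        // → [ACME, 2017, 42, , ]
    }
}
```

With every row padded to the same width, the column/value count mismatch at import time goes away.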
Chris:
I _think_ I just added you to the list of people who can edit that
page; if you can't, ping us back and I'll try to find the right page...
Erick
On Wed, Jan 11, 2017 at 9:07 AM, Chris Lewis wrote:
> Good afternoon,
>
> Our Company Connexica has been using Lucene as the core indexing and s
Good afternoon,
Our company, Connexica, has been using Lucene as the core indexing and storage
technology for our search-based business analytics tool CXAIR. We have been
developing this tool for a number of years and would welcome the opportunity to
reference our product on your web site.
Lucen
Hi Jaspreet,
Not sure whether this helps to answer your question as I didn't try to run
the code:
From the official guide:
> Within Lucene, each numeric value is indexed as a *trie* structure, where
> each term is logically assigned to larger and larger pre-defined brackets
> (which are simply lowe
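To make the "larger and larger brackets" idea concrete: legacy trie-encoded numeric fields index each value several times, each time with more low-order bits shifted away, so a range query can match a few coarse terms instead of thousands of exact values. A rough self-contained illustration (the real encoding in Lucene's LegacyNumericUtils also prefix-codes the shift amount into each term; this only shows the bracketing, and the precision step of 8 is an illustrative choice):

```java
public class TrieBrackets {
    public static void main(String[] args) {
        long value = 1234567L;
        int precisionStep = 8; // bits dropped per level (illustrative)

        // Each level stores the value with more low bits masked off,
        // i.e. the id of an ever-larger pre-defined bracket containing it.
        for (int shift = 0; shift < 32; shift += precisionStep) {
            long bracket = value >>> shift;
            System.out.printf("shift %2d: bracket %d spans %d values%n",
                    shift, bracket, 1L << shift);
        }
    }
}
```

A range query then covers its endpoints with fine brackets and the middle of the range with the coarsest brackets that fit.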
Hi,
this is indeed related to this.
The problem is a missing "schema" in Lucene. If you index values using several
different field types (like TextField vs. IntField/FloatField/DoubleField...),
the information about how they were indexed is completely unknown to the query
parser. The default query parser is u
Thanks for the fast reply,
This seems to be the same issue, yes! So I will wait for 6.4.0 to be out.
Thanks a lot,
Jerome MICHEL
-----Original Message-----
From: Michael McCandless [mailto:luc...@mikemccandless.com]
Sent: Wednesday, January 11, 2017 11:32
To: Lucene Users ; Michel, Jerome
Subject: EXT
Hi,
I suspect you are hitting an issue fixed about a month ago:
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9aa5b73
But the fix has not yet been released; it will be in Lucene 6.4.0
which should be out in a week or two.
The NPE happens when some segments are missing the range fie
Hi
I come to you again with a problem around the usage of IntRangeField. At
indexing time, I use the IntRangeField(String name, final int[] min, final
int[] max) constructor with 1 dimension for both min and max, and the name of
my field in the index. When inspecting with Luke, all seems OK and the
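For reference, the indexing pattern described above, sketched against the 6.x API the poster names (IntRangeField lived in the sandbox module at this point and was later renamed; the field name and bounds below are illustrative, so verify against your exact version's Javadocs):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntRangeField;
import org.apache.lucene.search.Query;

public class OneDimRange {
    public static void main(String[] args) {
        // One-dimensional range [10, 20] stored under field "myRange".
        Document doc = new Document();
        doc.add(new IntRangeField("myRange", new int[] {10}, new int[] {20}));

        // Match documents whose stored range intersects [15, 30].
        Query q = IntRangeField.newIntersectsQuery("myRange",
                new int[] {15}, new int[] {30});
        System.out.println(q);
    }
}
```

The NPE discussed in the reply above was triggered at search time when some index segments had no documents carrying the range field at all.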