Hi Mikhail,
sorry for not being clear, I'll try again.
To my understanding, the Solr scale function, once applied to a field,
needs the min and max for that field.
Those min and max values are, by default, calculated over all the existing
documents; I don't know exactly how this is implemented internally.
How do I create a concatenated field via schema.xml?
I am using Solr version 8.2.
In my schema, fields ending in "_s" are of string type, fields ending in
"_t" are of text type, and fields ending in "_txt" are of multivalued text type.
I need to create a field that concatenates the field
On 6/1/2022 3:27 AM, Yirmiyahu Fischer wrote:
I tried
BrandName_s
ManufacturerNo_s
BrandMfgConcat_t
BrandMfgConcat_t
However, after indexing, the field BrandMfgConcat_t do
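It looks like the schema markup was stripped by the list archive. For reference, the usual schema.xml way to combine two fields is copyField; this is only a sketch, with illustrative field name, type, and attributes rather than the exact ones from the original mail:

    <!-- sketch: destination field for the combined values (name/type/attributes are illustrative) -->
    <field name="BrandMfgConcat_txt" type="text_general" indexed="true" stored="true" multiValued="true"/>

    <!-- copy both source fields into the destination -->
    <copyField source="BrandName_s" dest="BrandMfgConcat_txt"/>
    <copyField source="ManufacturerNo_s" dest="BrandMfgConcat_txt"/>

Note that copyField adds each source as a separate value rather than producing one concatenated string, so a destination fed by two sources generally needs to be multiValued; a single concatenated value is usually produced in the indexing client or in an update processor chain (e.g. something like ConcatFieldUpdateProcessorFactory).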
From looking at
https://github.com/apache/lucene/blob/main/lucene/queries/src/java/org/apache/lucene/queries/function/valuesource/ScaleFloatFunction.java#L70
I conclude that min and max are obtained from all docs in the index.
But if you specify query() as an argument for scale(), it takes only
matching
My application relies on local parameter substitution pretty heavily due to
escaping issues and being able to re-use clauses.
Is there any way to use param substitution in a /stream expression?
(This doesn't work):
POST /stream {
  'expr': 'search(collection1,q=$test,fl="doc_id",sort="doc_id asc")'
}
Clemens,
On 5/30/22 02:02, Clemens WYSS (Helbling Technik) wrote:
Given a connection to Solr ( e.g. adminSolrConnection )
CoreAdminRequest.Create createCoreRequest = new CoreAdminRequest.Create();
createCoreRequest.setCoreName( coreName );
createCoreRequest.process( adminSolrConnection );
What
Clemens,
On 6/1/22 13:41, Christopher Schultz wrote:
Clemens,
On 5/30/22 02:02, Clemens WYSS (Helbling Technik) wrote:
Given a connection to Solr ( e.g. adminSolrConnection )
CoreAdminRequest.Create createCoreRequest = new
CoreAdminRequest.Create();
createCoreRequest.setCoreName( coreName );
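In case it helps, here is a SolrJ sketch that also points the new core at an existing configset; treat setConfigSet (assumed to map to the configSet parameter of CoreAdmin CREATE) and the "_default" name as assumptions to verify against your SolrJ version:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.CoreAdminRequest;

    // sketch only: create a core that references an existing configset by name
    SolrClient adminSolrConnection = new HttpSolrClient.Builder("http://localhost:8983/solr").build();
    CoreAdminRequest.Create createCoreRequest = new CoreAdminRequest.Create();
    createCoreRequest.setCoreName("test_core");      // core name is illustrative
    createCoreRequest.setConfigSet("_default");      // assumed: references an existing configset by name
    createCoreRequest.process(adminSolrConnection);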
Hi,
You'll need to set a Java system property at startup to run macro expansion
in Streaming Expressions.
See the commit below which references the jira with the security concerns:
https://github.com/apache/solr/commit/9edc557f4526ffbbf35daea06972eb2c595e692b
The parameter setting is as follows
There is no configuration for this but the Stream Expression export/shuffle
function does this automatically.
https://solr.apache.org/guide/8_11/stream-source-reference.html#shuffle
The "export" function name is also mapped to the "shuffle" function so you
can use either name.
This function is a
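For reference, a minimal shuffle expression (collection name, field list, and sort are made up):

    shuffle(collection1,
            q="*:*",
            fl="id,field_a",
            sort="id asc")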
On 6/1/2022 11:41 AM, Christopher Schultz wrote:
How can I provide the schema for the core once it's been created? Can
I use the API for that, or do I have to resort to pushing the config
file directly similar to these kinds of curl commands:
curl -d "{ ... config }" \
${SCHEME}://localhost
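For the "use the API" part, the Schema API can add fields to a managed schema after the core exists; a sketch with placeholder host, core, and field names:

    # sketch: add a field to the core's managed schema (names and URL are placeholders)
    curl -X POST -H 'Content-type:application/json' \
      --data-binary '{"add-field":{"name":"example_s","type":"string","stored":true}}' \
      http://localhost:8983/solr/test_core/schema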
On 2019/01/04 18:27:42 Gus Heck wrote:
> Hi Bob,
>
> Wrt licensing keep in mind that multi licensed software allows you to
> choose which license you are using the software under. Also there's some
> good detail on the Apache policy here:
>
>
https://www.apache.org/legal/resolved.html#what-can-we-n
I just did a quick check on Solr 9 and expand / collapse was working. Here
is the output:
https://gist.github.com/joel-bernstein/6f7f3ee12d5375630f3311c5dbd693ee
Is it possible that the expand component isn't registered in your
deployment? The expand component is a default component but have you
> Is it possible that the expand component isn't registered in your
> deployment? The expand component is a default component but have you
> overridden the defaults?
Yes, that’s exactly what happened. Turns out that I had pulled out the unused
handlers.
Thanks,
Andy
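For anyone hitting the same thing: the expand component is implicit by default, but if a handler's component list is overridden it has to be listed explicitly. A solrconfig.xml sketch, with an illustrative handler and component list:

    <!-- explicit declaration of the normally implicit expand component -->
    <searchComponent name="expand" class="solr.ExpandComponent"/>

    <!-- if the handler overrides the default component list, "expand" must be included -->
    <requestHandler name="/select" class="solr.SearchHandler">
      <arr name="components">
        <str>query</str>
        <str>facet</str>
        <str>highlight</str>
        <str>expand</str>
        <str>debug</str>
      </arr>
    </requestHandler>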
Hello,
We use Apache Solr 8.8 and are trying to find ways to reduce recall (and
improve precision) for our products. While looking into various options, we
came across the "split on whitespace" (sow) parameter, the value for which
changes when multi-word synonyms and stopwords come into the picture.
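As a small illustration of what sow changes (the field name and the multi-word synonym "sea biscuit => seabiscuit" are made up): with sow=false the whole query string is analyzed per field, so the multi-word synonym can apply; with sow=true each whitespace-separated term is parsed on its own, so it cannot.

    # illustrative requests only; field name and synonym are assumptions
    q=sea biscuit&defType=edismax&qf=title_t&sow=false   # multi-word synonym can match
    q=sea biscuit&defType=edismax&qf=title_t&sow=true    # terms are split before analysis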
Shawn,
On 6/1/22 15:18, Shawn Heisey wrote:
On 6/1/2022 11:41 AM, Christopher Schultz wrote:
How can I provide the schema for the core once it's been created? Can
I use the API for that, or do I have to resort to pushing the config
file directly similar to these kinds of curl commands:
curl
I don't think so. If anything, I'd look at the distributed search reduce phase, if you
use one. I don't know.
On Wed, Jun 1, 2022 at 6:58 AM Poorna Murali wrote:
> Thanks Mikhail! Regarding the id field, is it possible that while doing
> faceting or sorting, the id fields will be loaded in field cache by
Shawn,
On 6/1/22 16:34, Christopher Schultz wrote:
Shawn,
On 6/1/22 15:18, Shawn Heisey wrote:
On 6/1/2022 11:41 AM, Christopher Schultz wrote:
How can I provide the schema for the core once it's been created? Can
I use the API for that, or do I have to resort to pushing the config
file dire
OK, what if you try something like:

q=*:*&fq=popularity:(1 OR 7)&rows=100&fl=price,scale(query($scopeq),0,1)&scopeq={!filters param=$fq}{!func}price

It passes the price field values to the scale function, limiting the scope of
the min/max calculation to the fq.
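Spelled out as a full request (host, port, and collection name are assumptions), that would look roughly like:

    curl 'http://localhost:8983/solr/mycollection/select' \
      --data-urlencode 'q=*:*' \
      --data-urlencode 'fq=popularity:(1 OR 7)' \
      --data-urlencode 'rows=100' \
      --data-urlencode 'fl=price,scale(query($scopeq),0,1)' \
      --data-urlencode 'scopeq={!filters param=$fq}{!func}price'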
On Wed, Jun 1, 2022 at 4:11 PM Mikhail Khludnev wrote:
All,
Since Solr / Lucene can't define arbitrary fields in documents, I wonder
what the recommended technique is for storing structured-information in
a document?
I'd like to store information about an entity that is specific to related
entities (not stored in the index). This isn't my actual
We implemented something like location_denver_role for matching tutors to
subjects. There were a few thousand subjects and three kinds of scores, so each
tutor record had about 20,000 fields. Ranking fetched the three fields for that
subject ID to do the ranking. It wasn’t a big index, under 200
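The schema side of that pattern is usually a dynamic field; a sketch with made-up names and type, not the actual schema from that project:

    <!-- one field per (subject, score-kind) pair, e.g. score_12345_experience -->
    <dynamicField name="score_*" type="pfloat" indexed="true" stored="true"/>

At query time the ranking then only touches the handful of fields for the subject in question, e.g. sort=score_12345_experience desc.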
On 6/1/2022 3:34 PM, Christopher Schultz wrote:
So I tried this with configSet=_default and I /did/ get a core
created. I didn't get the same thing I got from the CLI:
This is what I get from "solr create -c test_core":
Using bin/solr to create a core does it in multiple steps. It creates
t
On 6/1/2022 6:31 PM, Shawn Heisey wrote:
The end result is the same ... except in the second case, it
references the configset by name, which will be in the created
core.properties file. If you were to change the config in the
configset directory and then reload each core, test_core would no
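Roughly, the difference shows up in the generated core.properties; the exact contents vary by version, so this is only a sketch:

    # core created via the CoreAdmin API with configSet=_default (sketch)
    name=test_core
    configSet=_default

In the bin/solr case the config files are copied into the core's own conf/ directory instead, so there is no configSet reference to follow.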
Thanks!!!
I found the stream function earlier today and was able to get my large data
export process refactored from using the /export endpoint in a non-cloud
setup to using /stream in our new distributed setup.
On Wed, Jun 1, 2022, 3:04 PM Joel Bernstein wrote:
> There is no configuration for
Joel, I owe you a beer!
On Wed, Jun 1, 2022, 2:55 PM Joel Bernstein wrote:
> Hi,
>
> You'll need to set a Java system property at startup to run macro expansion
> in Streaming Expressions.
>
> See the commit below which references the jira with the security concerns:
>
>
> https://github.com/apa
Thank you for your reply, we found the root cause.
It's our own fault.
At 2022-06-01 00:36:57, "Michael Gibney" wrote:
>`json.facet` covers a lot of ground and can do a lot of different
>things under the hood. Would you be able to share more specific
>information about the kinds of `
Hello everyone.
We want to implement a statistical analysis requirement.
There are two ways to meet my requirement after reviewing the Solr Ref Guide (8.11).
The schema is defined as follows:
timestamp, indexed=true, docValues=true
status, indexed=true, docValues=true
user_count, indexed=true, docValues=true
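One way to sketch the stats with the JSON Facet API (the aggregation, date range, and collection name are assumptions about the requirement, and timestamp is assumed to be a date field):

    curl 'http://localhost:8983/solr/mycollection/select' \
      -H 'Content-type:application/json' -d '
    {
      "query": "timestamp:[2022-01-01T00:00:00Z TO 2022-06-01T00:00:00Z]",
      "limit": 0,
      "facet": {
        "by_status": {
          "type": "terms",
          "field": "status",
          "facet": {
            "total_users": "sum(user_count)"
          }
        }
      }
    }'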