As of Solr 6.6, payload support has been added to Solr; see
SOLR-1485. Before that, it was much more difficult; see:
https://lucidworks.com/2014/06/13/end-to-end-payload-example-in-solr/
Best,
Erick
On Thu, Feb 8, 2018 at 8:36 AM, Ahmet Arslan wrote:
Hi Roy,
In order to activate payloads during scoring, you need to do two separate
things at the same time:
* use a payload aware query type: org.apache.lucene.queries.payloads.*
* use payload aware similarity
Here is an old post that might inspire you:
https://lucidworks.com/2009/08/05/get
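For anyone following along, here is a minimal sketch of those two pieces. The class and constructor names assume the Lucene 6.x payload API (org.apache.lucene.queries.payloads), so adjust for your version:

```java
// Sketch only: assumes the Lucene 6.x payload API; in 7.x the
// PayloadScoreQuery constructor takes a PayloadDecoder as well.
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.payloads.MaxPayloadFunction;
import org.apache.lucene.queries.payloads.PayloadScoreQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

class PayloadQueryExample {
    public static PayloadScoreQuery build(String field, String term) {
        // A payload-aware query type: wraps a SpanQuery and folds the
        // payload value into the score via a PayloadFunction.
        return new PayloadScoreQuery(
                new SpanTermQuery(new Term(field, term)),
                new MaxPayloadFunction(),  // take the max payload per doc
                true);                     // also keep the span's own score
    }
}
```

You would still pair this with a payload-aware Similarity whose payload hook decodes your payload bytes; the exact hook has moved between versions, so check the javadocs for the one you are on.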
Thanks for your replies. But still, I am not sure about the way to do the
thing. Can you please provide me with an example code snippet or, link to
some page where I can find one?
Thanks.
On Tue, Jan 16, 2018 at 3:28 PM, Dwaipayan Roy
wrote:
If you are working with payloads, you will also want to have a look at
PayloadScoreQuery.
On Tue, Jan 16, 2018 at 12:26, Michael Sokolov wrote:
Have a look at the Expressions class. It compiles JavaScript that can reference
other values and can be used for ranking.
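A minimal sketch of that, assuming the lucene-expressions module; the "popularity" field name is made up for illustration:

```java
// Sketch only: assumes lucene-expressions (JavascriptCompiler,
// SimpleBindings); the "popularity" field is hypothetical.
import org.apache.lucene.expressions.Expression;
import org.apache.lucene.expressions.SimpleBindings;
import org.apache.lucene.expressions.js.JavascriptCompiler;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

class ExpressionRankingExample {
    public static Sort rankingSort() throws Exception {
        // Compile a JavaScript-like expression that mixes the query
        // score with another per-document value.
        Expression expr =
                JavascriptCompiler.compile("_score + ln(popularity + 1)");
        SimpleBindings bindings = new SimpleBindings();
        bindings.add(new SortField("_score", SortField.Type.SCORE));
        bindings.add(new SortField("popularity", SortField.Type.LONG));
        // Sort descending by the compiled expression's value.
        return new Sort(expr.getSortField(bindings, true));
    }
}
```

Pass the resulting Sort to IndexSearcher.search to rank by the expression instead of the raw score.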
On Jan 16, 2018 4:58 AM, "Dwaipayan Roy" wrote:
I want to make a scoring function that will score the documents by the
following function:
given Q = {q1, q2, ...}
score(D,Q) =
  for all qi:
    SUM of {
      LOG { weight_1(qi) + weight_2(qi) + weight_3(qi) }
    }
I have stored weight_1, weight_2 and weight_3 for all terms of all documents.
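Setting Lucene aside for a moment, the formula itself can be pinned down in plain Java; a sketch to check the arithmetic before wiring it into a custom query or Similarity:

```java
import java.util.List;
import java.util.Map;

// score(D,Q) = sum over qi of log(weight_1(qi) + weight_2(qi) + weight_3(qi)).
// weights.get(term) -> the three stored weights for that term in D.
class WeightSumScorer {
    public static double score(List<String> query,
                               Map<String, double[]> weights) {
        double score = 0.0;
        for (String qi : query) {
            double[] w = weights.get(qi);
            if (w == null) continue;  // term absent from the document
            score += Math.log(w[0] + w[1] + w[2]);
        }
        return score;
    }
}
```

In a real implementation the three weights would be read per (term, doc) from stored payloads or doc values rather than a map.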
Hello,
I am trying to implement my own custom similarity. My question is pretty
simple: I know how to override the Similarity class and also how to
normalise the preexisting functions, since they do not serve my purpose.
How can I add an extra factor to the scoring formula, and also how can I
pass
Hello,
I am really new to Lucene; last week, through this list, I was really
successful in finding a solution to my problem.
I have a new question now: I am trying to implement a new similarity
class that uses the Jaccard coefficient. I have been reading the
javadocs and a lot of other webpages
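The coefficient itself is simple; as a sketch in plain Java (a real Similarity would derive the term sets from index statistics rather than take them as arguments):

```java
import java.util.HashSet;
import java.util.Set;

// Jaccard coefficient on term sets: |A intersect B| / |A union B|.
class JaccardExample {
    public static double jaccard(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 0.0;  // convention for two empty sets
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return (double) intersection.size() / union.size();
    }
}
```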
On Sat, Oct 8, 2011 at 3:37 AM, Joel Halbert wrote:
That's what PhraseQuery does.
Try PhraseQuery to match the overlap, I think.
On Sat, Oct 8, 2011 at 3:37 PM, Joel Halbert wrote:
Hi,
Does anyone have a modified scoring (Similarity) function they would
care to share?
I'm searching web page documents and find the default Similarity seems
to assign too much weight to documents with frequent occurrence of a
single term from the query and not enough weight to documents that
co
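One low-effort experiment is to dampen the tf factor so repeated occurrences of a single query term count less. A sketch, assuming ClassicSimilarity (Lucene 6+; DefaultSimilarity in older versions):

```java
// Sketch only: assumes ClassicSimilarity, whose tf() is overridable.
import org.apache.lucene.search.similarities.ClassicSimilarity;

class DampedTfSimilarity extends ClassicSimilarity {
    @Override
    public float tf(float freq) {
        // Default is sqrt(freq); log damping grows even more slowly, so
        // many hits of one term cannot dominate the score.
        return (float) (1.0 + Math.log(1.0 + freq));
    }
}
```

Set it on both the IndexWriterConfig and the IndexSearcher so index-time norms and query-time scoring agree.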
Dear Lucene group,
I wrote my own Scorer by extending Similarity. The scorer works quite
well, but I would like to ignore the fieldnorm value. Is this somehow
possible during search time? Or do I have to add a field indexed with
no_norm?
Best,
Philippe
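If re-indexing is an option, a sketch of the index-time route, assuming the Lucene 4+ field APIs:

```java
// Sketch only: omit norms per field at index time instead of ignoring
// them in the Similarity.
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;

class NoNormsFieldExample {
    public static Field noNormsField(String name, String value) {
        FieldType type = new FieldType(TextField.TYPE_NOT_STORED);
        type.setOmitNorms(true);  // no fieldnorm is written for this field
        type.freeze();
        return new Field(name, value, type);
    }
}
```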
Oops. I do indeed have omitNorms turned on. I will re-read the
documentation on it and look at turning it off.
Sorry for the bother. :/
On 5/17/07, Chris Hostetter <[EMAIL PROTECTED]> wrote:
: Terminator 2
: Terminator 2: Judgment Day
:
: And I score them against the query +title:(Terminator 2)
: Would there be some method or combination of methods in Similarity
: that I could easily override to allow me to penalize the second item
: because it had "unused terms"?
that's what the De
If I have two items in an index:
Terminator 2
Terminator 2: Judgment Day
And I score them against the query +title:(Terminator 2)
they come up with the same score (which makes sense, it just isn't
quite what I want)
Would there be some method or combination of methods in Similarity
that I could
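For what it's worth, classic length normalization is the usual lever here: it already penalizes longer fields (so "Terminator 2: Judgment Day" scores below "Terminator 2"), but only if norms are not omitted for the field. A sketch assuming ClassicSimilarity on Lucene 6.x, where lengthNorm(FieldInvertState) is still overridable:

```java
// Sketch only: assumes Lucene 6.x ClassicSimilarity; getBoost() was
// removed from FieldInvertState in 7.x.
import org.apache.lucene.index.FieldInvertState;
import org.apache.lucene.search.similarities.ClassicSimilarity;

class SteepLengthNormSimilarity extends ClassicSimilarity {
    @Override
    public float lengthNorm(FieldInvertState state) {
        // Default is boost / sqrt(numTerms); 1 / numTerms penalizes
        // extra, unmatched terms in the field more sharply.
        return state.getBoost() / state.getLength();
    }
}
```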
To: java-user@lucene.apache.org
Sent: Thursday, January 18, 2007 5:36:21 PM
Subject: Re: custom similarity based on tf but greater than 1.0
I just did the same thing. If you search the list you'll find the thread
where Hoss gave me the info you need. It really comes down to makeing a
FakeNormsIndexRea
mean. So OMIT_NORMS probably did
> work. Are you getting the results through Hits? Hits will normalize. Use
> TopDocs or a HitCollector.
>
> - Mark
>
>
--
View this message in context:
http://www.nabble.com/custom-similarity-based-on-tf-but-greater-than-1.0-tf3037071.html#a8442944
The beauty of this reader is that you can flip between it and your
custom similarity and Lucene's default implementations live on the same
index.
- Mark
public synchronized void norms(String field, byte[] result, int offset) {
    System.out.println("writing fake norms...");
    System.arraycopy(ones, 0, result, offset, maxDoc());
}
}
The beauty of this reader is that you can flip between it and your
custom similarity and Lucene's default implementations live on the same
index.
Sorry you're having trouble finding it! Allow me... bingo:
http://www.gossamer-threads.com/lists/lucene/java-user/43251?search_string=sorting%20by%20per%20doc%20hit;#43251
It probably doesn't have great keywords for finding it. That should get you
going though. Let me know if you have any questions.
- Mark
I just did the same thing. If you search the list you'll find the thread
where Hoss gave me the info you need. It really comes down to making a
FakeNormsIndexReader. The problem you are having is a result of the
field size normalization.
- mark
Vagelis Kotsonis wrote:
Hi all.
I am trying to
greater similarity is equal to 1.0f, and the others are lower than 1.0f.
How can I "deactivate" the score normalization?
Thank you!