Hello, Sergio.
debug=query just displays what ToParentBlockJoinQuery.explain() yields.
From the beginning, ToParentBlockJoinQuery.explain() lacked functionality
regarding scoreMode; that was fixed in Lucene some time ago:
https://github.com/apache/lucene/pull/12245. Perhaps the issue you
encounter is fixed in a certain version of Solr that carries a fixed
Lucene. It's just an idea; I'm not able to trace particular versions.
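If you want to see what your Lucene version actually produces, here is a
minimal sketch (the index path, the child clause and the class name are
just assumptions for illustration; the field names are taken from the
explain you pasted) that builds a ToParentBlockJoinQuery with
ScoreMode.Total, which is what the block join parser's score=total maps
to, and prints searcher.explain() for the top parents:

  // A rough standalone check, not the exact Solr code path: open the index,
  // build the same kind of block join query, and print Lucene's Explanation.
  import org.apache.lucene.index.DirectoryReader;
  import org.apache.lucene.index.Term;
  import org.apache.lucene.search.Explanation;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.search.Query;
  import org.apache.lucene.search.ScoreDoc;
  import org.apache.lucene.search.TermQuery;
  import org.apache.lucene.search.TopDocs;
  import org.apache.lucene.search.join.BitSetProducer;
  import org.apache.lucene.search.join.QueryBitSetProducer;
  import org.apache.lucene.search.join.ScoreMode;
  import org.apache.lucene.search.join.ToParentBlockJoinQuery;
  import org.apache.lucene.store.FSDirectory;
  import java.nio.file.Paths;

  public class ExplainBlockJoin {
    public static void main(String[] args) throws Exception {
      // args[0] = path to the core's data/index directory (assumption)
      try (DirectoryReader reader =
               DirectoryReader.open(FSDirectory.open(Paths.get(args[0])))) {
        IndexSearcher searcher = new IndexSearcher(reader);

        // Parents marked the same way your explain suggests: type_level:parent
        BitSetProducer parents =
            new QueryBitSetProducer(new TermQuery(new Term("type_level", "parent")));

        // Any child clause will do; this one is borrowed from your debug output
        Query children = new TermQuery(new Term("FunctionListFreeTextNS", "finance"));
        Query blockJoin = new ToParentBlockJoinQuery(children, parents, ScoreMode.Total);

        TopDocs top = searcher.search(blockJoin, 10);
        for (ScoreDoc sd : top.scoreDocs) {
          // Explanation.toString() is the indented tree Solr shows under debug
          Explanation explanation = searcher.explain(blockJoin, sd.doc);
          System.out.println(explanation);
        }
      }
    }
  }

Run it with the Lucene jars that ship with your Solr version on the
classpath (and against a copy of the index, not the live one); that should
show directly what explain() produces for that Lucene release.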
On Fri, Sep 29, 2023 at 3:28 PM Sergio García Maroto <marot...@gmail.com>
wrote:

> Hi team,
>
> Lately I have been doing some ranking using block join queries on nested
> documents.
> I have added *score=total* to the queries, which actually brings scores to
> the parent.
> When trying to understand the results, I see that somehow enabling debug info
> only returns the first or best match, as you can see below.
> The results below matched 17 children but only display the best match. Is it
> possible to see the full debug information?
>
> Thanks
> Regards,
> Sergio Maroto
>
> <str name="8180777">
> 239.1634 = sum of:
>   236.98985 = sum of:
>     236.98985 = Score based on *17 child docs in range from 80573436 to 80573514, best match:*
>       224.95201 = sum of:
>         224.5116 = sum of:
>           0.35355338 = weight(FunctionListFreeTextNS:finance in 315379) [SchemaSimilarity], result of:
>             0.35355338 = score(freq=1.0), product of:
>               1.0 = idf, computed as log((docCount+1)/(docFreq+1)) + 1 from:
>                 34343 = docFreq, number of documents containing term
>                 259843 = docCount, total number of documents with field
>               1.0 = tf(freq=1.0), with freq of:
>                 1.0 = freq, occurrences of term within document
>               0.35355338 = fieldNorm
>           224.15805 = sum of:
>             7.071068 = weight(FunctionListFreeTextNS:finance in 315379) [SchemaSimilarity], result of:
>               7.071068 = score(freq=1.0), product of:
>                 20.0 = boost
>                 1.0 = idf, computed as log((docCount+1)/(docFreq+1)) + 1 from:
>                   34343 = docFreq, number of documents containing term
>                   259843 = docCount, total number of documents with field
>                 1.0 = tf(freq=1.0), with freq of:
>                   1.0 = freq, occurrences of term within document
>                 0.35355338 = fieldNorm
>             14.353563 = weight(CurrentNSD:T in 315379) [SchemaSimilarity], result of:
>               14.353563 = score(freq=1.0), computed as boost * idf * tf from:
>                 20.0 = boost
>                 1.578892 = idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
>                   7128118 = n, number of documents containing term
>                   34568376 = N, total number of documents with field
>                 0.45454544 = tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
>                   1.0 = freq, occurrences of term within document
>                   1.2 = k1, term saturation parameter
>                   0.75 = b, length normalization parameter
>                   1.0 = dl, length of field
>                   1.0 = avgdl, average length of field
>             2.7334156 = weight(PrimaryNS:T in 315379) [SchemaSimilarity], result of:
>               2.7334156 = score(freq=1.0), computed as boost * idf * tf from:
>                 20.0 = boost
>                 0.30067572 = idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
>                   22841134 = n, number of documents containing term
>                   30853147 = N, total number of documents with field
>                 0.45454544 = tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
>                   1.0 = freq, occurrences of term within document
>                   1.2 = k1, term saturation parameter
>                   0.75 = b, length normalization parameter
>                   1.0 = dl, length of field
>                   1.0 = avgdl, average length of field
>             200.0 = sum of:
>               200.0 = JobBucketND:[0 TO 3]^200.0
>         0.44041252 = weight(type_level:job in 315379) [SchemaSimilarity], result of:
>           0.44041252 = score(freq=1.0), computed as boost * idf * tf from:
>             0.9689076 = idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
>               30853147 = n, number of documents containing term
>               81300026 = N, total number of documents with field
>             0.45454544 = tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
>               1.0 = freq, occurrences of term within document
>               1.2 = k1, term saturation parameter
>               0.75 = b, length normalization parameter
>               1.0 = dl, length of field
>               1.0 = avgdl, average length of field
>   2.173555 = sum of:
>     1.0 = sum of:
>       1.0 = *:*
>     1.173555 = weight(type_level:parent in 315400) [SchemaSimilarity], result of:
>       1.173555 = score(freq=1.0), computed as boost * idf * tf from:
>         2.5818212 = idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:
>           6149219 = n, number of documents containing term
>           81300026 = N, total number of documents with field
>         0.45454544 = tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:
>           1.0 = freq, occurrences of term within document
>           1.2 = k1, term saturation parameter
>           0.75 = b, length normalization parameter
>           1.0 = dl, length of field
>           1.0 = avgdl, average length of field
> </str>
>


-- 
Sincerely yours
Mikhail Khludnev
