cpoerschke commented on code in PR #3318: URL: https://github.com/apache/solr/pull/3318#discussion_r2071943891
##########
solr/solr-ref-guide/modules/getting-started/pages/tutorial-lsr.adoc:
##########

@@ -0,0 +1,476 @@
+= Exercise 6: Using Learned Sparse Retrieval (LSR)
+:experimental:
+:tabs-sync-option:
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+[[exercise-6]]
+== Exercise 6: Using Learned Sparse Retrieval (LSR) in Solr
+
+...
+
+https://en.wikipedia.org/wiki/Learned_sparse_retrieval[Learned sparse retrieval]
+
+...
+
+=== Getting Ready
+
+[,console]
+----
+bin/solr start -noprompt -e cloud
+----
+
+...
+
+[,console]
+----
+bin/solr create -c buzz
+----
+
+
+=== Placeholder section with draft content
+
+terminology? -- transparent? explainable? interpretable? -- why did this document (not) match? why did this document get this score? why did document X score higher than document Y?
+
+"Somewhat" == can explain why a document got its score, but less so why the term weights are what they are
+
+
+[cols=",,,,,",options="header",]
+|===
+|Approach |transparent w.r.t. match |term expansion |term weights |transparent w.r.t. score |Large Language Model involved
+
+|?lexical? |Yes |No |No |Yes |No
+|?lexical? with curated synonyms |Yes |Yes |if curated |Yes |No
+|learned sparse |Yes |Yes |learned |Somewhat |Yes
+|dense vector |No |n/a |n/a |No |Yes
+|===
+
+terminology? -- value? weight? score? something else?
+
+[cols=",,,,",options="header",]
+|===
+|Approach |Value Indexed |Value Retrievable |Value Storage Precision |Extra(?) data structure since not internally encoded as term frequency
+
+|raw term frequency similarity field |Yes |Yes |Yes |No
+|payloaded fields |Yes |Yes |Yes |Yes
+|(plain) rank fields |Yes |No |No |No
+|(opaque) rank fields |Yes |No |No |No
+|===

Review Comment:
   > > This is very cool. I'd love to test drive the tutorial when you are ready. A sentence describing why LSR is useful might be nice in the tutorial. The wikipedia page was, ahem, very "sparse" in telling me why this is cool...!
   > >
   > > And expanding the "why" is literally one of your todo items! sigh!

   This draft section here is incremental progress towards adding more detail, based on my current (and relatively limited) understanding. Also tagging @seanmacavaney and @alessandrobenedetti as folks perhaps interested in contributing to this, as and when time permits.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
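Editor's note: as a hedged illustration of the "payloaded fields" row in the draft's second table, one way learned term weights could be stored in Solr is as `term|weight` pairs in a payload field and scored with the `payload_score` query parser. The field type below matches the `delimited_payloads_float` type shipped in Solr's default configset; the field name `lsr_terms` and the example terms/weights are hypothetical, not part of the PR.

[,xml]
----
<!-- Field type as in Solr's stock "delimited_payloads_float":
     each whitespace-separated token may carry a float payload after "|",
     e.g. a document could index lsr_terms = "bee|2.1 honey|1.4 insect|0.7". -->
<fieldType name="delimited_payloads_float" stored="false" indexed="true" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="float"/>
  </analyzer>
</fieldType>

<!-- Hypothetical field holding a model's term/weight expansion. -->
<field name="lsr_terms" type="delimited_payloads_float" indexed="true" stored="false"/>
----

At query time the stored per-term weights can then be aggregated into a score, e.g. (hypothetical collection/field names; `sum` is one of the aggregation functions `payload_score` supports):

[,console]
----
curl "http://localhost:8983/solr/buzz/select" \
  --data-urlencode "q={!payload_score f=lsr_terms func=sum}bee honey"
----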
