Hi all,

We are currently running a SolrCloud cluster for text-based product
searches, and overall the setup is working well. However, we've observed a
few cases where the *product images shown alongside the text search results
are not relevant*. This is caused by some internal mapping issues and in
turn hurts the user experience.

To improve this, we are exploring a *hybrid search approach* — where we
want to combine both text and image similarity. The idea is:

   - Run the primary query based on text (as we currently do).
   - Validate the results using image similarity.
   - If the text matches but the product image is not relevant, filter it
   out before displaying it to the user on the front end.
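To make the idea concrete, here is a minimal sketch of the post-filtering step we have in mind, assuming each result carries a precomputed image embedding and we compare it to a query-side embedding with cosine similarity (the field name `image_vector` and the threshold are just illustrative assumptions):

```python
import math

def cosine_similarity(a, b):
    # Plain cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def filter_by_image(results, query_embedding, threshold=0.75):
    # Keep only hits whose product-image embedding is close enough to the
    # query embedding; the threshold would need tuning on real data.
    return [r for r in results
            if cosine_similarity(r["image_vector"], query_embedding) >= threshold]
```

In production we would of course want this to happen inside Solr rather than client-side, which is what question 3 below is about.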

We would like to know:

   1. *Best practices* for handling hybrid text + image search in Solr?
   2. Has anyone implemented something similar (using Solr's vector search
   or external embeddings for images)?
   3. Any guidance on *indexing image embeddings* in Solr (since Solr 9+
   supports dense vectors) and combining them with text search efficiently?
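For reference, a rough sketch of what we imagine the setup could look like, based on Solr 9's `DenseVectorField` and the `knn` query parser combined with re-ranking (field names, the vector dimension, and parameter values below are our assumptions, not a tested config):

```xml
<!-- managed-schema (illustrative): dense vector field for image embeddings -->
<fieldType name="knn_vector" class="solr.DenseVectorField"
           vectorDimension="512" similarityFunction="cosine"/>
<field name="image_vector" type="knn_vector" indexed="true" stored="false"/>
```

```text
# Illustrative request: text query first, then re-rank the top text hits
# by image-vector similarity (query vector truncated for brevity)
q={!edismax qf='title description'}red running shoes
&rq={!rerank reRankQuery=$rqq reRankDocs=100}
&rqq={!knn f=image_vector topK=100}[0.12, 0.43, ...]
```

Whether re-ranking like this scales across 63 shards at our index size is exactly the kind of feedback we are hoping for.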

*A few details, FYI:*
Current Solr Version on our Production: v9.6.1
Index Size: ~250 GB
Number of documents: ~180M
Number of Shards: 63
Number of Nodes: 10
Average response time: ~80-100ms

*Thanks & Regards,*
*Uday Kumar*
