Re: multi-term synonym prevents single-term match -- known issue?

2023-02-10 Thread Mikhail Khludnev
Hello, Rudi.
Well, it doesn't seem perfect. Probably it can be fixed
via
foo bar,zzz,foo,bar
And in some sense this behavior is reasonable.
You can also experiment with the sow and pf params (the latter param is
described on the dismax page only).
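
For concreteness, a sketch of that synonyms.txt entry (assuming the default
comma-separated equivalent-synonym format):

foo bar,zzz,foo,bar

With this rule, each of "foo bar", "zzz", "foo", and "bar" is treated as an
alternative for the others, so a query containing the phrase can again match
either term on its own.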

On Thu, Feb 9, 2023 at 8:19 PM Rudi Seitz  wrote:

> Is this known behavior or is it worth a JIRA ticket?
>
> Searching against a text_general field in Solr 9.1, if my edismax query is
> "foo bar" I should be able to get matches for "foo" without "bar" and vice
> versa. However, if there happens to be a synonym rule applied at query
> time, like "foo bar,zzz" I can no longer get single-term matches against
> "foo" or "bar." Both terms are now required, but can occur in either order.
> If we change the text_general analysis chain to apply synonyms at index
> time instead of query time, this behavior goes away and single-term matches
> are again possible.
>
> To reproduce, use the _default configset with "foo bar,zzz" added to
> synonyms.txt. Index these four docs:
>
> {"id":"1", "title_txt":"foo"}
> {"id":"2", "title_txt":"bar"}
> {"id":"3", "title_txt":"foo bar"}
> {"id":"4", "title_txt":"bar foo"}
>
> Issue a query for "foo bar" (i.e.
> defType=edismax&q.op=OR&qf=title_txt&q=foo bar)
> Result: Only docs 3 and 4 come back
>
> Issue a query for "bar foo"
> Result: All four docs come back; the synonym rule is not invoked
>
> Looking at the explain output for "foo bar" we see:
>
> +((title_txt:zzz (+title_txt:foo +title_txt:bar)))
>
>
> Looking at the explain output for "bar foo" we see:
>
> +((title_txt:bar) (title_txt:foo))
>
> So, the observed behavior makes sense according to the low-level query
> structure. But -- is this how it's "supposed" to work?
>
> Why not expand the "foo bar" query like this instead?
>
> +((title_txt:zzz (title_txt:foo title_txt:bar)))
>
> Rudi
>


-- 
Sincerely yours
Mikhail Khludnev
https://t.me/MUST_SEARCH
A caveat: Cyrillic!


Only 1 node reports high memory usage

2023-02-10 Thread Gajjar, Jigar
Hi All,

We have a cluster of 30 nodes, and each node holds 750 GB of data.
There are 420 shards, well distributed across all the nodes.
JVM Settings ->

JDK :Amazon.com Inc. OpenJDK 64-Bit Server VM 17.0.1 17.0.1+12-LTS
Processor : 48
JVM Args:
Args
-DSTOP.KEY=solrrocks
-DSTOP.PORT=7983
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.port=8986
-Dcom.sun.management.jmxremote.ssl=false
-Denable.packages=true
-Denable.runtime.lib=true
-Djava.net.preferIPv4Stack=true
-Djetty.home=/prod/solrCI/8.11.1-191/solr-8.11.1/server
-Djetty.port=8983
-Djute.maxbuffer=1000
-Dsolr.data.home=
-Dsolr.data.home=/prod/solr_data/inst1
-Dsolr.default.confdir=/prod/solrCI/8.11.1-191/solr-8.11.1/server/solr/configsets/_default/conf
-Dsolr.environment=prod,label=PROD2+PRODUCTION,color=#c9fdd6
-Dsolr.install.dir=/prod/solrCI/8.11.1-191/solr-8.11.1
-Dsolr.jetty.inetaccess.excludes=
-Dsolr.jetty.inetaccess.includes=
-Dsolr.log.dir=/prod/solrCI/8.11.1-191/solr-8.11.1/server/logs
-Dsolr.solr.home=/prod/solr_home/inst1
-Duser.timezone=UTC
-DzkClientTimeout=3
-DzkHost=
-XX:+UseNUMA
-XX:+UseZGC
-XX:-OmitStackTraceInFastThrow
-XX:CompileCommand=exclude,com.github.benmanes.caffeine.cache.BoundedLocalCache::put
-XX:OnOutOfMemoryError=/prod/solrCI/8.11.1-191/solr-8.11.1/bin/oom_solr.sh 8983 /prod/solrCI/8.11.1-191/solr-8.11.1/server/logs
-XX:SoftMaxHeapSize=64g
-Xlog:gc*:file=/prod/solrCI/8.11.1-191/solr-8.11.1/server/logs/solr_gc.log:time,uptime:filecount=9,filesize=20M
-Xms88g
-Xmx88g
-Xss256k

What we observe is that only one node shows high heap usage, while all the
other nodes stay well below the threshold (see the attached image).


Even if we bounce the node, or the entire cluster, the same issue comes back,
and it is always the same node that reports high heap usage.
We also tried reloading the collection, but that did not help.
It is also odd that a single node takes all the load, and sometimes it simply
dies.


We compared that machine with all the other machines and confirmed there is
nothing different about it.

Any pointers would be greatly appreciated.

Please let me know if you need more information.
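
In case it helps, here is a rough sketch, assuming the default Metrics API
endpoint (hostnames are placeholders), of how heap usage can be compared
across nodes:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HeapCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hostnames are placeholders -- substitute your 30 node names.
        for (String host : new String[] {"node01", "node02", "node03"}) {
            String url = "http://" + host + ":8983/solr/admin/metrics"
                    + "?group=jvm&prefix=memory.heap&wt=json";
            HttpRequest req = HttpRequest.newBuilder(URI.create(url)).build();
            // Prints the heap metrics JSON for each node side by side.
            HttpResponse<String> rsp =
                    client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(host + ": " + rsp.body());
        }
    }
}
```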



Thanks,
Jigar Gajjar


Re: multi-term synonym prevents single-term match -- known issue?

2023-02-10 Thread Michael Gibney
Rudi,

I agree, this does not seem like how it should behave. Probably
something that could be fixed in edismax, not something lower-level
(Lucene)?

Michael

On Fri, Feb 10, 2023 at 9:38 AM Mikhail Khludnev  wrote:
>
> Hello, Rudi.
> Well, it doesn't seem perfect. Probably it can be fixed
> via
> foo bar,zzz,foo,bar
> And in some sort of sense this behavior is reasonable.
> You can also experiment with the sow and pf params (the latter param is
> described on the dismax page only).
>
> On Thu, Feb 9, 2023 at 8:19 PM Rudi Seitz  wrote:
>
> > Is this known behavior or is it worth a JIRA ticket?
> >
> > Searching against a text_general field in Solr 9.1, if my edismax query is
> > "foo bar" I should be able to get matches for "foo" without "bar" and vice
> > versa. However, if there happens to be a synonym rule applied at query
> > time, like "foo bar,zzz" I can no longer get single-term matches against
> > "foo" or "bar." Both terms are now required, but can occur in either order.
> > If we change the text_general analysis chain to apply synonyms at index
> > time instead of query time, this behavior goes away and single-term matches
> > are again possible.
> >
> > To reproduce, use the _default configset with "foo bar,zzz" added to
> > synonyms.txt. Index these four docs:
> >
> > {"id":"1", "title_txt":"foo"}
> > {"id":"2", "title_txt":"bar"}
> > {"id":"3", "title_txt":"foo bar"}
> > {"id":"4", "title_txt":"bar foo"}
> >
> > Issue a query for "foo bar" (i.e.
> > defType=edismax&q.op=OR&qf=title_txt&q=foo bar)
> > Result: Only docs 3 and 4 come back
> >
> > Issue a query for "bar foo"
> > Result: All four docs come back; the synonym rule is not invoked
> >
> > Looking at the explain output for "foo bar" we see:
> >
> > +((title_txt:zzz (+title_txt:foo +title_txt:bar)))
> >
> >
> > Looking at the explain output for "bar foo" we see:
> >
> > +((title_txt:bar) (title_txt:foo))
> >
> > So, the observed behavior makes sense according to the low-level query
> > structure. But -- is this how it's "supposed" to work?
> >
> > Why not expand the "foo bar" query like this instead?
> >
> > +((title_txt:zzz (title_txt:foo title_txt:bar)))
> >
> > Rudi
> >
>
>
> --
> Sincerely yours
> Mikhail Khludnev
> https://t.me/MUST_SEARCH
> A caveat: Cyrillic!


Couldn't restore the backed up config set with the data from AWS S3

2023-02-10 Thread Hakan Özler
Hi All,

Solr 9.1.1 currently doesn't allow me to make a full restore of a backup
whose data and config set are stored in an S3 bucket. The error I receive on
each run is "The specified key does not exist". The full message is:

An AmazonServiceException was thrown! [serviceName=S3] [awsRequestId=2C6]
> [httpStatus=404] [s3ErrorCode=NoSuchKey] [message=The specified key does
> not exist.]


After investigating the problem further, I have found that the path used in
the `isDirectory` method, which checks whether a key is a directory, makes
the `S3Client.headObject` call fail. On line 324, a path pointing to a file
is transformed into a path ending in a slash. For example, when the path is
"path1/path2/backup-name/collection-name/zk_backup_0/configs/config-set-v1/configoverlay.json",
`sanitizedDirPath` appends a slash "/" to the end of the path, producing
"path1/path2/backup-name/collection-name/zk_backup_0/configs/config-set-v1/configoverlay.json/".
Although I am able to restore the backup if the cluster already has the
config schema definition in ZooKeeper, I cannot restore the backed-up config
schema files when creating an empty cluster, due to this error.
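
A minimal re-creation of the suspected behavior (the method name and paths
here are illustrative, not the actual Solr source):

```java
public class SanitizePathSketch {
    // Illustrative re-creation of the suspected bug: a "directory" key is
    // produced by unconditionally appending a trailing slash, which is wrong
    // when the key already points at a file such as configoverlay.json.
    static String sanitizedDirPath(String path) {
        return path.endsWith("/") ? path : path + "/";
    }

    public static void main(String[] args) {
        String fileKey = "path1/path2/backup-name/collection-name/zk_backup_0"
                + "/configs/config-set-v1/configoverlay.json";
        // headObject is then asked for "...configoverlay.json/", a key that
        // does not exist in the bucket, hence NoSuchKey / HTTP 404.
        System.out.println(sanitizedDirPath(fileKey));
    }
}
```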

For the sake of this question, here are the other relevant parts:

Backup definition:

> <backup>
>   <repository name="s3-pro" class="org.apache.solr.s3.S3BackupRepository" default="false">
>     <str name="s3.bucketName">com.dev.bucket.backup.folder</str>
>     <str name="s3.region">us-east-2</str>
>   </repository>
> </backup>


The backup folder structure on S3:

> .
> └── bucket-name
>     └── path1
>         └── path2
>             └── backup-name
>                 └── collection-name
>                     ├── backup_0.properties
>                     ├── index ...
>                     ├── shard_backup_metadata
>                     │   └── md_shard1_0.json
>                     └── zk_backup_0
>                         ├── collection_state.json
>                         └── configs
>                             └── config-set-v1
>                                 ├── configoverlay.json
>                                 ├── solrconfig.xml
>                                 ├── stopwords.txt
>                                 └── synonyms.txt


The cURL request I use for restore:

curl -i -X POST \
>-H "Content-Type:application/json" \
>-d \
> '{
>   "restore-collection": {
> "name": "backup-name",
> "collection": "collection-name-restored",
> "location": "path1/path2/",
> "repository": "s3-pro"
>   }
> }' \
>  'http://localhost:8983/api/c'

Could you please point me in the right direction regarding this issue?

Thanks!
Hakan


Re: How to index a csv dataset into Solr using SolrJ

2023-02-10 Thread marc nicole
What is a common use case then, if it is not CSV?
How does one bulk-index data into Solr using SolrJ?
You can't just read each dataset you want to index line by line.

Le lun. 30 janv. 2023 à 14:11, Jan Høydahl  a écrit :

> It's not a common use case for SolrJ to post plain CSV content to Solr.
> SolrJ is used to push SolrInputDocument objects. Maybe there's a way to do
> it by using some Generic request type and overriding content type.. Can you
> explain more what you app will do, where that CSV file comes from in the
> first place and why you'd want to use SolrJ to move it to Solr, rather than
> curl or some other http client lib?
>
> Jan
>
> > 29. jan. 2023 kl. 20:44 skrev marc nicole :
> >
> > The Java code should perform the post. Any piece of code to show to
> better
> > explain this?
> >
> > thanks
> >
> > Le dim. 29 janv. 2023 à 20:29, Jan Høydahl  a
> écrit :
> >
> >> Read csv in your app, create a Solr doc from each line and ingest to
> Solr
> >> in fitting batches. You can use a csv library or just parse each line
> >> yourself if the format is fixed.
> >>
> >> If you need to post csv directly to Solr you’d use a plain http post
> with
> >> content-type csv, but in most cases your app would do that.
> >>
> >> Jan Høydahl
> >>
> >>> 29. jan. 2023 kl. 20:21 skrev marc nicole :
> >>>
> >>> Hi guys,
> >>>
> >>> I can't find a reference on how to index a dataset.csv file into Solr
> >> using
> >>> SolrJ.
> >>> https://solr.apache.org/guide/6_6/using-solrj.html
> >>>
> >>> Thanks.
> >>
>
>


Re: How to index a csv dataset into Solr using SolrJ

2023-02-10 Thread Chris Hostetter

: what is a common use case then if it is not the csv type?
: how to index massively data into Solr using SolrJ
: You can't just read line by line each dataset you want to index.

There are lots of use cases for SolrJ that involve programmatically
generating the SolrInputDocuments you want to index in Solr -- frequently
after reading from some normalized/authoritative data store.

If you already have data "on disk" in a format that Solr can parse (CSV,
Solr's XML, a PDF file you want Solr's extraction module to parse, etc.),
then that's what ContentStreamUpdateRequest is for...

https://solr.apache.org/docs/9_1_0/solrj/org/apache/solr/client/solrj/request/ContentStreamUpdateRequest.html

: 
: Le lun. 30 janv. 2023 à 14:11, Jan Høydahl  a écrit :
: 
: > It's not a common use case for SolrJ to post plain CSV content to Solr.
: > SolrJ is used to push SolrInputDocument objects. Maybe there's a way to do
: > it by using some Generic request type and overriding content type.. Can you
: > explain more what you app will do, where that CSV file comes from in the
: > first place and why you'd want to use SolrJ to move it to Solr, rather than
: > curl or some other http client lib?
: >
: > Jan
: >
: > > 29. jan. 2023 kl. 20:44 skrev marc nicole :
: > >
: > > The Java code should perform the post. Any piece of code to show to
: > better
: > > explain this?
: > >
: > > thanks
: > >
: > > Le dim. 29 janv. 2023 à 20:29, Jan Høydahl  a
: > écrit :
: > >
: > >> Read csv in your app, create a Solr doc from each line and ingest to
: > Solr
: > >> in fitting batches. You can use a csv library or just parse each line
: > >> yourself if the format is fixed.
: > >>
: > >> If you need to post csv directly to Solr you’d use a plain http post
: > with
: > >> content-type csv, but in most cases your app would do that.
: > >>
: > >> Jan Høydahl
: > >>
: > >>> 29. jan. 2023 kl. 20:21 skrev marc nicole :
: > >>>
: > >>> Hi guys,
: > >>>
: > >>> I can't find a reference on how to index a dataset.csv file into Solr
: > >> using
: > >>> SolrJ.
: > >>> https://solr.apache.org/guide/6_6/using-solrj.html
: > >>>
: > >>> Thanks.
: > >>
: >
: >
: 

-Hoss
http://www.lucidworks.com/

Re: How to index a csv dataset into Solr using SolrJ

2023-02-10 Thread sambasivarao giddaluri
As part of a migration we converted the CSV data into multiple JSON files of
around 100 MB each, and then wrote a small shell script to push these files
to the Solr API in a loop.

Just keep in mind that if you have multiple nodes, replication may take some
time to complete.

On Fri, Feb 10, 2023 at 12:57 PM marc nicole  wrote:

> what is a common use case then if it is not the csv type?
> how to index massively data into Solr using SolrJ
> You can't just read line by line each dataset you want to index.
>
> Le lun. 30 janv. 2023 à 14:11, Jan Høydahl  a
> écrit :
>
> > It's not a common use case for SolrJ to post plain CSV content to Solr.
> > SolrJ is used to push SolrInputDocument objects. Maybe there's a way to
> do
> > it by using some Generic request type and overriding content type.. Can
> you
> > explain more what you app will do, where that CSV file comes from in the
> > first place and why you'd want to use SolrJ to move it to Solr, rather
> than
> > curl or some other http client lib?
> >
> > Jan
> >
> > > 29. jan. 2023 kl. 20:44 skrev marc nicole :
> > >
> > > The Java code should perform the post. Any piece of code to show to
> > better
> > > explain this?
> > >
> > > thanks
> > >
> > > Le dim. 29 janv. 2023 à 20:29, Jan Høydahl  a
> > écrit :
> > >
> > >> Read csv in your app, create a Solr doc from each line and ingest to
> > Solr
> > >> in fitting batches. You can use a csv library or just parse each line
> > >> yourself if the format is fixed.
> > >>
> > >> If you need to post csv directly to Solr you’d use a plain http post
> > with
> > >> content-type csv, but in most cases your app would do that.
> > >>
> > >> Jan Høydahl
> > >>
> > >>> 29. jan. 2023 kl. 20:21 skrev marc nicole :
> > >>>
> > >>> Hi guys,
> > >>>
> > >>> I can't find a reference on how to index a dataset.csv file into Solr
> > >> using
> > >>> SolrJ.
> > >>> https://solr.apache.org/guide/6_6/using-solrj.html
> > >>>
> > >>> Thanks.
> > >>
> >
> >
>


Re: How to index a csv dataset into Solr using SolrJ

2023-02-10 Thread marc nicole
@Chris can you provide some sample Java code using the
ContentStreamUpdateRequest class?

Le ven. 10 févr. 2023 à 19:22, Chris Hostetter  a
écrit :

>
> : what is a common use case then if it is not the csv type?
> : how to index massively data into Solr using SolrJ
> : You can't just read line by line each dataset you want to index.
>
> There are lots of usecases for using SolrJ that involve programaticlly
> generating the SolrInputDocuments you wnat to index in solr -- frequently
> after ready from some normalized /authoritative data store.
>
> If you already have data "on disk" in a format that solr can parse (csv,
> solr's xml, a PDF file you want Solr's extraction module to parse, etc...)
> then that's what the ContentStreamUpdateRequest is for...
>
>
> https://solr.apache.org/docs/9_1_0/solrj/org/apache/solr/client/solrj/request/ContentStreamUpdateRequest.html
>
> :
> : Le lun. 30 janv. 2023 à 14:11, Jan Høydahl  a
> écrit :
> :
> : > It's not a common use case for SolrJ to post plain CSV content to Solr.
> : > SolrJ is used to push SolrInputDocument objects. Maybe there's a way
> to do
> : > it by using some Generic request type and overriding content type..
> Can you
> : > explain more what you app will do, where that CSV file comes from in
> the
> : > first place and why you'd want to use SolrJ to move it to Solr, rather
> than
> : > curl or some other http client lib?
> : >
> : > Jan
> : >
> : > > 29. jan. 2023 kl. 20:44 skrev marc nicole :
> : > >
> : > > The Java code should perform the post. Any piece of code to show to
> : > better
> : > > explain this?
> : > >
> : > > thanks
> : > >
> : > > Le dim. 29 janv. 2023 à 20:29, Jan Høydahl  a
> : > écrit :
> : > >
> : > >> Read csv in your app, create a Solr doc from each line and ingest to
> : > Solr
> : > >> in fitting batches. You can use a csv library or just parse each
> line
> : > >> yourself if the format is fixed.
> : > >>
> : > >> If you need to post csv directly to Solr you’d use a plain http post
> : > with
> : > >> content-type csv, but in most cases your app would do that.
> : > >>
> : > >> Jan Høydahl
> : > >>
> : > >>> 29. jan. 2023 kl. 20:21 skrev marc nicole :
> : > >>>
> : > >>> Hi guys,
> : > >>>
> : > >>> I can't find a reference on how to index a dataset.csv file into
> Solr
> : > >> using
> : > >>> SolrJ.
> : > >>> https://solr.apache.org/guide/6_6/using-solrj.html
> : > >>>
> : > >>> Thanks.
> : > >>
> : >
> : >
> :
>
> -Hoss
> http://www.lucidworks.com/


Re: How to index a csv dataset into Solr using SolrJ

2023-02-10 Thread Chris Hostetter
: @Chris can you provide a sample Java code using ContentStreamUpdateRequest
: class?

I mean ... it's a SolrRequest like any other...

1) create an instance

2) add the File you want to index (or pass in some other ContentStream --
maybe a StringStream if your CSV is already in memory)

3) process() it using your SolrClient


As with most classes in SolrJ, looking at the test cases is probably the
best way to see "sample" code.  (Although some of them are explicitly
convoluted to test edge cases in the underlying implementation.)


This is probably the simplest one...

hossman@slate:~/lucene/solr [j11] [branch_9_1] $ grep -A5 'new 
ContentStreamUpdateRequest' 
solr/solrj/src/test/org/apache/solr/client/solrj/request/json/JsonQueryRequestIntegrationTest.java
ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update");
up.setParam("collection", COLLECTION_NAME);
up.addFile(getFile("solrj/books.csv"), "application/csv");
up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
UpdateResponse updateResponse = up.process(cluster.getSolrClient());
assertEquals(0, updateResponse.getStatus());
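
Assembled into a minimal, compilable sketch (the Solr URL, collection name,
and CSV path below are placeholders; this assumes SolrJ 9.x and a running
Solr instance):

```java
import java.io.File;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.Http2SolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.client.solrj.response.UpdateResponse;

public class CsvIndexer {
    public static void main(String[] args) throws Exception {
        // Base URL and collection name are placeholders -- adjust to your cluster.
        try (SolrClient client =
                new Http2SolrClient.Builder("http://localhost:8983/solr").build()) {
            ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update");
            up.setParam("collection", "mycollection");
            // Stream the CSV file as-is; Solr's CSV loader parses it server-side.
            up.addFile(new File("dataset.csv"), "application/csv");
            up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
            UpdateResponse rsp = up.process(client);
            System.out.println("status=" + rsp.getStatus());
        }
    }
}
```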





: 
: Le ven. 10 févr. 2023 à 19:22, Chris Hostetter  a
: écrit :
: 
: >
: > : what is a common use case then if it is not the csv type?
: > : how to index massively data into Solr using SolrJ
: > : You can't just read line by line each dataset you want to index.
: >
: > There are lots of usecases for using SolrJ that involve programaticlly
: > generating the SolrInputDocuments you wnat to index in solr -- frequently
: > after ready from some normalized /authoritative data store.
: >
: > If you already have data "on disk" in a format that solr can parse (csv,
: > solr's xml, a PDF file you want Solr's extraction module to parse, etc...)
: > then that's what the ContentStreamUpdateRequest is for...
: >
: >
: > 
https://solr.apache.org/docs/9_1_0/solrj/org/apache/solr/client/solrj/request/ContentStreamUpdateRequest.html
: >
: > :
: > : Le lun. 30 janv. 2023 à 14:11, Jan Høydahl  a
: > écrit :
: > :
: > : > It's not a common use case for SolrJ to post plain CSV content to Solr.
: > : > SolrJ is used to push SolrInputDocument objects. Maybe there's a way
: > to do
: > : > it by using some Generic request type and overriding content type..
: > Can you
: > : > explain more what you app will do, where that CSV file comes from in
: > the
: > : > first place and why you'd want to use SolrJ to move it to Solr, rather
: > than
: > : > curl or some other http client lib?
: > : >
: > : > Jan
: > : >
: > : > > 29. jan. 2023 kl. 20:44 skrev marc nicole :
: > : > >
: > : > > The Java code should perform the post. Any piece of code to show to
: > : > better
: > : > > explain this?
: > : > >
: > : > > thanks
: > : > >
: > : > > Le dim. 29 janv. 2023 à 20:29, Jan Høydahl  a
: > : > écrit :
: > : > >
: > : > >> Read csv in your app, create a Solr doc from each line and ingest to
: > : > Solr
: > : > >> in fitting batches. You can use a csv library or just parse each
: > line
: > : > >> yourself if the format is fixed.
: > : > >>
: > : > >> If you need to post csv directly to Solr you’d use a plain http post
: > : > with
: > : > >> content-type csv, but in most cases your app would do that.
: > : > >>
: > : > >> Jan Høydahl
: > : > >>
: > : > >>> 29. jan. 2023 kl. 20:21 skrev marc nicole :
: > : > >>>
: > : > >>> Hi guys,
: > : > >>>
: > : > >>> I can't find a reference on how to index a dataset.csv file into
: > Solr
: > : > >> using
: > : > >>> SolrJ.
: > : > >>> https://solr.apache.org/guide/6_6/using-solrj.html
: > : > >>>
: > : > >>> Thanks.
: > : > >>
: > : >
: > : >
: > :
: >
: > -Hoss
: > http://www.lucidworks.com/
: 

-Hoss
http://www.lucidworks.com/

Re: multi-term synonym prevents single-term match -- known issue?

2023-02-10 Thread Rudi Seitz
Thanks Mikhail and Michael.
Based on your feedback, I created a ticket:
https://issues.apache.org/jira/browse/SOLR-16652
In the ticket, I explained why updating the synonym rule or setting sow=true
unfortunately causes other problems in this case. I haven't yet looked
through the code to see where the behavior could be changed.
Rudi


On Fri, Feb 10, 2023 at 11:26 AM Michael Gibney 
wrote:

> Rudi,
>
> I agree, this does not seem like how it should behave. Probably
> something that could be fixed in edismax, not something lower-level
> (Lucene)?
>
> Michael
>
> On Fri, Feb 10, 2023 at 9:38 AM Mikhail Khludnev  wrote:
> >
> > Hello, Rudi.
> > Well, it doesn't seem perfect. Probably it can be fixed
> > via
> > foo bar,zzz,foo,bar
> > And in some sort of sense this behavior is reasonable.
> > You can also experiment with the sow and pf params (the latter param is
> > described on the dismax page only).
> >
> > On Thu, Feb 9, 2023 at 8:19 PM Rudi Seitz  wrote:
> >
> > > Is this known behavior or is it worth a JIRA ticket?
> > >
> > > Searching against a text_general field in Solr 9.1, if my edismax
> query is
> > > "foo bar" I should be able to get matches for "foo" without "bar" and
> vice
> > > versa. However, if there happens to be a synonym rule applied at query
> > > time, like "foo bar,zzz" I can no longer get single-term matches
> against
> > > "foo" or "bar." Both terms are now required, but can occur in either
> order.
> > > If we change the text_general analysis chain to apply synonyms at index
> > > time instead of query time, this behavior goes away and single-term
> matches
> > > are again possible.
> > >
> > > To reproduce, use the _default configset with "foo bar,zzz" added to
> > > synonyms.txt. Index these four docs:
> > >
> > > {"id":"1", "title_txt":"foo"}
> > > {"id":"2", "title_txt":"bar"}
> > > {"id":"3", "title_txt":"foo bar"}
> > > {"id":"4", "title_txt":"bar foo"}
> > >
> > > Issue a query for "foo bar" (i.e.
> > > defType=edismax&q.op=OR&qf=title_txt&q=foo bar)
> > > Result: Only docs 3 and 4 come back
> > >
> > > Issue a query for "bar foo"
> > > Result: All four docs come back; the synonym rule is not invoked
> > >
> > > Looking at the explain output for "foo bar" we see:
> > >
> > > +((title_txt:zzz (+title_txt:foo +title_txt:bar)))
> > >
> > >
> > > Looking at the explain output for "bar foo" we see:
> > >
> > > +((title_txt:bar) (title_txt:foo))
> > >
> > > So, the observed behavior makes sense according to the low-level query
> > > structure. But -- is this how it's "supposed" to work?
> > >
> > > Why not expand the "foo bar" query like this instead?
> > >
> > > +((title_txt:zzz (title_txt:foo title_txt:bar)))
> > >
> > > Rudi
> > >
> >
> >
> > --
> > Sincerely yours
> > Mikhail Khludnev
> > https://t.me/MUST_SEARCH
> > A caveat: Cyrillic!
>