Hello, Rudi.
Well, it doesn't seem perfect. Probably it can be fixed
via
foo bar,zzz,foo,bar
And in some sense this behavior is reasonable.
Also, you can experiment with the sow and pf params (the latter param is
described only on the dismax page).
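For anyone who wants to experiment with those params from SolrJ, a minimal
sketch might look like the following. The base URL, collection name, and
field names here are placeholders, not anything from Rudi's actual setup.

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.Http2SolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class EdismaxParamsSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder base URL and collection name.
            try (SolrClient client =
                    new Http2SolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {

                SolrQuery q = new SolrQuery("foo bar");
                q.set("defType", "edismax");
                q.set("qf", "title_t");   // placeholder query field
                q.set("pf", "title_t");   // phrase field: boosts docs matching the whole phrase
                q.set("sow", "false");    // split-on-whitespace; affects multi-term synonym handling
                QueryResponse rsp = client.query(q);
                System.out.println("hits: " + rsp.getResults().getNumFound());
            }
        }
    }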
On Thu, Feb 9, 2023 at 8:19 PM Rudi Seitz wrote:
Hi All,
We have a cluster of 30 nodes and each node has 750 GB of data.
There are 420 shards. Shards and data are well distributed across all nodes.
JVM Settings ->
JDK: Amazon.com Inc. OpenJDK 64-Bit Server VM 17.0.1 17.0.1+12-LTS
Processors: 48
JVM Args:
-DSTOP.KEY=solrrocks
-DSTOP.PORT=7983
-D
Rudi,
I agree, this does not seem like how it should behave. Probably
something that could be fixed in edismax, not something lower-level
(Lucene)?
Michael
On Fri, Feb 10, 2023 at 9:38 AM Mikhail Khludnev wrote:
>
> Hello, Rudi.
> Well, it doesn't seem perfect. Probably it can be fixed
> via
Hi All,
Solr 9.1.1 doesn't currently allow me to make a full restore of a backup
where the data and config set are stored in an S3 bucket. The error I
receive on each run is "The specified key does not exist". Additionally, the
full message is:
An AmazonServiceException was thrown! [serviceName
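For reference, a restore against an S3-backed repository can be issued from
SolrJ roughly as sketched below; whether this matches how the failing restore
was actually triggered is an assumption, and the collection, backup name,
repository name, and location are all placeholders.

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.Http2SolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class RestoreFromS3Sketch {
        public static void main(String[] args) throws Exception {
            // Placeholder base URL; collection, backup name, repository name,
            // and location below are made up as well.
            try (SolrClient client =
                    new Http2SolrClient.Builder("http://localhost:8983/solr").build()) {

                CollectionAdminRequest.Restore restore =
                    CollectionAdminRequest.restoreCollection("restored_collection", "nightly_backup");
                restore.setRepositoryName("s3");   // repository as defined in solr.xml
                restore.setLocation("/backups");   // path inside the S3 bucket
                restore.process(client);
            }
        }
    }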
What is a common use case then, if it is not the CSV type?
How do you index large amounts of data into Solr using SolrJ?
You can't just read line by line each dataset you want to index.
On Mon, Jan 30, 2023 at 14:11, Jan Høydahl wrote:
> It's not a common use case for SolrJ to post plain CSV content to So
: What is a common use case then, if it is not the CSV type?
: How do you index large amounts of data into Solr using SolrJ?
: You can't just read line by line each dataset you want to index.
There are lots of use cases for using SolrJ that involve programmatically
generating the SolrInputDocuments you want to
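As a rough illustration of that point (not anyone's actual code from this
thread), a loop that builds SolrInputDocuments and sends them in batches
might look like this; the base URL, collection, and field names are
assumptions.

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.Http2SolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class ProgrammaticIndexingSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder base URL/collection and field names.
            try (SolrClient client =
                    new Http2SolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {

                List<SolrInputDocument> batch = new ArrayList<>();
                for (int i = 0; i < 10_000; i++) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", Integer.toString(i));
                    doc.addField("title_t", "document " + i);
                    batch.add(doc);

                    // Send in batches rather than one document per request.
                    if (batch.size() == 1000) {
                        client.add(batch);
                        batch.clear();
                    }
                }
                if (!batch.isEmpty()) {
                    client.add(batch);
                }
                client.commit();
            }
        }
    }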
As part of a migration, we converted CSV data by creating multiple JSON files
of around 100 MB each, and then wrote a small shell script to inject these
files through the Solr API in a loop.
Just be aware that if you have multiple nodes, it might take some time for
the replication to complete.
@Chris, can you provide some sample Java code using the ContentStreamUpdateRequest
class?
On Fri, Feb 10, 2023 at 19:22, Chris Hostetter wrote:
>
> : What is a common use case then, if it is not the CSV type?
> : How do you index large amounts of data into Solr using SolrJ?
> : You can't just read line by line ea
: @Chris, can you provide some sample Java code using the ContentStreamUpdateRequest
: class?
I mean ... it's a SolrRequest like any other...
1) create an instance
2) add the File you want to add (or pass in some other ContentStream --
maybe StringStream if your CSV is already in memory)
3) process()
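Putting those three steps together, a minimal sketch might look like the
following; the base URL, collection name, and CSV file path are placeholders.

    import java.io.File;

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.Http2SolrClient;
    import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
    import org.apache.solr.common.util.ContentStreamBase;

    public class CsvUploadSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder base URL/collection and file path.
            try (SolrClient client =
                    new Http2SolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {

                // 1) create an instance pointed at the update handler
                ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update");

                // 2) add the File (the content type tells Solr to use the CSV loader)
                req.addFile(new File("data.csv"), "text/csv");
                // ...or, if the CSV is already in memory, pass a stream instead:
                // req.addContentStream(new ContentStreamBase.StringStream("id,title\n1,hello"));

                req.setParam("commit", "true");  // commit so the docs become searchable

                // 3) process()
                req.process(client);
            }
        }
    }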
Thanks Mikhail and Michael.
Based on your feedback, I created a ticket:
https://issues.apache.org/jira/browse/SOLR-16652
In the ticket, I mentioned why updating the synonym rule or setting
sow=true causes other problems in this case, unfortunately. I haven't yet
looked through the code to see where the