I think I've gotten past this. The "client" is not blowing up anymore.
There is something else; I'll start a new thread.
On Wed, Dec 6, 2023 at 3:36 PM Vince McMahon wrote:
> So, I've increased the SOLR_JAVA_MEM by 8 times. That memory increase is
> confirmed by the solr status command. But no luck. The I
So, I've increased the SOLR_JAVA_MEM by 8 times. That memory increase is
confirmed by the solr status command. But no luck. The indexing process using
full-import with the Clean and Commit options says:
*Indexing completed. Added/Updated: 915000 documents. Deleted 0 documents.
(Duration: 20s)*
The solr count qu
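For reference, a minimal SolrJ sketch of triggering the same kind of
full-import (clean + commit) and then checking DIH status; the core name
and URL are assumptions for illustration, not taken from this thread:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrRequest;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.GenericSolrRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;

    public class DihFullImport {
        public static void main(String[] args) throws Exception {
            // Hypothetical core name and URL -- adjust to your setup.
            try (SolrClient solr = new HttpSolrClient.Builder(
                    "http://localhost:8983/solr/mycore").build()) {

                // Kick off a full-import with clean=true and commit=true,
                // mirroring the options used in this thread.
                ModifiableSolrParams params = new ModifiableSolrParams();
                params.set("command", "full-import");
                params.set("clean", "true");
                params.set("commit", "true");
                System.out.println(solr.request(new GenericSolrRequest(
                        SolrRequest.METHOD.GET, "/dataimport", params)));

                // Ask DIH for its current status (running, idle, rows fetched).
                ModifiableSolrParams status = new ModifiableSolrParams();
                status.set("command", "status");
                System.out.println(solr.request(new GenericSolrRequest(
                        SolrRequest.METHOD.GET, "/dataimport", status)));
            }
        }
    }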
Gus, that is possible. How best to fix it?
On Wed, Dec 6, 2023 at 12:05 PM Gus Heck wrote:
> Looks like your client crashed out while trying to receive the response
> perhaps?
>
> Caused by: java.io.IOException: An established connection was aborted
> by the software in your host machine
>
Looks like your client crashed out while trying to receive the response
perhaps?
Caused by: java.io.IOException: An established connection was aborted
by the software in your host machine
at sun.nio.ch.SocketDispatcher.writev0(Native Method) ~[?:?]
at sun.nio.ch.SocketDispatcher.writev(Unk
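If the abort is happening on the client side, one common mitigation is
raising the SolrJ timeouts so a long-running import response isn't cut off
mid-transfer. A sketch, assuming HttpSolrClient; the URL and timeout values
are illustrative:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class LongRunningClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL; timeout values are illustrative only.
            try (SolrClient solr = new HttpSolrClient.Builder(
                    "http://localhost:8983/solr/mycore")
                    .withConnectionTimeout(15_000)  // ms to open the TCP connection
                    .withSocketTimeout(600_000)     // ms to wait for data; long imports need more
                    .build()) {
                // ... issue long-running requests here ...
            }
        }
    }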
Thanks for the suggestion.
10 rows work for full-import, and the Solr count query confirms 10 imported. :)
I've tried with debug=false and saw no improvement with a larger number of
rows, such as 755000. I reduced the source side from 1 million rows to
just 755000 to test volume, borrowing the idea of "10 ro
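For completeness, the count query mentioned above could look roughly like
this in SolrJ (rows=0 so only numFound comes back, not the documents); the
core name is an assumption:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class CountCheck {
        public static void main(String[] args) throws Exception {
            try (SolrClient solr = new HttpSolrClient.Builder(
                    "http://localhost:8983/solr/mycore").build()) {
                SolrQuery q = new SolrQuery("*:*");
                q.setRows(0);  // we only need the total hit count
                long count = solr.query(q).getResults().getNumFound();
                System.out.println("Documents in index: " + count);
            }
        }
    }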
Hi
My blind guess is that the exception is related to the huge debug=true
output. Regarding wiping indices - I have no idea. Some params might be
defined in solrconfig.xml where /dataimport is defined; it's worth
reviewing. Might another intervening process be wiping the index? To summarize:
1. Check /data