Hello, I am trying to connect XWiki to a remote Solr on the same machine
using containers. I have realized that the containers are aware of each
other on the Docker network; however, they are unable to connect via Java
to create XWiki's cores, I suppose. On the other hand, I can log into
these ports (80
I think Mike mentioned docValues because docValues can save large amounts
of heap memory for some use patterns. I was following up specifically wrt:
1. given the extra info you've provided, you don't appear to actually
_have_ any of the use cases that would benefit from docValues, and
2. only the `
On 7/22/2021 11:53 AM, Jon Morisi wrote:
RE Shawn and Michael,
I am just looking for a way to speed it up. Mike Drob had mentioned docValues,
which is why I was researching that route.
I am running my search tests from the Solr admin UI, no facets, no sorting. I am
using -Dsolr.directoryFactory=Hdf
On 7/23/2021 8:14 AM, Shawn Heisey wrote:
Or you could turn the query into a range query, and it would work much
better -- Point types are EXCELLENT for range queries.
You might not know how to do this. Try this query string:
ptokens:[243796009 TO 243796009] AND ptokens:[410512000 TO 41051200
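(As a runnable sketch of that query string: the host, port, and core name below are placeholders, and the upper bound of the second range is assumed to mirror the first since the mail above was cut off.)

    curl "http://localhost:8983/solr/mycore/select" \
      --data-urlencode 'q=ptokens:[243796009 TO 243796009] AND ptokens:[410512000 TO 410512000]'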
For me personally, any system with swap disabled is in a much better
situation, as is setting the JVM Xms and Xmx to the exact same value,
31 GB and NOT higher than that: above roughly 32 GB the JVM loses compressed
object pointers, so a bigger heap actually makes GC slower. Also yeah,
Solr on a network disk is going to be slow unless it’s on an ss
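(A concrete sketch of those settings; the file path is install-dependent and the 31g value is the one mentioned above.)

    # solr.in.sh (location varies, e.g. /etc/default/solr.in.sh)
    # min and max heap set to the same value, below the ~32 GB compressed-oops limit
    SOLR_HEAP="31g"
    # equivalent explicit form: SOLR_JAVA_MEM="-Xms31g -Xmx31g"

    # disable swap at the OS level (also remove the swap entry from /etc/fstab)
    sudo swapoff -a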
On Fri, 23 Jul 2021 at 11:05, Akreeti Agarwal
wrote:
> Classification: Confidential
> Hi All,
>
> I am using SOLR 7.5 Master/Slave architecture in my project. I just wanted
> to know: is there any way in which we can print the responses generated by a SOLR
> query into some log file, i.e. as many solr query
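Solr's own request log records the query parameters and hit counts but not full response bodies, so the usual way to capture responses is on the client side. A minimal SolrJ sketch, with a hypothetical URL and core name, that writes the whole result list to your application's log file:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class LoggedSolrQuery {
        private static final Logger log = LoggerFactory.getLogger(LoggedSolrQuery.class);

        public static void main(String[] args) throws Exception {
            // Hypothetical host and core name; point this at the node you actually query.
            try (HttpSolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
                SolrQuery query = new SolrQuery("*:*");
                query.setRows(10);
                QueryResponse response = client.query(query);
                // Log the full document list; your logging config decides which file it lands in.
                log.info("query={} numFound={} docs={}",
                        query, response.getResults().getNumFound(), response.getResults());
            }
        }
    }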
By “get one” I meant a local SSD disk btw, for example
https://www.amazon.com/SK-hynix-Gold-NAND-Internal/dp/B07SNHB4RC/ref=mp_s_a_1_3?dchild=1&keywords=server+ssd&qid=1627050778&sr=8-3
and you’re all set, unless I’m missing the reason why you would ever want to use
a network drive aside from hav
Assuming you have an interface to Solr in between your app and the Solr server, why
not just store the entire result set in JSON format in a table? It’s fast,
reliable, and does exactly what you want, yes?
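(A rough sketch of that idea; the table and column names are made up here, not from this thread.)

    -- cache Solr result sets as JSON, keyed by the query that produced them
    CREATE TABLE solr_result_cache (
        query_key    VARCHAR(512) PRIMARY KEY,  -- normalized query string or a hash of it
        result_json  TEXT NOT NULL,             -- the full Solr response body
        fetched_at   TIMESTAMP NOT NULL         -- used to expire stale entries
    );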
> On Jul 23, 2021, at 10:34 AM, Gora Mohanty wrote:
>
> On Fri, 23 Jul 2021 at 11:05, Akreeti
On 7/23/2021 12:00 AM, Kamil Kawka wrote:
Hello, I am trying to connect XWiki to a remote Solr on the same machine
using containers. I have realized that the containers are aware of each
other on the Docker network; however, they are unable to connect via
Java to create XWiki's cores, I suppose. On the
Hello,
We are on Solr 8.3 using SolrCloud. We would like all our collections
to have 2 shards and 3 replicas… As per the doc -
https://solr.apache.org/guide/8_3/collection-management.html#collection-management
- one way we can do it is through the Collection API, shown below.
$SOLRHOST/admi
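The truncated call above would look roughly like this (collection and config names are placeholders, and $SOLRHOST is assumed to already include the /solr context path):

    curl "$SOLRHOST/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=3&collection.configName=myconfig"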
Based on the documentation, docValues are applied to the concatenated
strings for sorting.
I am questioning whether faceting by the field uses docValues or not.
Thanks,
Jae
On 2021-07-23 9:53 AM, Shawn Heisey wrote:
...
But the following line from your log does look like it's probably the
whole root of the issue:
Caused by: org.apache.solr.client.solrj.SolrServerException: Server
refused connection at: http://localhost:8983/solr/xwiki_events
Looks like you've g
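For what it's worth, the "localhost" in that URL is the usual culprit in a container setup: inside the XWiki container, localhost means the XWiki container itself, not the Solr one, so the remote Solr URL has to use the Solr container's name on the shared Docker network. A minimal sketch, assuming a Compose service named "solr" and property names from memory of the XWiki admin guide (verify both for your versions):

    # xwiki.properties in the XWiki container
    solr.type=remote
    # "solr" is the Docker service/hostname of the Solr container (an assumption)
    solr.remote.baseURL=http://solr:8983/solr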
It does, but it's trappy. It facets on concatenated sort string values, but
any refinement is done on tokenized values. See:
https://issues.apache.org/jira/browse/SOLR-13056
https://issues.apache.org/jira/browse/SOLR-8362
I would not personally recommend faceting on SortableTextField (and
separat
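The usual alternative is two fields: a tokenized one for searching and a docValues string copy for faceting and sorting. A managed-schema sketch with placeholder field names:

    <!-- search on "title"; facet/sort on the docValues string copy -->
    <field name="title"     type="text_general" indexed="true"  stored="true"/>
    <field name="title_str" type="string"       indexed="false" stored="false" docValues="true"/>
    <copyField source="title" dest="title_str"/>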
Hi All,
*tl;dr* : running into long GC pauses and solr client socket timeouts when
indexing bulk of documents into solr. Commit strategy in essence is to do
hard commits at the interval of 50k documents (maxDocs=50k) and disable
soft commit altogether during bulk indexing. Simple solr cloud set up
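For reference, the strategy described above corresponds to roughly this in solrconfig.xml (a sketch; only maxDocs=50000 comes from the mail, openSearcher=false is the usual companion setting):

    <autoCommit>
      <maxDocs>50000</maxDocs>
      <openSearcher>false</openSearcher>  <!-- flush to disk without opening a new searcher -->
    </autoCommit>
    <autoSoftCommit>
      <maxTime>-1</maxTime>               <!-- soft commits disabled during the bulk load -->
    </autoSoftCommit>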
The Solr Cloud version is 8.5. I have also attached the Solr log with GC
logging enabled and our app log, which shows that there was a SocketTimeoutException.
On Fri, Jul 23, 2021 at 2:31 PM Pratik Patel wrote:
> Hi All,
>
> *tl;dr* : running into long GC pauses and solr client socket timeouts
> when indexing
First thing to try is turning on softcommits. You need to open new
searchers while indexing to free up the memory used to support
real-time-get queries. Real-time-get supports queries on uncommitted data,
so to support this, a memory component is needed for records that are
indexed but not yet visi
Thanks Shawn and Dave, some very helpful info in your emails.
I'll continue testing. It's a bit of a tough one because 2nd run queries run
fast, once they're cached.
I found using fq vs. q, to skip the scoring, interesting. What does the
query in the email below do to improve performance?
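(For reference, the fq form of Shawn's range trick would look something like this; the host and core name are placeholders, and the upper bound of the second range is assumed since the earlier mail was cut off. fq clauses skip scoring and are cached in the filterCache.)

    curl "http://localhost:8983/solr/mycore/select" \
      --data-urlencode 'q=*:*' \
      --data-urlencode 'fq=ptokens:[243796009 TO 243796009]' \
      --data-urlencode 'fq=ptokens:[410512000 TO 410512000]'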
Thanks for the response Joel.
We do not use "Real-time-get" queries. Also, we don't query the index while
a particular stage of bulk indexing is going on. Would it still help to
enable soft commits?
On Fri, Jul 23, 2021 at 3:16 PM Joel Bernstein wrote:
> First thing to try is turning on softcom
Whether you use real-time-get or not, you still need to soft commit to
release the memory used to support real-time-get.
Joel Bernstein
http://joelsolr.blogspot.com/
On Fri, Jul 23, 2021 at 3:39 PM Pratik Patel wrote:
> Thanks for the response Joel.
>
> We do not use "Real-time-get" queries. A
Interesting! I will certainly test this. What interval would you suggest
for the soft commits? Also, is there a way to disable real-time get so that
we can disable soft commits?
Triggering a soft commit would open a new searcher and recreate caches; we
would like to avoid that if possible as there's n
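(For anyone following along: re-enabling soft commits per Joel's suggestion would look roughly like this in solrconfig.xml; the one-minute interval is just an illustration, not a recommendation from this thread.)

    <autoSoftCommit>
      <maxTime>60000</maxTime>  <!-- open a new searcher every 60s, releasing the memory held for real-time get -->
    </autoSoftCommit>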
On 7/23/2021 1:29 PM, Jon Morisi wrote:
Thanks Shawn and Dave, some very helpful info in your emails.
I'll continue testing. It's a bit of a tough one because 2nd run queries run
fast, once they're cached.
I found using the fq vs. q, to skip the scoring, interesting. What does the query in th