Hi,
Good Morning.
We have an 8+1 node Solr cluster, where one indexing node hosts all 8 NRT
primary shards. This is where all indexing happens. Then we have another 8
nodes, each hosting one PULL replica of a primary shard.
To limit queries to the PULL replicas we have made the following changes in
On 10/10/22 01:57, Satya Nand wrote:
*"Do not add the shards parameter to the standard request handler; doing so
may cause search queries may enter an infinite loop. Instead, define a new
request handler that uses the shards parameter, and pass distributed search
requests to that handler."*
so
Hi Shawn,
>
> The standard request handler is usually the one named "/select". You may
> want to add a new handler for this purpose.
We are already using a custom request handler; in fact, there is no
/select handler in our Solr config.
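For what it's worth, a minimal SolrJ sketch of what the quoted documentation describes:
sending a distributed request with an explicit shards list to a non-default request
handler. The handler name, hosts, and core names below are hypothetical, and the
client setup assumes a SolrJ 8.x-style HttpSolrClient builder.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ShardsParamExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical node and collection names, for illustration only.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://solr-pull-1:8983/solr/products").build()) {
      SolrQuery q = new SolrQuery("*:*");
      // Send the request to a custom handler rather than the standard /select,
      // as the reference guide advises when passing an explicit shards list.
      q.setRequestHandler("/distrib-search");
      // Point each shard entry at a specific core, e.g. the PULL replica cores.
      q.set("shards",
          "solr-pull-1:8983/solr/products_shard1_replica_p1,"
        + "solr-pull-2:8983/solr/products_shard2_replica_p2");
      QueryResponse rsp = client.query(q);
      System.out.println("hits: " + rsp.getResults().getNumFound());
    }
  }
}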
Your message subject says you are in cloud mode. If tha
On 10/10/22 06:00, Satya Nand wrote:
Yes, we are using SolrCloud. The reason I don't want to specify the
shard names is that a request can be sent to any replica of a shard based
on preference and availability, but I specifically want to limit requests
to the PULL-type replica of a shard.
I am
Shawn,
Actually, we were using the preference parameter, but recently we faced an
issue where one PULL replica went down (due to a GCP machine restart) and
requests started going to the NRT replica.
The machine hosting the NRT replica is pretty weak.
That's why I was experimenting with the shards parameter with
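For reference, a minimal SolrJ sketch of the preference-based routing being
discussed (hypothetical ZooKeeper hosts and collection name, SolrJ 8.x-style
builder). Note that shards.preference is only a preference, so as described
above a request can still land on the NRT replica when a PULL replica is down.

import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PullPreferenceExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical ZooKeeper ensemble; the collection is named per request.
    try (CloudSolrClient client = new CloudSolrClient.Builder(
            List.of("zk1:2181", "zk2:2181", "zk3:2181"), Optional.empty()).build()) {
      SolrQuery q = new SolrQuery("*:*");
      // Prefer PULL replicas. This is a preference, not a hard restriction:
      // if no PULL replica is live, the query falls back to other replica types.
      q.set("shards.preference", "replica.type:PULL");
      QueryResponse rsp = client.query("products", q);
      System.out.println("hits: " + rsp.getResults().getNumFound());
    }
  }
}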
On 10/10/22 06:58, Satya Nand wrote:
Actually, we were using the preference parameter, but recently we faced an
issue where one PULL replica went down (due to a GCP machine restart) and
requests started going to the NRT replica.
The machine hosting the NRT replica is pretty weak.
That's why I was experimenting
Thanks, Shawn, for sharing all the possibilities; we will try to evaluate all
of them.
On Mon, 10 Oct, 2022, 6:45 pm Shawn Heisey, wrote:
> On 10/10/22 06:58, Satya Nand wrote:
> > Actually, we were using the preference parameter, but recently we faced an
> > issue where one PULL replica went down (due to
We had Solr running as a Windows service under the generic 'system' account.
We tried running under a user account with elevated permissions AND also granting
that account full security control over the folder in question, and we still see
the same permission error.
Full error logged:
ERROR (qtp1
On 10/10/22 09:23, Joe Jones (DHCW - Software Development) wrote:
java.security.AccessControlException: access denied ("java.io.FilePermission"
"D:\Solr\backup\node1" "read")
This is saying that it failed to READ that directory. I had expected to
see a failure to WRITE.
Maybe that will be
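One thing worth noting: java.security.AccessControlException comes from the Java
security manager's policy check, not from a Windows ACL denial (an OS-level denial
would typically surface as an IOException such as AccessDeniedException). A tiny
standalone sketch of the exact check being denied, using only JDK classes
(AccessController is deprecated on Java 17 but still present); the path is taken
from the error above:

import java.io.FilePermission;
import java.security.AccessControlException;
import java.security.AccessController;

public class CheckBackupPermission {
  public static void main(String[] args) {
    // The permission the Solr log reports as denied.
    FilePermission p = new FilePermission("D:\\Solr\\backup\\node1", "read");
    try {
      // With no security manager installed this always succeeds; the denial in
      // the log therefore comes from the security policy Solr runs under,
      // not from the folder's Windows permissions.
      AccessController.checkPermission(p);
      System.out.println("granted: " + p);
    } catch (AccessControlException e) {
      System.out.println("denied: " + e.getMessage());
    }
  }
}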
Exactly. On Linux I would just do a 777 on such a directory anyway, since no
one outside of the machine can get to it; no Solr server should have a
public IP.
> On Oct 10, 2022, at 12:51 PM, Shawn Heisey wrote:
>
> On 10/10/22 09:23, Joe Jones (DHCW - Software Development) wrote:
>> ja
Hello, I am learning more about replication as I maintain a large set of
Solr 6 servers configured for Master/Slave.
I noticed that during some replication activities, in addition to the original
index dir under the core name on the file system, there is a dir named "index"
with a timestamp, index.. Fi
As I go back through
https://solr.apache.org/guide/6_6/index-replication.html, the picture is
filling in a little more. My guess is that the tmp dir referenced is the
index. dir.
I am very interested in the cases that might generate a full replication. To my
knowledge, no optimize commands have been issued agains
Only an optimize or a large segment merge would cause large file deposits
there. That’s why “slaves” should always have double the index size available:
Solr will decide on its own when to merge or optimize on the master, so the
slaves need to be ready for double the size, and the master nee
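If it helps to confirm when a full copy is being pulled (which is when one of
those timestamped index directories gets built on the slave), the /replication
handler's details command can be polled. A rough SolrJ sketch, with a
hypothetical slave host and core name, assuming a SolrJ version with the
HttpSolrClient builder:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ReplicationDetails {
  public static void main(String[] args) throws Exception {
    // Hypothetical slave host and core name.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://slave-host:8983/solr/mycore").build()) {
      SolrQuery q = new SolrQuery();
      q.setRequestHandler("/replication"); // sent as the request path by SolrJ
      q.set("command", "details");
      QueryResponse rsp = client.query(q);
      // Dump the whole response; the slave section reports the index
      // version/generation being fetched and whether a copy is in progress.
      System.out.println(rsp.getResponse());
    }
  }
}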
Hi,
On the Sematext blog, I read the following about the TLOG replication interval:
The poll time from the replica to the master is set to half of the autoCommit
property value or, if autoCommit is not defined, 50% of the autoSoftCommit
value. If both are not present, it is set to 1500 milliseconds.
There are no details for PULL replicas, but
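Just to make the quoted rule concrete, a tiny sketch of the fallback chain as
that blog describes it (this is not Solr's actual code, and whether PULL
replicas follow the same rule is exactly the open question here):

public class PollIntervalRule {
  /**
   * Poll interval as described in the quoted blog post:
   * half of autoCommit, else half of autoSoftCommit, else 1500 ms.
   * Pass null for a setting that is not configured.
   */
  static long pollIntervalMs(Long autoCommitMs, Long autoSoftCommitMs) {
    if (autoCommitMs != null) {
      return autoCommitMs / 2;
    }
    if (autoSoftCommitMs != null) {
      return autoSoftCommitMs / 2;
    }
    return 1500L;
  }

  public static void main(String[] args) {
    System.out.println(pollIntervalMs(60_000L, null)); // 30000
    System.out.println(pollIntervalMs(null, 10_000L)); // 5000
    System.out.println(pollIntervalMs(null, null));    // 1500
  }
}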
Hi all,
We've deployed Solr 9 on OpenJDK 17 and it crashed after a few hours with the
following error:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x7f834389b332, pid=8997, tid=9025
#
# JRE version: OpenJDK Runtime Environment Microsoft-40354 (17.0.4.1+
I won’t say for certain, as I have never seen this, but this seems like a garbage
collection situation. Look there first to see if you can rule that out as the
cause.
> On Oct 10, 2022, at 5:59 PM, Jen-Ya Ku wrote:
>
>
> Hi all,
>
> We've deployed solr9 on OpenJDK 17 and it crashed after f
Thanks, Dave.
It looks like PhaseIdealLoop::build_loop_late_post_work is JIT runtime
(HotSpot compilation) stuff?
We got this error right after upgrading from Solr 8.11 to Solr 9.
Thanks,
Jen-Ya
On Mon, Oct 10, 2022 at 3:25 PM Dave wrote:
> I won’t say for certain as I have never seen this but this se
On 10/10/22 15:58, Jen-Ya Ku wrote:
We've deployed Solr 9 on OpenJDK 17 and it crashed after a few hours with the
following error:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x7f834389b332, pid=8997, tid=9025
#
# JRE version: OpenJDK Runtime Environm
On 2022-10-10 4:58 PM, Jen-Ya Ku wrote:
Hi all,
We've deployed Solr 9 on OpenJDK 17 and it crashed after a few hours with the
following error:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x7f834389b332, pid=8997, tid=9025
What are you running it o
Hi,
from what I see, you are using a neural network implementation as the model
(org.apache.solr.ltr.model.NeuralNetworkModel?) and I agree it is
definitely not the best in terms of explainability
(org.apache.solr.ltr.model.NeuralNetworkModel#explain).
Effectively it just summarizes the layers, the w
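For anyone following this thread, a rough sketch using Lucene's Explanation API
of the readability difference: per-feature contributions versus a layer-level
summary. Feature names, values, and layer descriptions are made up; this is not
the actual output of NeuralNetworkModel#explain.

import org.apache.lucene.search.Explanation;

public class LtrExplainSketch {
  public static void main(String[] args) {
    // What a per-feature style explanation might look like: each feature's
    // value is visible, so you can see what drove the score.
    Explanation perFeature = Explanation.match(0.83f,
        "hypothetical model score, computed from:",
        Explanation.match(12.4f, "feature [titleMatch]"),
        Explanation.match(0.7f, "feature [recencyBoost]"),
        Explanation.match(3.1f, "feature [clickRate]"));

    // What a layer-summary style explanation (as with a neural model) tends
    // to look like: layer shapes and activations, but no per-feature weight.
    Explanation layerSummary = Explanation.match(0.83f,
        "hypothetical neural model score, layers:",
        Explanation.match(0f, "layer 1: 3 -> 8 units, activation=relu"),
        Explanation.match(0f, "layer 2: 8 -> 1 unit, activation=sigmoid"));

    System.out.println(perFeature);
    System.out.println(layerSummary);
  }
}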