A new CDCR architecture is discussed in SIP-13
https://cwiki.apache.org/confluence/display/SOLR/SIP-13:+Cross+Data+Center+Replication
and being worked on in the sandbox repo
https://github.com/apache/solr-sandbox/tree/crossdc-wip
You should check with Anshum and Mark for details.
Jan
> 1. jul
Hi,
With multiple tenants, scaling on the #tenants axis is simply a matter of adding new
collections to the cluster. That should be fairly simple with K8s and the
Solr Operator. First add N new nodes to your EKS cluster, then scale up your
SolrCloud deployment through the Solr Operator to add more pods, which will then pop up as "empty"
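As a rough sketch of the scaling step (the resource name "example" and namespace
"solr" are placeholders, not something from your setup), bumping spec.replicas on
the SolrCloud resource is all the operator needs:

  kubectl -n solr patch solrcloud example --type merge \
    -p '{"spec":{"replicas":6}}'

The operator then creates the extra pods, and they join the cluster as empty Solr
nodes ready to receive new collections or moved replicas.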
Hi,
The Admin UI sends Ajax requests to Solr, and to do that it needs the basic
auth credentials.
With Solr's built-in auth, the Admin UI will remember the creds and explicitly pass
them on every Ajax request.
But with 3rd-party auth in nginx, the Admin UI will not be able to do that.
Perhaps look for a B
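For context, "Solr's built-in auth" here means a security.json along these lines
(just a sketch; the credential value is a placeholder, not a real hash):

  {
    "authentication": {
      "class": "solr.BasicAuthPlugin",
      "blockUnknown": true,
      "credentials": { "solr": "<base64 SHA-256 hash> <base64 salt>" }
    },
    "authorization": {
      "class": "solr.RuleBasedAuthorizationPlugin",
      "user-role": { "solr": "admin" },
      "permissions": [ { "name": "all", "role": "admin" } ]
    }
  }

With that in place the Admin UI prompts for the username/password itself and passes
them along on its Ajax requests.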
Hi,
This will not cause any issues. You should however configure all three ZK hosts
in your ZK_HOST setting for Solr.
Beware that Solr's connection to Zookeeper does NOT support dynamic
configuration as provided by Zookeeper, i.e. if you e.g. want to resize your ZK
cluster you will still have t
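For example, in solr.in.sh (host names are placeholders, and the /solr chroot is
optional):

  ZK_HOST="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/solr"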
I need to backup to a network file system to support recovery. I do not
want the index on a network file system, so just mounting /var/solr/data
isn't an option. I have attempted to set the location in the replication
handler, but it is not working. I've tried all of these configurations.
Thank you very much for the reply.
On Fri, Aug 5, 2022 at 5:43 PM Jan Høydahl wrote:
> Hi,
>
> This will not cause any issues. You should however configure all three ZK
> hosts in your ZK_HOST setting for Solr.
>
> Beware that Solr's connection to Zookeeper does NOT support dynamic
> configuratio
On 8/5/22 07:00, Thomas Woodard wrote:
optimize
optimize
2
00:00:20
/var/i8s/backup/solr/${i8s.environment}/${solr.core.name}
The backups after optimize are happening, but they are going to the default
locations, not the configured location. For
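For an explicit, on-demand backup the replication handler does accept a location on
the request itself, something like this (host, core name and path are placeholders):

  curl "http://localhost:8983/solr/mycore/replication?command=backup&location=/var/i8s/backup/solr/dev/mycore&numberToKeep=2"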
On 8/5/22 07:42, Shawn Heisey wrote:
I've confirmed that it isn't a path security issue, by verifying that all
paths are allowed:
2022-08-05 12:29:03.873 INFO (main) [ ] o.a.s.c.CoreContainer Allowing
use of paths: [_ALL_]
I missed this part of your email until after I had already sent my ot
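For reference, that allow list comes from the allowPaths setting; on a locked-down
install you would point it at the backup directory, e.g. in solr.in.sh (the path is
a placeholder):

  SOLR_OPTS="$SOLR_OPTS -Dsolr.allowPaths=/var/i8s/backup"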
That is exactly what I was afraid of. Not being able to configure where
automated backups go seems like a pretty major oversight, though. Is anyone
aware of a solution other than creating a bunch of soft links?
On Fri, Aug 5, 2022 at 8:52 AM Shawn Heisey wrote:
> On 8/5/22 07:42, Shawn Heisey wr
Can’t you just make a cron job that runs an sh file that does a cp -rf on the
data folder with a time stamp? The indexes are drop-in when needed
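Something like this crontab entry would be the crude version (path and schedule
invented for the example; note that copying a live index directory can catch a merge
in progress, so ideally pause indexing first):

  15 2 * * * cp -rf /var/solr/data /backups/solr-data-$(date +\%Y\%m\%d)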
> On Aug 5, 2022, at 12:07 PM, Thomas Woodard wrote:
>
> That is exactly what I was afraid of. Not being able to configure where
> automated backups
Actually, soft links won't work either, because the snapshots aren't in a
subdirectory of data, and each one has a different name.
Cron on ec2 is a bit of a pain, but yes, that does seem like the
best solution available.
On Fri, Aug 5, 2022 at 11:15 AM Dave wrote:
> Can’t you just make a cron j
On 8/5/22 10:06, Thomas Woodard wrote:
That is exactly what I was afraid of. Not being able to configure where
automated backups go seems like a pretty major oversight, though. Is anyone
aware of a solution other than creating a bunch of soft links?
The symlink idea I had (but haven't mentioned
If you have any metal, a cron doing an rsync against EC2 may work well; hell,
you could do that with a cheap laptop that has a large hard drive running Linux,
is plugged in, and doesn’t sleep. Enterprise? No. Works? Certainly
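Roughly (host and paths invented for the example):

  0 3 * * * rsync -a --delete ec2-host:/var/solr/data/ /backups/solr/data/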
> On Aug 5, 2022, at 12:31 PM, Thomas Woodard wrote:
>
> Actuall
Thanks for the rapid replies. I've opened
https://issues.apache.org/jira/browse/SOLR-16326 and will proceed with
scripting a scheduled backup instead.
On Fri, Aug 5, 2022 at 11:36 AM Shawn Heisey wrote:
> On 8/5/22 10:06, Thomas Woodard wrote:
> > That is exactly what I was afraid of. Not being
Just looked at some other handler configurations, I think you may suffer
from a typo... should
/var/i8s/backup/solr/${i8s.environment}/${solr.core.name}
have been
/var/i8s/backup/solr/${i8s.environment}/${solr.core.name}
(note the s)
On Fri, Aug 5, 2022 at 1:05 PM Thom
Yup, I absolutely did typo when I tried to do it as a default. I'll update
my issue to correct that.
On Fri, Aug 5, 2022 at 12:31 PM Gus Heck wrote:
> Just looked at some other handler configurations, I think you may suffer
> from a typo... should
>
>
> /var/i8s/backup/solr/${i8s.environm
On 8/5/22 11:56, Thomas Woodard wrote:
Yup, I absolutely did typo when I tried to do it as a default. I'll update
my issue to correct that.
It will be interesting to see whether fixing the typo makes it work.
Sometimes the code is hard to decipher, and it is always possible that
it does appl
If it doesn't apply the defaults, that's the bug right there, I think.
On Fri, Aug 5, 2022 at 2:10 PM Shawn Heisey wrote:
> On 8/5/22 11:56, Thomas Woodard wrote:
> > Yup, I absolutely did typo when I tried to do it as a default. I'll
> update
> > my issue to correct that.
>
> It will be interesti
Hi,
We recently migrated from Solr 6.x to 8.11. We reindexed the data on Solr
8.11 binaries.
We have a master/slave configuration. The indexing happens on the master
(leader) and replicates to the slaves (followers).
We have around 8 cores on each server; they are like 8 collections,
serving di
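For reference, the follower/slave side of that setup lives in solrconfig.xml, roughly
like this (leader URL and poll interval are placeholders; 8.x also accepts the newer
leader/follower element names):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="slave">
      <str name="masterUrl">http://leader-host:8983/solr/corename/replication</str>
      <str name="pollInterval">00:00:60</str>
    </lst>
  </requestHandler>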
On 8/5/22 14:21, Surya R wrote:
When the Solr daemon is restarted, the cores do appear on the admin
console, but when a query is hit against a core immediately, we don't get
a response; it spins for about 20 seconds, and only after I see the
below message in the log do I get the results. Why is
Unfortunately, in my architecture I cannot rely on a database or on an
updated/created time field. There is a potentially infinite stream of documents
with a possibly huge amount of duplication.
So avoiding the indexing of duplicate documents should (I suppose) improve
performance.
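If it helps, the usual Solr-side tool for that is the de-duplication update
processor; a minimal sketch (the field list is a placeholder for whatever identifies
a duplicate in your documents):

  <updateRequestProcessorChain name="dedupe">
    <processor class="solr.processor.SignatureUpdateProcessorFactory">
      <bool name="enabled">true</bool>
      <str name="signatureField">id</str>
      <bool name="overwriteDupes">false</bool>
      <str name="fields">name,features,cat</str>
      <str name="signatureClass">solr.processor.Lookup3Signature</str>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory" />
    <processor class="solr.RunUpdateProcessorFactory" />
  </updateRequestProcessorChain>

With signatureField pointing at the uniqueKey, documents with identical content hash
to the same id and simply overwrite each other instead of piling up.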
On Fri, 5 A
While looking into a problem described on the #solr slack channel, I
tried to have Solr optimize my core. It seems to have completely ignored
the command. I am running 9.1.0-SNAPSHOT, compiled from branch_9x.
The user on slack also tried to optimize their index, running version
8.11.2, and th
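For reference, the usual way to issue an optimize is the optimize flag on the update
handler, e.g. (host and core name are whatever yours are):

  curl "http://localhost:8983/solr/mycore/update?optimize=true&maxSegments=1"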
I recently hit this problem on 8.11.1. It was a tiny test index with 2
segments. One of the segments *might* have been from 7.x. I wanted to
optimize and rewrite the index into a single segment in 8.x. But optimize
didn’t work.
In this case though, before optimize, numDocs=maxDocs. So I thought
I have just enabled DMARC rejection for my domain. Hoping that messages
to the list can still get through.
This error has happened again. Does anyone yet have any explanation or
suggestion?
-----Original Message-----
From: Oakley, Craig (NIH/NLM/NCBI) [C]
Sent: Monday, May 02, 2022 2:29 PM
To: users@solr.apache.org
Subject: Re: IllegalArgumentException: Unknown directory
This has happened several mo