Hi All,
I am using Solr 8.8.2 in cloud mode. I previously uploaded a config set named
abc and created multiple collections based on that config set.
Now I need to update the config set; when I tried to upload the new config set
zip file to Solr and overwrite the previous one as below, I got
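(For context, an upload-with-overwrite call against the ConfigSets API typically looks like the following; the base URL and zip path are placeholders, and the overwrite flag requires Solr 8.7 or later.)

```shell
# Overwrite the existing "abc" configset with a new zip.
# Host/port and zip path are placeholders; adjust to your deployment.
curl -X POST \
  -H "Content-Type: application/octet-stream" \
  --data-binary @abc.zip \
  "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=abc&overwrite=true"
```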
Hi, everyone.
First, I want to say that English is not my first language, so I apologize for
any mistakes.
Recently, we moved from Solr 6 to Solr 8.
Now we want to start using nested documents in our collections.
We are going over our custom plugins in order to make them work with nested
documents, and we w
The stacktrace looks pretty much like
https://issues.apache.org/jira/browse/SOLR-16110
As a workaround you might be able to upload the configset directly to
zookeeper from command line using 'solr zk upconfig' and reload the collections.
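A sketch of that workaround (ZK host, paths, configset and collection names are placeholders):

```shell
# Push the updated configset directly to ZooKeeper, bypassing the upload API...
bin/solr zk upconfig -z localhost:2181 -n abc -d /path/to/abc/conf
# ...then reload each collection that uses it.
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"
```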
Regards
Steffen
-Original Message-
From: yang ma
Maor, your question makes sense. I remember this discussion a few
years ago, but I don't remember where it ended up.
Alexandre,
Do you remember it?
On Wed, Jun 8, 2022 at 10:50 AM maoredri wrote:
> Hi, everyone.
> First I want say that English is not my first language, so I apologize for
> any mis
Hi All,
Our Solr is bottlenecking on write performance (high CPU usage, writes
queue up). I'm looking for some tips on what to investigate to figure out
whether we can squeeze more write performance out of it without changing the
setup too drastically.
Here's the setup:
* Solr 8.2 (I know, could be upgrad
Hi all, I tested the solr:7.5.0 and solr-8-11-1 docker images as a single node
and as a cluster (solr-cluster + zk-cluster), and I found some problems;
maybe it is a new issue.
I configured Solr with a data volume folder /var/solr/data containing the
solr.xml and zoo.cfg files. When I created collecti
Could you please post your docker-compose.yaml?
On Wed, Jun 8, 2022 at 1:36 PM Yurii Aleshchenko <
yurii.aleshche...@kommunion.com> wrote:
> Hi all, I tested solr:7.5.0 and solr-8-11-1 docker images as single node
> and as cluster solr-cluster + zk-cluster , and I found some problems,
> maybe it
On 6/8/2022 3:35 AM, Marius Grigaitis wrote:
* 9 different cores. Each weighs around 100 MB on disk and has
approximately 90k documents inside.
* Updating is performed using the update method in batches of 1000, with
around 9 processes in parallel (split by core)
This means that indexing within ea
Yes, no problem, attached below:
On 2022/06/08 11:58:12 Vincenzo D'Amore wrote:
> Could you please post your docker-compose.yaml?
>
> On Wed, Jun 8, 2022 at 1:36 PM Yurii Aleshchenko <
> yurii.aleshche...@kommunion.com> wrote:
>
> > Hi all, I tested solr:7.5.0 and solr-8-11-1 docker images as si
Hi,
We are using Solr 7.7.3.
We have observed that deleteByQuery is causing sudden spikes in JVM memory
usage, leading to OOM.
Can someone please guide me on which Solr configuration parameters I should
check?
Thanks
On 6/8/2022 6:45 AM, Parag Ninawe wrote:
We are using Solr 7.7.3
We have observed that deleteByQuery parameter causing sudden spikes in JVM
causing OOM.
What do you know about the OOM? Is it an OS-level OOM or a Java-level
OOM? If it's Java, have you seen the actual exception? A whole bunch
On 6/8/2022 4:01 AM, Yurii Aleshchenko wrote:
How can I save my data in a docker volume, and why did Solr delete all
collections and cores on startup?
It sounds like when you are recreating the setup, you are starting
with a brand-new, empty ZK database.
All the collection configuration is i
I agree with Shawn.
ZooKeeper saves the data log into /datalog, and I see you missed adding the
volume for /datalog for ZooKeeper in your docker-compose.
That should do the trick.
On Wed, Jun 8, 2022 at 3:21 PM Shawn Heisey wrote:
> On 6/8/2022 4:01 AM, Yurii Aleshchenko wrote:
> > How can I save
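For illustration, the missing mount would look roughly like this in docker-compose (service name and host paths are hypothetical; the official zookeeper image keeps snapshots in /data and the transaction log in /datalog):

```yaml
services:
  zookeeper:
    image: zookeeper:3.6
    volumes:
      - ./zk/data:/data        # snapshots
      - ./zk/datalog:/datalog  # transaction log (the missing mount)
```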
Thank you very much to you and Shawn; it is working.
On 2022/06/08 13:51:45 Vincenzo D'Amore wrote:
> I agree with Shawn.
> Zookeeper saves the data log into /datalog and I see you missed
adding the
> volume for /datalog for zookeeper in your docker-compose.
> That should do the trick
>
> On Wed, J
* Go multi-threaded for each core, as Shawn says. Try e.g. 2, 3 and 4 threads.
* Experiment with different batch sizes, e.g. try 500 and 2000; what is optimal
depends on your docs.
* Do NOT commit after each batch of 1000 docs. Instead, commit as seldom as
your requirements allow, e.g. try commitW
> * Do NOT commit after each batch of 1000 docs. Instead, commit as seldom
as your requirements allow, e.g. try commitWithin=60000 to commit every
minute
This is the big one: commit after the entire process is done, or on a
timer if you don't need NRT searching; rarely does anyone ever need that.
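A sketch of the batching side of that advice in Python (the core name and URL shape are illustrative; the key point is one commitWithin hint per request instead of an explicit commit after every batch):

```python
import json

def batches(docs, size=1000):
    """Yield successive batches of `size` docs."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

def update_request(core, batch, commit_within_ms=60000):
    """Build the URL and JSON body for one Solr update request.

    commitWithin is in milliseconds, so 60000 asks Solr to commit within a
    minute on its own schedule, rather than forcing a hard commit per batch.
    """
    url = f"/solr/{core}/update?commitWithin={commit_within_ms}"
    return url, json.dumps(batch)

docs = [{"id": str(i)} for i in range(2500)]
reqs = [update_request("mycore", b) for b in batches(docs)]
print(len(reqs))  # 2500 docs in batches of 1000 -> 3 requests
```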
Hi,
We are currently using Solr 4.9.0, which connects to Oracle 12cR1, and we are
planning to upgrade our database to Oracle 19c. So, the question that I have is:
is Solr 4.9.0 compatible with Oracle 19c, and if not, what is the minimum
version of Solr that supports the Oracle 19c database?
Appreciat
> On Jun 8, 2022, at 2:35 PM, Yennam, M wrote:
>
> We are currently using Solr 4.9.0 which is connecting Oracle 12cR1 and we are
> planning to upgrade our Database to Oracle 19c. So, the question that I have
> is – Is SOLR 4.9.0 compatible with Oracle 19c, if not what is the minimum
> versio
I checked the source. There's nothing like this there yet.
On Wed, Jun 8, 2022 at 10:50 AM maoredri wrote:
> Hi, everyone.
> First I want say that English is not my first language, so I apologize for
> any mistakes.
>
> Recently, we moved from Solr 6 to Solr 8.
> now we want to start using neste
Sorry for such a long post.
We have a 4-node SolrCloud cluster running Solr 8.11.1. There are 2 nodes in
one AWS region and 2 nodes in another region. All nodes are in peered VPCs.
All communication between the nodes uses direct IP calls (no DNS). One
node in each region holds replicas of multiple coll
Also note that use of the Data Import Handler (DIH) is no longer supported by
the Solr community. DIH has become a separate project (
https://github.com/rohitbemax/dataimporthandler) and seems to be in need of
some folks who care enough to contribute fixes to it. Using another tool or
custom code to
> some folks who care enough to contribute fixes to it. Using another tool or
> custom code to query the database and submit updates via the solr JSON api
> or SolrJ client is currently recommended over DIH.
That’s why I had to write a tool to do the exporting from Oracle, massaging
into JSON, an
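A minimal sketch of that kind of export tool, using SQLite in place of Oracle and leaving the actual HTTP POST aside (table and column names are made up):

```python
import json
import sqlite3

def export_rows(conn, query):
    """Run a query and turn each row into a dict keyed by column name,
    ready to serialize as a Solr JSON update payload."""
    cur = conn.execute(query)
    cols = [c[0] for c in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

# Demo with an in-memory database standing in for Oracle.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id TEXT, title TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?)",
                 [("1", "first"), ("2", "second")])

payload = json.dumps(export_rows(conn, "SELECT id, title FROM docs"))
# This JSON array could then be POSTed to /solr/<collection>/update/json/docs.
print(payload)
```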
I suspect you are hitting this bug...
https://issues.apache.org/jira/browse/SOLR-16203
...but AFAIK that would only happen if you are explicitly using
ClassicIndexSchemaFactory in your solrconfig.xml ... can you confirm?
Assuming I'm correct, then either switching to ManagedIndexSchemaFa
Hi,
After upgrading from 7.5 to 8.11, the core admin API for renaming a core
has stopped working. For example, when I try to run this,
https://internal-kp-stage.test.com/solr/admin/cores?action=RENAME&core=knowledge_shard1_replica_n1&other=knowledge
it throws the following error.
{
"responseHeade
On 6/8/2022 5:06 PM, Shamik Bandopadhyay wrote:
"msg": "Not supported in SolrCloud",
Using the CoreAdmin API when running in cloud mode is a REALLY bad
idea. The CoreAdmin API cannot touch information in ZooKeeper. With
part of the information for a SolrCloud collection being in zookeep
Hi Shawn,
Thanks for the insight. As you've mentioned, renaming the core in the
core.properties file does create unwanted consequences; I did give it a
try in a test environment earlier. Renaming the core is not essential for
us, it's just to add some convenience for a few folks using Solr
On 2022-06-08 3:01 PM, Andy Lester wrote:
On Jun 8, 2022, at 2:35 PM, Yennam, M wrote:
We are currently using Solr 4.9.0 which is connecting Oracle 12cR1 and we are
planning to upgrade our Database to Oracle 19c. So, the question that I have is
– Is SOLR 4.9.0 compatible with Oracle 19c, i