Vincenzo,

If you use the Solr Operator <https://solr.apache.org/operator/>, it will
manage the upgrades for you in a safe manner (waiting for a configurable
number of replicas to be healthy before moving on to the next node).

Hopefully the following documentation pages will help:

   - CRD Options for Update Strategy
     <https://apache.github.io/solr-operator/docs/solr-cloud/solr-cloud-crd.html#update-strategy>
   - Managed Update Logic
     <https://apache.github.io/solr-operator/docs/solr-cloud/managed-updates.html>

You can configure it so that it upgrades at most one Solr node at a time,
with at most one replica of each shard unhealthy at any given time.
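
For example, here is a minimal sketch of that configuration on the
SolrCloud resource (the resource name "example" and the replica count are
placeholders, not from this thread):

apiVersion: solr.apache.org/v1beta1
kind: SolrCloud
metadata:
  name: example        # placeholder name
spec:
  replicas: 3          # placeholder cluster size
  updateStrategy:
    method: Managed
    managed:
      # Upgrade at most 1 Solr pod at a time
      maxPodsUnavailable: 1
      # Keep at most 1 replica of each shard unavailable during the rollout
      maxShardReplicasUnavailable: 1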

- Houston

On Wed, Oct 27, 2021 at 12:29 PM Vincenzo D'Amore <v.dam...@gmail.com>
wrote:

> Hi Rob, thanks for your help.
> Do you know whether, in case of failure (initFailures not empty),
> /solr/admin/cores changes the HTTP status code of the response to 500 (or
> anything other than 200)?
>
> On Wed, Oct 27, 2021 at 6:13 PM Robert Pearce <rp3...@gmail.com> wrote:
>
> > Take a look at the cores REST API, something like
> >
> > http://localhost:8983/solr/admin/cores?action=STATUS&wt=json
> >
> > Any failed cores will be in "initFailures"; cores which started will be
> > under "status"
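> >
> > For example, a quick readiness check could look like this (a sketch, not
> > from the original thread; it assumes jq is installed and treats an empty
> > initFailures map as healthy):
> >
> > curl -s 'http://localhost:8983/solr/admin/cores?action=STATUS&wt=json' \
> >   | jq -e '.initFailures | length == 0'
> >
> > jq -e exits non-zero when the expression evaluates to false, so a command
> > like this can be used directly as a Kubernetes readiness probe.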
> >
> > Rob
> >
> > > On 27 Oct 2021, at 16:28, Vincenzo D'Amore <v.dam...@gmail.com> wrote:
> > >
> > > Hi all,
> > >
> > > when a Solr instance is started, I would like to be sure that all the
> > > indexes present are up and running; in other words, that the instance
> > > is healthy.
> > > The health status (aka liveness/readiness) is especially useful when a
> > > Kubernetes SolrCloud cluster has to be restarted for configuration
> > > management and you want to apply your changes one node at a time.
> > > AFAIK I can only ping one index at a time; there is no out-of-the-box
> > > way to test that a group of indexes is active (green status).
> > > Have you ever faced the same problem? What do you think?
> > >
> > > Best regards,
> > > Vincenzo
> > >
> > > --
> > > Vincenzo D'Amore
> >
>
>
> --
> Vincenzo D'Amore
>
