[ https://issues.apache.org/jira/browse/SOLR-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057955#comment-14057955 ]

Mark Miller commented on SOLR-5656:
-----------------------------------

bq. It seems this may be the case but I just want to confirm it: will this 
issue obviate the pointless replication (duplication) of data on a shared file 
system between replicas?

This is just another option. It works with or without replicas for a shard. 
There are trade-offs in failover transparency, failover time, and query 
throughput depending on what you choose.
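
For concreteness, a minimal sketch of how a collection would opt in, assuming 
the autoAddReplicas collection property from the attached patches (host, port, 
and collection name are placeholders):

  # Sketch only: create an HDFS-backed collection whose lost replicas the
  # Overseer may re-create on surviving nodes.
  curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=1&autoAddReplicas=true'

The intent, per the issue description below, is that with replicationFactor=1 
failover means a surviving node opens the existing HDFS index rather than 
copying data.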

Another option I'm about to start pursuing is SOLR-6237, "An option to have 
only leaders write and replicas read when using a shared file system with 
SolrCloud."

I don't yet fully know what trade-offs may come up with that approach.

> Add autoAddReplicas feature for shared file systems.
> ----------------------------------------------------
>
>                 Key: SOLR-5656
>                 URL: https://issues.apache.org/jira/browse/SOLR-5656
>             Project: Solr
>          Issue Type: New Feature
>            Reporter: Mark Miller
>            Assignee: Mark Miller
>         Attachments: SOLR-5656.patch, SOLR-5656.patch, SOLR-5656.patch, 
> SOLR-5656.patch
>
>
> When using HDFS, the Overseer should have the ability to reassign the cores 
> from failed nodes to running nodes.
> Given that the index and transaction logs are in HDFS, it's simple for 
> surviving hardware to take over serving cores for failed hardware.
> There are some tricky issues around having the Overseer handle this for you, 
> but it seems a simple first pass is not too difficult.
> This will add another alternative to replicating with both HDFS and Solr.
> It shouldn't be specific to HDFS; it would be an option for any shared file 
> system Solr supports.
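
The description assumes the index and transaction logs already live on a 
shared file system. A minimal sketch of what that looks like with HDFS today 
(the namenode URI, path, and block cache setting are illustrative only):

  <!-- solrconfig.xml: keep the index in HDFS so any node can open it -->
  <directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
    <str name="solr.hdfs.home">hdfs://namenode:8020/solr</str>
    <bool name="solr.hdfs.blockcache.enabled">true</bool>
  </directoryFactory>

  <indexConfig>
    <!-- HDFS-aware lock factory instead of native/simple file locks -->
    <lockType>hdfs</lockType>
  </indexConfig>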



