Hi, I am curious how people practically use snapshot restore, given that a snapshot restore can lead to inconsistent reads until a full repair is run on the restored node (if you have dropped mutations in your cluster).
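You can see whether a node has been dropping writes from the dropped-message counters that nodetool tpstats reports:

    nodetool tpstats   # look at the MUTATION row in the dropped message counts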
Example: a snapshot is taken on all 3 nodes at 9 am, node 3 drops a mutation at 10 am, and the snapshot is restored on node 1 at 11 am. With QUORUM writes, the 10 am write landed on nodes 1 and 2 (node 3 dropped it), so after restoring node 1 to the 9 am state that data exists only on node 2, and we will observe inconsistent reads until node 1 is repaired.

If you instead restore the snapshot with join_ring=false, repair the node, and only join it once the repair completes, the node will not cause inconsistent reads, but it will miss the writes that arrive while it is being repaired: simply booting a node with join_ring=false also stops writes from being pushed to it (unlike bootstrap with join_ring=false, where writes are pushed to the node being bootstrapped). So you would need yet another full repair to bring the node restored from the snapshot in sync with the other nodes.

It's hard to believe that such a simple snapshot restore scenario is still broken and people are not complaining. So I thought of asking the community members: how do you practically use snapshot restore while addressing the read inconsistency issue? (The rough join_ring=false sequence I am describing is sketched below my signature.)

Thanks,
Anuj
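For concreteness, this is roughly the sequence I mean. The paths, keyspace/table and snapshot names are just placeholders, and how the JVM option gets passed depends on your install, so treat it as a sketch:

    # on node 1: stop Cassandra, clear the live SSTables for the table,
    # and copy the 9 am snapshot back in
    # (snapshots live under <table_dir>/snapshots/<snapshot_name>/)
    sudo service cassandra stop
    cp /var/lib/cassandra/data/my_ks/my_table-*/snapshots/snap_9am/* \
       /var/lib/cassandra/data/my_ks/my_table-*/

    # start the node without joining the ring so it serves no reads yet,
    # e.g. add -Dcassandra.join_ring=false to JVM_OPTS in cassandra-env.sh
    sudo service cassandra start

    # repair it while it is out of the ring, then join once repair finishes
    nodetool repair -full my_ks   # or plain "nodetool repair" where full is the default
    nodetool join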