My bad; should have checked the code:

/**
 * This function executes local and remote reads, and blocks for the results:
 *
 * 1. Get the replica locations, sorted by response time according to the snitch
 * 2. Send a data request to the closest replica, and digest requests to either
 *    a) all the replicas, if read repair is enabled
 *    b) the closest R-1 replicas, where R is the number required to satisfy the ConsistencyLevel
 * 3. Wait for a response from R replicas
 * 4. If the digests (if any) match the data return the data
 * 5. else carry out read repair by getting data from all the nodes.
 */
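To make steps 2b/4/5 concrete, here is a rough, self-contained Java sketch of the coordinator's digest check and the fallback to a full-data read repair. The Replica interface, digestOf helper and readRepair method are hypothetical stand-ins for illustration only, not Cassandra's internal API, and reconciliation is omitted:

import java.util.*;

public class ReadPathSketch {
    // Hypothetical replica abstraction: can serve full data or just a digest.
    interface Replica {
        byte[] readData();
        byte[] readDigest();   // e.g. a hash over the row contents
    }

    static byte[] read(List<Replica> sortedByLatency, int blockFor) {
        // Step 2: full data from the closest replica...
        Replica closest = sortedByLatency.get(0);
        byte[] data = closest.readData();
        byte[] dataDigest = digestOf(data);

        // ...and digest requests to the next blockFor-1 replicas (step 2b).
        for (Replica r : sortedByLatency.subList(1, blockFor)) {
            if (!Arrays.equals(dataDigest, r.readDigest())) {
                // Step 5: digest mismatch -> read full data from all nodes.
                return readRepair(sortedByLatency);
            }
        }
        return data;   // Step 4: digests match, return the data.
    }

    static byte[] readRepair(List<Replica> replicas) {
        // Reconciliation (newest timestamp wins, write back to stale
        // replicas) is omitted here for brevity.
        return replicas.get(0).readData();
    }

    static byte[] digestOf(byte[] data) {
        try {
            return java.security.MessageDigest.getInstance("MD5").digest(data);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new AssertionError(e);
        }
    }
}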
On Feb 21, 2014, at 3:10 AM, Duncan Sands <duncan.sa...@gmail.com> wrote:

> Hi Graham,
>
> On 21/02/14 07:54, graham sanderson wrote:
>> Note also that reading at ONE there will be no read repair, since the
>> coordinator does not know that another replica has stale data (remember, at
>> ONE, basically only one node is asked for the answer).
>
> I don't think this is right. My understanding is that while only one node
> will be sent a direct read request, all other replicas will (not on every
> query - it depends on the value of read_repair_chance) get a background read
> repair request. You can test this experimentally using cqlsh and turning
> tracing on: issue a read request many times. Most of the time you will see
> that the coordinator sends a message to one node, but from time to time
> (depending on read_repair_chance) you will see it sending messages to many
> nodes.
>
> Best wishes, Duncan.
>
>> In practice for our use cases, we always write at LOCAL_QUORUM (failing the
>> whole update if that doesn't work - stale data is OK if >1 node is down),
>> and we read at LOCAL_QUORUM, but (because stale data is better than no
>> data) we will fall back per read request to LOCAL_ONE if we detect that
>> there were insufficient nodes - this lets us cope with 2 down nodes in a 3
>> replica environment (or more if the nodes are not consecutive in the ring).
>>
>> On Feb 20, 2014, at 11:21 PM, Drew Kutcharian <d...@venarc.com> wrote:
>>
>>> Hi Guys,
>>>
>>> I wanted to get some clarification on what happens when you write and read
>>> at consistency level 1. Say I have a keyspace with replication factor of 3
>>> and a table which will contain write-once/read-only wide rows. If I write
>>> at consistency level 1 and the write happens on node A and I read back at
>>> consistency level 1 from another node other than A, say B, will C* return
>>> "not found" or will it trigger a read-repair before responding? In
>>> addition, what's the best consistency level for reading/writing
>>> write-once/read-only wide rows?
>>>
>>> Thanks,
>>>
>>> Drew
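For what it's worth, the per-request fallback described above (read at LOCAL_QUORUM, drop to LOCAL_ONE if too few replicas are up) looks roughly like this with the DataStax Java driver. The keyspace, table and column names are made up for illustration:

import com.datastax.driver.core.*;
import com.datastax.driver.core.exceptions.UnavailableException;

public class FallbackRead {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace");

        Statement read = new SimpleStatement(
                "SELECT * FROM wide_rows WHERE partition_key = ?", "some-key");

        ResultSet rs;
        try {
            // Preferred: only succeed if a quorum of local replicas respond.
            read.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            rs = session.execute(read);
        } catch (UnavailableException e) {
            // Not enough live replicas for LOCAL_QUORUM: accept possibly
            // stale data rather than no data, per-request.
            read.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
            rs = session.execute(read);
        }

        for (Row row : rs) {
            System.out.println(row);
        }
        cluster.close();
    }
}

A retry policy on the driver can do something similar automatically, but doing it explicitly per request keeps the "stale data is acceptable here" decision visible in the calling code.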