Obviously QUORUM_OR_ONE is in general no better than ONE. However, we hardly EVER fall back to ONE, and when we do it is a conscious choice. I'm okay with hiding it if it is too tempting, but for insert/append-only workloads without deletes or TTLs it is a perfectly good trade-off. Why not just read at ONE then, you say? Because we insert very, very fast and so drop some mutations. To that end, when I say we hardly ever fall back to ONE: with hinting, read repair, and speculative reads we can generally prove we have the right data at quorum (we like to know). Note that each partition key may have multiple values written over time, and generally the chance of them being read anywhere close to when they are written is small.
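To make that concrete, the per-read fallback looks roughly like the sketch below - written against the DataStax Java driver API for familiarity rather than our own driver, with readQuorumOrOne as a hypothetical helper name:

    import com.datastax.driver.core.{ConsistencyLevel, ResultSet, Session, SimpleStatement}
    import com.datastax.driver.core.exceptions.{ReadTimeoutException, UnavailableException}

    // Hypothetical helper: attempt the read at LOCAL_QUORUM, and only on a
    // quorum failure retry once at LOCAL_ONE.
    def readQuorumOrOne(session: Session, cql: String): ResultSet = {
      val stmt = new SimpleStatement(cql)
      stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM)
      try session.execute(stmt)
      catch {
        case _: UnavailableException | _: ReadTimeoutException =>
          // Conscious, per-read downgrade: some data fast beats a failure here.
          stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE)
          session.execute(stmt)
      }
    }

The key point is that only reads ever take this path; writes that miss quorum simply fail.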
We don't do this fallback to ONE for everything - it is entirely based on the use case.

P.S. I am not familiar with the DowngradingConsistencyRetryPolicy since we use our own (open source) Scala CQL driver, but we only downgrade on read, not write, which may be where some of your argument is coming from. I.e., our requirement is that we know whether the stored data is good or not: if we choose to read at quorum, it will be, or the read will fail. We always fail writes if they don't achieve quorum.

> On Oct 11, 2015, at 4:04 PM, Ryan Svihla <r...@foundev.pro> wrote:
>
> DowngradingConsistencyRetryPolicy suffers from effectively being the downgraded consistency policy, aka CL ONE. I think it's helpful to remember that consistency level is effectively a contract on your consistency: if you do "quorum or one" you're basically at CL ONE. Think of it this way: CL ONE usually ends up writing to all RF nodes anyway, but you're only requiring one to acknowledge - how is that any different from "quorum or one"? If you only have one node up it'll be CL ONE; if you have two nodes up it'll be CL QUORUM.
>
> This approach somehow accomplishes the worst of both worlds: the latency of QUORUM (since the quorum attempt has to fail before it downgrades) and the consistency contract of ONE. It really is a pretty terrible engineering tradeoff. Plus, if you're okay with ONE some of the time, you're okay with ONE all the time.
>
> For clarity, I think DowngradingConsistencyRetryPolicy should be deprecated; I think it totally gets people thinking the wrong way about consistency level.
>
> On Sun, Oct 11, 2015 at 11:48 AM, Eric Stevens <migh...@gmail.com> wrote:
> The DataStax Java driver is based on Netty and is non-blocking; if you do any CQL work you should look into it. At ProtectWise we use it with high write volumes from Scala/Akka with great success.
>
> We have a thin Scala wrapper around the Java driver that makes it act more Scala-ish (e.g. Scala futures instead of Java futures, string contexts to construct statements, and so on). This has also let us do some other cool things, like integrate Zipkin tracing at the driver level, and add other utilities such as token-aware batches and concurrent token-aware batch selects.
>
> On Sat, Oct 10, 2015 at 2:49 PM Graham Sanderson <gra...@vast.com> wrote:
> Cool - yeah, we are still on astyanax-based drivers and our own built-from-scratch, 100% non-blocking Scala driver that we used in akka-like environments.
>
> Sent from my iPhone
>
> On Oct 10, 2015, at 12:12 AM, Steve Robenalt <sroben...@highwire.org> wrote:
>
>> Hi Graham,
>>
>> I've used the Java driver's DowngradingConsistencyRetryPolicy for that in cases where it makes sense.
>>
>> Ref: http://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/policies/DowngradingConsistencyRetryPolicy.html
>>
>> Steve
>>
>> On Fri, Oct 9, 2015 at 6:06 PM, Graham Sanderson <gra...@vast.com> wrote:
>> Actually maybe I'll open a JIRA issue for a (local)quorum_or_one consistency level... It should be trivial to implement on the server side with existing timeouts...
>> I'll need to check the CQL protocol to see if there is a good place to indicate you didn't reach quorum (in time).
>>
>> Sent from my iPhone
>>
>> On Oct 9, 2015, at 8:02 PM, Graham Sanderson <gra...@vast.com> wrote:
>>
>>> Most of our writes are not user-facing, so local_quorum is good... We also read at local_quorum because we prefer guaranteed consistency... but we very quickly fall back to local_one in the cases where some data fast is better than a failure. Currently we do that on a per-read basis, but we could, I suppose, detect a pattern, or just look at the gossip, to decide to go en masse into a degraded read mode.
>>>
>>> Sent from my iPhone
>>>
>>> On Oct 9, 2015, at 5:39 PM, Steve Robenalt <sroben...@highwire.org> wrote:
>>>
>>>> Hi Brice,
>>>>
>>>> I agree with your nit-picky comment, particularly with respect to the OP's emphasis, but there are many cases where reading at ONE is sufficient and performance is "better enough" to justify the possibility of a wrong result. As with anything Cassandra, it's highly dependent on the nature of the workload.
>>>>
>>>> Steve
>>>>
>>>> On Fri, Oct 9, 2015 at 12:36 PM, Brice Dutheil <brice.duth...@gmail.com> wrote:
>>>> On Fri, Oct 9, 2015 at 2:27 AM, Steve Robenalt <sroben...@highwire.org> wrote:
>>>>
>>>> In general, if you write at QUORUM and read at ONE (or the LOCAL variants thereof if you have multiple data centers), your apps will work well despite the theoretical consistency issues.
>>>>
>>>> Nit-picky comment: if consistency is something important, then reading at QUORUM is important. If the read is at ONE, the read operation may not see an important update. The safest option is QUORUM for both write and read; then, depending on the business or feature, the consistency may be tuned.
>>>>
>>>> — Brice
>>>>
>>>> --
>>>> Steve Robenalt
>>>> Software Architect
>>>> sroben...@highwire.org
>>>> (office/cell): 916-505-1785
>>>>
>>>> HighWire Press, Inc.
>>>> 425 Broadway St, Redwood City, CA 94063
>>>> www.highwire.org
>>>>
>>>> Technology for Scholarly Communication
>>
>> --
>> Steve Robenalt
>> Software Architect
>> sroben...@highwire.org
>> (office/cell): 916-505-1785
>>
>> HighWire Press, Inc.
>> 425 Broadway St, Redwood City, CA 94063
>> www.highwire.org
>>
>> Technology for Scholarly Communication
>
> --
>
> Thanks,
>
> Ryan Svihla
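For reference, the DowngradingConsistencyRetryPolicy discussed above is installed cluster-wide with the 2.1 Java driver roughly as follows - a sketch only, with a placeholder contact point, and LoggingRetryPolicy wrapped around it so any downgrades at least show up in the logs:

    import com.datastax.driver.core.Cluster
    import com.datastax.driver.core.policies.{DowngradingConsistencyRetryPolicy, LoggingRetryPolicy}

    // Installed once on the Cluster, the policy applies to every statement:
    // on an UnavailableException or a timeout it may retry the operation at
    // a reduced consistency level - the blanket behavior argued against above,
    // as opposed to an explicit per-read fallback.
    val cluster = Cluster.builder()
      .addContactPoint("127.0.0.1") // placeholder contact point
      .withRetryPolicy(new LoggingRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE))
      .build()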