It appears that "alter table test.test_root with speculative_retry =
'NONE';" is also valid.

Seems a bit more definitive :)
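
For reference, a minimal cqlsh sketch combining the suggestions in this
thread (the verification query against system.schema_columnfamilies is an
assumption based on the C* 2.1.x versions mentioned below):

    ALTER TABLE test.test_root WITH speculative_retry = 'NONE';

    -- confirm the setting took effect (2.1-era schema tables):
    SELECT speculative_retry
    FROM system.schema_columnfamilies
    WHERE keyspace_name = 'test' AND columnfamily_name = 'test_root';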

On Wed, Sep 9, 2015 at 12:11 PM, Eric Plowe <eric.pl...@gmail.com> wrote:

> Yeah, that's what I did. Just wanted to verify that it will indeed turn
> it off.
>
> On Wednesday, September 9, 2015, Laing, Michael <michael.la...@nytimes.com>
> wrote:
>
>> "alter table test.test_root WITH speculative_retry = '0.0PERCENTILE';"
>>
>> seemed to work for me with C* version 2.1.7
>>
>> On Wed, Sep 9, 2015 at 10:11 AM, Eric Plowe <eric.pl...@gmail.com> wrote:
>>
>>> Would this work:
>>>
>>> ALTER TABLE session_state WITH speculative_retry = '0ms';
>>> ALTER TABLE session_state WITH speculative_retry = '0PERCENTILE';
>>>
>>> I can't set it to 0, but was wondering if these would have the same
>>> effect?
>>>
>>> ~Eric
>>>
>>> On Wed, Sep 9, 2015 at 8:19 AM, Eric Plowe <eric.pl...@gmail.com> wrote:
>>>
>>>> Interesting. I'll give it a try and report back my findings.
>>>>
>>>> Thank you, Michael.
>>>>
>>>>
>>>> On Wednesday, September 9, 2015, Laing, Michael <
>>>> michael.la...@nytimes.com> wrote:
>>>>
>>>>> Perhaps a variation on
>>>>> https://issues.apache.org/jira/browse/CASSANDRA-9753?
>>>>>
>>>>> You could try setting speculative_retry to 0 to avoid cross-DC reads.
>>>>>
>>>>> On Wed, Sep 9, 2015 at 7:55 AM, Eric Plowe <eric.pl...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> read_repair_chance: 0
>>>>>> dclocal_read_repair_chance: 0.1
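
These are per-table options; a minimal sketch of zeroing both out, using
the session_state table named in this thread:

    ALTER TABLE session_state
        WITH read_repair_chance = 0.0
        AND dclocal_read_repair_chance = 0.0;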
>>>>>>
>>>>>>
>>>>>> On Wednesday, September 9, 2015, Laing, Michael <
>>>>>> michael.la...@nytimes.com> wrote:
>>>>>>
>>>>>>> What are your read repair settings?
>>>>>>>
>>>>>>> On Tue, Sep 8, 2015 at 9:28 PM, Eric Plowe <eric.pl...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> To expand further: we have two data centers, Miami and Dallas.
>>>>>>>> Dallas is our disaster recovery data center. The cluster has 12
>>>>>>>> nodes, 6 in Miami and 6 in Dallas. The servers in Miami only
>>>>>>>> read/write to Miami using the data-center-aware load balancing
>>>>>>>> policy of the driver. We see the problem when writing and reading
>>>>>>>> to the Miami data center with LOCAL_QUORUM.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Eric
>>>>>>>>
>>>>>>>> On Tuesday, September 8, 2015, Eric Plowe <eric.pl...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Rob,
>>>>>>>>>
>>>>>>>>> All writes/reads are happening from DC1. DC2 is a backup. The web
>>>>>>>>> app does not handle live requests from DC2.
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>> Eric Plowe
>>>>>>>>>
>>>>>>>>> On Tuesday, September 8, 2015, Robert Coli <rc...@eventbrite.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> On Tue, Sep 8, 2015 at 4:40 PM, Eric Plowe <eric.pl...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> I'm using Cassandra as a storage mechanism for session state
>>>>>>>>>>> persistence for an ASP.NET web application. I am seeing issues
>>>>>>>>>>> where session state is persisted on one page (setting a value:
>>>>>>>>>>> Session["key"] = "value"), but when the app redirects to another
>>>>>>>>>>> page (from a postback event) and checks for the value that was
>>>>>>>>>>> just set, the value doesn't exist.
>>>>>>>>>>>
>>>>>>>>>>> It's a 12 node cluster with 2 data centers (6 and 6) running
>>>>>>>>>>> 2.1.9. The keyspace that the column family lives in has an RF
>>>>>>>>>>> of 3 for each data center. The session state provider is using
>>>>>>>>>>> the DataStax C# driver v2.1.6. Writes and reads are at
>>>>>>>>>>> LOCAL_QUORUM.
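
As an illustration of that topology, the keyspace DDL would look roughly
like the following (the keyspace and data center names are assumptions,
not taken from the thread):

    CREATE KEYSPACE session_data WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'Miami': 3,
        'Dallas': 3
    };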
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 1) Write to DC_A with LOCAL_QUORUM
>>>>>>>>>> 2) Replication to DC_B takes longer than it takes to...
>>>>>>>>>> 3) Read from DC_B with LOCAL_QUORUM, do not see the write from 1)
>>>>>>>>>>
>>>>>>>>>> If you want to be able to read your writes from DC_A in DC_B,
>>>>>>>>>> you're going to need to use EACH_QUORUM.
>>>>>>>>>>
>>>>>>>>>> =Rob
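
A minimal cqlsh sketch of that suggestion (the session_state columns are
hypothetical; with the DataStax C# driver the equivalent is setting
ConsistencyLevel.EachQuorum on the statement or query options):

    -- cqlsh: apply EACH_QUORUM to subsequent statements
    CONSISTENCY EACH_QUORUM;

    -- hypothetical schema, for illustration only
    INSERT INTO session_state (session_id, state) VALUES ('abc123', 0x00);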
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>
>>>>>
>>>
>>
