If I want RR (read repair) to work, does this mean the read consistency
level has to be EACH_QUORUM or ALL?
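
For reference, the consistency level is chosen per request. Below is a
minimal sketch, assuming the DataStax Python driver (cassandra-driver); the
contact point "dc1-node1", keyspace "ks", and table "users" are hypothetical
placeholders, not anything from this thread.

    # Minimal sketch, assuming the DataStax Python driver (cassandra-driver).
    # The contact point, keyspace, and table names are hypothetical.
    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(["dc1-node1"])   # coordinator contacted in DC1
    session = cluster.connect("ks")

    # A read at ALL must consult every replica in every DC, so any stale
    # replica involved in the read can be brought up to date on the read
    # path. EACH_QUORUM is the analogous per-DC quorum level.
    stmt = SimpleStatement(
        "SELECT * FROM users WHERE id = %s",
        consistency_level=ConsistencyLevel.ALL,
    )
    row = session.execute(stmt, ("some-id",)).one()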

On Mon, Nov 21, 2011 at 2:29 AM, Jeremiah Jordan <
jeremiah.jor...@morningstar.com> wrote:

> If hinting is off, Read Repair and Manual Repair are the only ways the data
> will get there (just like when a single node is down).
>
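
For the manual-repair route, a rough sketch of what that could look like
(host names and the keyspace are hypothetical; assumes `nodetool` is on the
PATH and can reach each node):

    # Sketch only: trigger a manual (anti-entropy) repair on each DC2 node
    # once it is back. Host names and the keyspace name are hypothetical.
    import subprocess

    DC2_NODES = ["dc2-node1", "dc2-node2"]
    KEYSPACE = "ks"

    for host in DC2_NODES:
        # `nodetool -h <host> repair <keyspace>` synchronises that node's
        # data with the other replicas.
        subprocess.run(["nodetool", "-h", host, "repair", KEYSPACE], check=True)
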
> On Nov 20, 2011, at 6:01 AM, Boris Yen wrote:
>
> A quick question: what if DC2 goes down and comes back up after a while?
> How does the data get synced to DC2 in this case? (Assume hinted handoff is
> disabled.)
>
> Thanks in advance.
>
> On Thu, Nov 17, 2011 at 10:46 AM, Jeremiah Jordan <
> jeremiah.jor...@morningstar.com> wrote:
>
>> Pretty sure the data is sent to the coordinating node in DC2 at the same
>> time it is sent to the replicas in DC1, so I would expect tens of
>> milliseconds on top of the transport time to DC2.
>>
>> On Nov 16, 2011, at 3:48 PM, ehers...@gmail.com wrote:
>>
>> On a related note - assuming there are ample resources across the board
>> (CPU and memory on every node, low network latency, non-saturated
>> NICs/circuits/disks), what's a reasonable expectation for replication
>> timing? Sub-second? Less than five seconds?
>>
>> Ernie
>>
>> On Wed, Nov 16, 2011 at 4:00 PM, Brian Fleming <bigbrianflem...@gmail.com
>> > wrote:
>>
>>> Great - thanks Jake
>>>
>>> B.
>>>
>>> On Wed, Nov 16, 2011 at 8:40 PM, Jake Luciani <jak...@gmail.com> wrote:
>>>
>>>> the former
>>>>
>>>>
>>>> On Wed, Nov 16, 2011 at 3:33 PM, Brian Fleming <
>>>> bigbrianflem...@gmail.com> wrote:
>>>>
>>>>>
>>>>> Hi All,
>>>>>
>>>>> I have a question about inter-data centre replication: if you have 2
>>>>> data centres, each with a local RF of 2 (i.e. a total RF of 4), and you
>>>>> write to a node in DC1, how efficient is the replication to DC2 - i.e.
>>>>> is that data:
>>>>>  - replicated over to a single node in DC2 once and then replicated
>>>>> internally, or
>>>>>  - replicated explicitly to two separate nodes?
>>>>>
>>>>> Obviously, from a WAN resource utilisation perspective, the former
>>>>> would be preferable.
>>>>>
>>>>> Many thanks,
>>>>>
>>>>> Brian
>>>>>
>>>>>
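
For what it's worth, the layout described above corresponds to a keyspace
using NetworkTopologyStrategy with a replication factor of 2 per data
centre. A minimal sketch, again assuming the Python driver; the keyspace
name, contact point, and the DC names "DC1"/"DC2" are hypothetical and must
match the data centre names reported by the snitch:

    # Minimal sketch of the keyspace layout described in the question:
    # NetworkTopologyStrategy with RF 2 in each data centre. All names
    # here are hypothetical placeholders.
    from cassandra.cluster import Cluster

    session = Cluster(["dc1-node1"]).connect()
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS ks
        WITH replication = {
            'class': 'NetworkTopologyStrategy',
            'DC1': 2,
            'DC2': 2
        }
    """)

Per Jake's answer above ("the former"), a write coordinated in DC1 is
forwarded across the inter-DC link to one node in DC2, which then replicates
it to the other local replica.
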
>>>>
>>>>
>>>> --
>>>> http://twitter.com/tjake
>>>>
>>>
>>>
>>
>>
>
>
