Dear folks,
I have a doubt about how Cassandra performs a write request. I have two
scenarios; please read them and confirm which one is correct.
Assume we have a cluster consisting of 4 nodes: N1, N2, N3, and N4. As
Cassandra distributes the nodes in a ring topology, the nodes are linked as
follows:
N1 --> N2 --> N3 --> N4 --> N1
Hi folks,
Cassandra provides linearizable consistency (CAS, compare-and-set) by
using Paxos in four round-trips, as follows:
1. Prepare/promise
2. Read/result
3. Propose/accept
4. Commit/acknowledgment
Assume we have an application for registering new accounts
is, N1 will restart the whole algorithm with a new ballot).
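The four round-trips above can be sketched as a toy simulation. This is an
illustrative sketch only, NOT Cassandra's actual implementation; the class
and function names are invented, and a real coordinator talks to a quorum
over the network and retries with a higher ballot on contention.

```python
# Toy model of the four LWT round-trips: prepare/promise, read/result,
# propose/accept, commit/acknowledgment. All names are invented.

class Replica:
    def __init__(self):
        self.promised = 0        # highest ballot this replica promised
        self.accepted = None     # (ballot, value) accepted but not committed
        self.committed = None    # last committed value

    def prepare(self, ballot):
        # Round 1: promise to ignore lower ballots; report in-progress value.
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def read(self):
        # Round 2: return the current committed value for the CAS check.
        return self.committed

    def propose(self, ballot, value):
        # Round 3: accept the value if no higher ballot was promised since.
        if ballot >= self.promised:
            self.accepted = (ballot, value)
            return True
        return False

    def commit(self, ballot, value):
        # Round 4: make the value durable and acknowledge.
        self.committed = value
        self.accepted = None
        return True


def cas_insert_if_not_exists(replicas, ballot, value):
    quorum = len(replicas) // 2 + 1
    # Round 1: prepare/promise
    promises = [r.prepare(ballot) for r in replicas]
    if sum(ok for ok, _ in promises) < quorum:
        return False  # lost the ballot; caller retries with a higher one
    # Round 2: read/result -- the CAS condition is evaluated here
    results = [r.read() for r in replicas]
    current = next((v for v in results if v is not None), None)
    if current is not None:
        return False  # "IF NOT EXISTS" fails: a value already exists
    # Round 3: propose/accept
    if sum(r.propose(ballot, value) for r in replicas) < quorum:
        return False
    # Round 4: commit/acknowledgment
    return sum(r.commit(ballot, value) for r in replicas) >= quorum


replicas = [Replica() for _ in range(3)]
print(cas_insert_if_not_exists(replicas, ballot=1, value="V1"))  # True
print(cas_insert_if_not_exists(replicas, ballot=2, value="V2"))  # False
```

The second call fails in round 2 because the CAS condition ("IF NOT
EXISTS") no longer holds, which is exactly the account-registration use
case: two clients racing to create the same account cannot both succeed.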
> On Tue, Aug 25, 2015 at 1:54 PM, ibrahim El-sanosi <
> ibrahimsaba...@gmail.com> wrote:
What an excellent explanation! Thank you a lot.
By the way, I do not understand why lightweight transactions in Cassandra
have a commit/acknowledgment round-trip.
It seems to me we could commit the value within the propose/accept phase.
Do you agree? If not, can you explain why we need the commit/acknowledgment
round?
Hi folks,
To achieve linearizable consistency in Cassandra, four round-trips must be
performed:
1. Prepare/promise
2. Read/result
3. Propose/accept
4. Commit/acknowledgment
In the last phase of the Paxos protocol (in the original paper), there is a
decide phase only,
Yes, Sylvain, your answer makes more sense. This phase of the Paxos
protocol is sometimes called the learning or decide phase, BUT it does not
have an acknowledgment round, just a learn/decide message from the proposer
to the learners. So why do we need an acknowledgment round with the commit
phase in lightweight transactions?
Thank you a lot.
Ibrahim
On Wed, Aug 26, 2015 at 12:15 PM, Sylvain Lebresne
wrote:
> Yes
>
> On Wed, Aug 26, 2015 at 1:05 PM, ibrahim El-sanosi <
> ibrahimsaba...@gmail.com> wrote:
>
>> OK. I see what the purpose of the acknowledgment round is here. So
>> acknow
you.
Regards,
Ibrahim
Dear folks,
When we hear about the notion of last-write-wins in Cassandra according to
timestamp, who generates this timestamp during the write: the coordinator,
or each individual replica on which the write is going to be stored?
Regards,
Ibrahim
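Whoever supplies the timestamp, the last-write-wins rule itself is
mechanical: each cell keeps the value with the highest timestamp. A minimal
sketch of that reconciliation (a toy model, not Cassandra's storage engine;
the tie-break by value comparison mirrors what Cassandra does, to the best
of my understanding):

```python
# Toy last-write-wins (LWW) cell: stores (value, timestamp) and keeps
# the write with the highest timestamp, breaking ties by value.

class LWWCell:
    def __init__(self):
        self.value = None
        self.timestamp = -1

    def write(self, value, timestamp):
        # Apply only if strictly newer, or equal timestamp with larger value.
        if (timestamp, value) > (self.timestamp, self.value or ""):
            self.value, self.timestamp = value, timestamp

    def read(self):
        return self.value


replica = LWWCell()
replica.write("V1", timestamp=200)   # earlier arrival, newer timestamp
replica.write("V2", timestamp=100)   # later arrival, older timestamp
print(replica.read())                # V1: timestamp order, not arrival order
```

Note that arrival order is irrelevant; only the timestamp (and, on a tie,
the value) decides the winner, which is why who generates the timestamp
matters so much.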
OK, but why wouldn't the coordinator generate the timestamp, since the
write is part of the Cassandra process after the client submits the request
to Cassandra?
On Fri, Sep 4, 2015 at 6:29 PM, Andrey Ilinykh wrote:
> Your application.
>
> On Fri, Sep 4, 2015 at 10:26 AM, ibrahim El-sanosi <
> ibrahimsa
In this case, assume we have a 4-node cluster N1, N2, N3, and N4, and the
replication factor is 3. Client C1 sends W1 = [K1, V1] to N1 (the
coordinator). The coordinator (N1) generates the timestamp Mon 05-09-2015
11:30:40,200 (according to its local clock), assigns it to W1, and sends
W1 to N2, N3,
Hi folks,
Assume we have a 4-node cluster N1, N2, N3, and N4, and the replication
factor is 3. With write CL = ALL and read CL = ONE:
Client C1 sends W1 = [K1, V1] to N1 (the coordinator). The coordinator (N1)
generates the timestamp Mon 05-09-2015 11:30:40,200 (according to its local
clock) and assigns it to W
Or could you refer me to any related article?
>
> Thank you
>
> Ibrahim
>
> On Sun, Sep 6, 2015 at 1:00 PM, Laing, Michael
> wrote:
>
> I think I saw this before.
>
> Clocks must be synchronized.
>
> On Sun, Sep 6, 2015 at 7:28 AM, ibrahim El-sanosi <
> ibrahimsaba...@gmail.com> wrote:
> "Clocks should be synchronized" includes
> Cassandra nodes AND clients.
>
> NTP is the way to go.
>
> On 6 Sept 2015 at 14:56, Laing, Michael
> wrote:
>
> https://en.wikipedia.org/wiki/Network_Time_Protocol
>
> On Sun, Sep 6, 2015 at 8:23 AM, ibrahim El
Jeff Jirsa wrote:
> In the cases where NTP and client timestamps with microsecond resolution
> are insufficient, LWT ("IF EXISTS" / "IF NOT EXISTS") is generally used.
>
>
> From: ibrahim El-sanosi
> Reply-To: "user@cassandra.apache.org"
> Date: Sunday, September
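Jeff's point, that LWT sidesteps timestamp races by making the condition
check and the write atomic, can be sketched as a toy. The names below are
invented; real LWT goes through the Paxos rounds discussed earlier in this
thread rather than a local lock.

```python
# Toy compare-and-set with the semantics of CQL's "INSERT ... IF NOT
# EXISTS". Unlike plain last-write-wins, the condition is checked and the
# write applied atomically, so a racing conflicting write cannot be
# silently dropped by a timestamp tie. Invented names, for illustration.

import threading

class CasTable:
    def __init__(self):
        self._rows = {}
        self._lock = threading.Lock()   # stands in for the Paxos rounds

    def insert_if_not_exists(self, key, value):
        """Return (applied, existing_value), like a CQL LWT result row."""
        with self._lock:
            if key in self._rows:
                return False, self._rows[key]
            self._rows[key] = value
            return True, None

accounts = CasTable()
print(accounts.insert_if_not_exists("ibrahim", "account-1"))  # (True, None)
print(accounts.insert_if_not_exists("ibrahim", "account-2"))  # (False, 'account-1')
```

In CQL this corresponds to INSERT ... IF NOT EXISTS, whose result row
contains an [applied] column telling the client whether the conditional
write took effect.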
> There are no vector clocks to allow you to manage
> the conflict on your own at this point.
>
>
> From: ibrahim El-sanosi
> Reply-To: "user@cassandra.apache.org"
> Date: Sunday, September 6, 2015 at 11:28 AM
>
> To: "user@cassandra.apache.org"
> S
"If you need strong consistency and don't mind a lower transaction rate,
you're better off with base"
Could you explain more about how this statement relates to my post?
Regards,
> In practice this is a very rare valid use case, as
> clusters doing several hundred thousand transactions per second (not
> uncommon) would find that the "last timestamp" is hopelessly wrong every
> time, at best an approximation, no matter the database technology.
>
>
Yes, thank you a lot.
On Tue, Sep 8, 2015 at 5:25 PM, Tyler Hobbs wrote:
>
> On Sat, Sep 5, 2015 at 8:32 AM, ibrahim El-sanosi <
> ibrahimsaba...@gmail.com> wrote:
>
>> So in this scenario, the latest data written to the replicas is [K1,
>> V2], which should be the