Subject: Re: design principle to manage roll back
As one option, you can use a (logged) batch for "kind of" atomic mutations.
I say "kind of" because a logged batch is not truly atomic when its mutations
span multiple partitions. More specifically, the mutations are first written
to a batchlog, and Cassandra then guarantees that they will eventually be
applied to all the affected partitions; however, there is no isolation while
they are being applied, and no rollback once the batch has been accepted.
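To make the "kind of atomic" behaviour concrete, here is a minimal logged
batch in CQL. The keyspace and table names are made up for illustration;
because the two inserts target different partitions (different tables), this
batch goes through the batchlog and is only eventually atomic, not isolated:

```sql
-- Hypothetical denormalized pair of tables kept in sync with one batch.
BEGIN BATCH
  INSERT INTO shop.orders (order_id, item) VALUES (1, 'widget');
  INSERT INTO shop.orders_by_user (user_id, order_id) VALUES (42, 1);
APPLY BATCH;
```

Readers can observe one insert before the other lands; the batchlog only
guarantees that both will eventually be applied.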
> From: onmstester onmstester
> Sent: 14 July 2020 08:04
> To: user<mailto:user@cassandra.apache.org>
> Subject: Re: design principle to manage roll back
>
> Hi,
>
> I think that Cassandra alone is not suitable for your use case. You can use a
> mix of Distributed/NoSQL (to store single records of whatever makes your
> input the big data) & Relational/Single Database (for the transactional,
> non-big-data part).
>
> Sent using https://www.zoho.com/mail/
Hi,

What are the design approaches I can follow to ensure that data is consistent
from an application perspective (not from an individual table's perspective)?
I am thinking of issues which arise due to the unavailability of rollback or
atomic transactions across tables in Cassandra. Is Cassandra not suitable for
such use cases?