@Dorian, yes I did that by mistake. I rectified it by starting a new thread.
Thanks and regards,
-- Indranil Basu
From: Dorian Hoxha
To: user@cassandra.apache.org; INDRANIL BASU
Sent: Monday, 3 October 2016 11:07 PM
Subject: Re: Way to write to dc1 but keep data only in dc2
Dorian, I don't think Cassandra is able to achieve what you want natively.
In short, what you want to achieve is conditional data replication.
Yabin
On Mon, Oct 3, 2016 at 1:37 PM, Dorian Hoxha wrote:
@INDRANIL
Please go find your own thread and don't hijack mine.
On Mon, Oct 3, 2016 at 6:19 PM, INDRANIL BASU wrote:
Hello All,
I am getting the below error repeatedly in the system log of C* 2.1.0
WARN [SharedPool-Worker-64] 2016-09-27 00:43:35,835 SliceQueryFilter.java:236
- Read 0 live and 1923 tombstoned cells in test_schema.test_cf.test_cf_col1_idx
(see tombstone_warn_threshold). 5000 columns was requested.
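For anyone hitting the same warning: it means a single read scanned 1923
tombstones while returning 0 live cells from the secondary-index table,
which crossed tombstone_warn_threshold. Both thresholds live in
cassandra.yaml; a minimal sketch with the 2.1 defaults (tune these for
your own cluster):

    # cassandra.yaml (Cassandra 2.1 defaults)
    # Log a warning when a single read scans more tombstones than this:
    tombstone_warn_threshold: 1000
    # Abort the read with TombstoneOverwhelmingException past this:
    tombstone_failure_threshold: 100000

Raising the thresholds only hides the symptom; the durable fix is to
reduce the tombstone buildup itself (delete/TTL patterns and
gc_grace_seconds on the indexed table).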
Thanks for the explanation Eric.
I would think of it as something like:
The keyspace will be on dc1 + dc2, with the option that no long-term
data is kept in dc1. So you write to dc1 (to the right nodes), they
write to the commit-log/memtable, and once they have pushed the
inter-dc replication to dc2, dc1 then deletes the local data.
It sounds like you're trying to avoid the latency of waiting for a write
confirmation to a remote data center?
App ==> DC1 ==high-latency==> DC2
If you need the write to be confirmed before you consider it successful
in your application (definitely recommended unless you're ok with losing
writes), then you can't avoid waiting on the round trip to DC2.
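To make that concrete, a hedged cqlsh sketch (keyspace and table names
are made up here), assuming the keyspace keeps all replicas in DC2: any
consistency level that confirms the write has to be satisfied by DC2
replicas, so the acknowledgement crosses the high-latency link.

    -- One replica must ack, and every replica lives in dc2:
    cqlsh> CONSISTENCY ONE
    cqlsh> INSERT INTO ntskeyspace.events (id, payload) VALUES (uuid(), 'x');

    -- dc1 holds no replicas of this keyspace, so a "local" level
    -- cannot be met and the write fails with UnavailableException:
    cqlsh> CONSISTENCY LOCAL_ONE
    cqlsh> INSERT INTO ntskeyspace.events (id, payload) VALUES (uuid(), 'x');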
Thanks Edward. Looks like what I really wanted (some kind of quorum
write, for example) isn't possible.
Note that the queue is ordered, but I just need the writes to eventually
happen, with more consistency than ANY (2 nodes or more).
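For reference, Cassandra does have fixed-count write levels between ANY
and QUORUM. A minimal sketch (same hypothetical table as above):

    -- Acked only after two actual replicas (all in dc2 here) have
    -- durably written the mutation; a coordinator hint is not enough:
    cqlsh> CONSISTENCY TWO
    cqlsh> INSERT INTO ntskeyspace.events (id, payload) VALUES (uuid(), 'y');

Unlike ANY, which can be satisfied by a hint stored on the coordinator
when replicas are down, TWO (and THREE) require acknowledgements from
that many live replicas.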
On Fri, Sep 30, 2016 at 12:25 AM, Edward Capriolo wrote:
You can do something like this, though your use of terminology like
"queue" really does not apply.
You can setup your keyspace with replication in only one data center.
CREATE KEYSPACE NTSkeyspace WITH REPLICATION = { 'class' :
'NetworkTopologyStrategy', 'dc2' : 3 };
This will make the NTSkeyspace exist only in dc2.
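To round out the sketch, a hedged example (the table and its columns
are illustrative, not from Edward's mail):

    CREATE TABLE NTSkeyspace.events (
        id      uuid PRIMARY KEY,
        payload text
    );

    -- Connected to a dc1 node: the dc1 coordinator forwards every
    -- write on this keyspace to the dc2 replicas.
    CONSISTENCY ANY
    INSERT INTO NTSkeyspace.events (id, payload) VALUES (uuid(), 'data');

One caveat: CONSISTENCY ANY falls back to a coordinator-local hint only
when all dc2 replicas are unreachable; in the normal case the ack still
waits for a dc2 replica, so even this does not give a purely local
acknowledgement.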
I have dc1 and dc2.
I want to keep a keyspace only on dc2.
But I only have my app on dc1.
And I want to write to dc1 (lower latency) which will not keep data locally
but just push it to dc2.
While reading will only work for dc2.
Since my app is mostly writes, it should be faster while not having to
keep any data in dc1.