Thank you for your reply.
- Repairs are not running on the cluster; in fact we've been "slacking"
when it comes to repair, mainly because we never manually delete our data
(it's always TTLed) and we haven't had major failures or outages that
required repairing data (I know that's not a good reason).
>
>
> Unfortunately, these numbers still don't match at all.
>
> And yes, the cluster is in a single DC and since I am using the EC2
> snitch, replicas are AZ aware.
>
>
Are repairs running on the cluster?
Other thoughts:
- is internode_compression set to 'all' in cassandra.yaml? (It should be
'all'; a quick way to check is sketched below.)
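For reference, a minimal check of that setting, assuming PyYAML is available
and that cassandra.yaml lives at the usual package path (adjust CONF_PATH for
your layout, e.g. on the DataStax AMI):

    # Print the internode_compression setting from cassandra.yaml (sketch).
    import yaml

    CONF_PATH = "/etc/cassandra/cassandra.yaml"  # assumed location, adjust as needed

    with open(CONF_PATH) as f:
        conf = yaml.safe_load(f)

    # 'all' compresses every inter-node link, 'dc' only cross-DC links, 'none' disables it.
    print("internode_compression:", conf.get("internode_compression", "<not set>"))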
I understand your point about the billing, but billing here was merely
the triggering factor that had me start analyzing the traffic in the first
place.
At the moment, I'm not considering the numbers on my bill anymore but
simply the numbers that I am measuring with iftop on each node of the
cluster.
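For completeness, a minimal sketch of sampling the raw per-interface byte
counters from /proc/net/dev as an independent cross-check on iftop (Linux
only; the interface name is an assumption and may differ on your instances):

    # Sample rx/tx byte counters over a fixed window (sketch).
    import time

    IFACE = "eth0"  # hypothetical interface name

    def bytes_total(iface):
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])  # rx_bytes, tx_bytes
        raise ValueError("interface %s not found" % iface)

    rx0, tx0 = bytes_total(IFACE)
    time.sleep(60)  # measurement window in seconds
    rx1, tx1 = bytes_total(IFACE)
    print("rx %.1f MB, tx %.1f MB over 60 s" % ((rx1 - rx0) / 1e6, (tx1 - tx0) / 1e6))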
Hmm. From the AWS FAQ:
*Q: If I have two instances in different availability zones, how will I be
charged for regional data transfer?*
Each instance is charged for its data in and data out. Therefore, if data
is transferred between these two instances, it is charged out for the first
instance and in for the second instance.
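So the same bytes are effectively billed twice when they cross an AZ boundary.
A back-of-the-envelope sketch (the per-GB price is an assumption; check the
current AWS pricing page):

    # Rough cost of cross-AZ traffic, counting both the "out" and the "in" side (sketch).
    GB_CROSSING_AZ = 100.0             # hypothetical volume actually sent between AZs
    PRICE_PER_GB_PER_DIRECTION = 0.01  # assumed $/GB for regional data transfer

    billed_gb = GB_CROSSING_AZ * 2     # charged out on the sender and in on the receiver
    cost = billed_gb * PRICE_PER_GB_PER_DIRECTION
    print("%.1f GB across AZs -> %.1f GB billed, ~$%.2f" % (GB_CROSSING_AZ, billed_gb, cost))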
It is indeed very intriguing and I really hope to learn more from the
experience of this mailing list. To address your points:
- The theory that full data is coming from replicas during reads is not
enough to explain the situation. In my scenario, over a time window I had
17.5 GB of inter-node activity.
Intriguing. It's enough data to look like full data is coming from the
replicas instead of digests when the read of the copy occurs. Are you
doing backup/dr? Are directories copied regularly and over the network or ?
Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8
Thank you for your reply.
To answer your points:
- I fully agree on the write volume; in fact my isolated tests confirm
your estimation.
- About the reads, I agree as well, but the volume of data is still much
higher.
- I am writing to one single keyspace with RF 3; there's just one keyspace.
If you read & write at QUORUM, then you write 3 copies of the data and then
return to the caller; when reading, you read one full copy (assume it is not
on the coordinator) and 1 digest (because a quorum read is 2 replicas, not 3).
When you insert, how many keyspaces get written to? (Are you using e.g.
inverted indices?)
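As a rough sketch of what that path implies on the wire per operation at RF 3
and QUORUM (worst case where the coordinator holds no replica of the key; row
and digest sizes are assumptions, and compression, hints and read repair are
ignored):

    # Estimate inter-node bytes per write and per QUORUM read at RF=3 (sketch).
    ROW_SIZE = 1000      # hypothetical average mutation/row size in bytes
    DIGEST_SIZE = 16     # a digest response is roughly a hash, not the full row

    def write_bytes(rf=3, row=ROW_SIZE):
        # The coordinator forwards the mutation to every replica, regardless of CL.
        return rf * row

    def quorum_read_bytes(row=ROW_SIZE, digest=DIGEST_SIZE):
        # One replica returns the full row, a second returns only a digest.
        return row + digest

    print("per write:", write_bytes(), "bytes of replica traffic")
    print("per read :", quorum_read_bytes(), "bytes of replica traffic")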
Hello,
We have a Cassandra 2.1.9 cluster on EC2 for one of our live applications.
There's a total of 21 nodes across 3 AWS availability zones, c3.2xlarge
instances.
The configuration is pretty standard: we use the default settings that come
with the DataStax AMI and the driver in our application.
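As an illustration of what those defaults look like, a minimal connection
sketch with the DataStax Python driver (purely illustrative; the contact
points and keyspace name below are placeholders):

    # Connect with default driver settings (sketch; assumes cassandra-driver is installed).
    from cassandra.cluster import Cluster

    # With no policies specified, the driver falls back to its defaults
    # (token-aware, DC-aware round-robin load balancing in recent versions).
    cluster = Cluster(contact_points=["10.0.0.1", "10.0.1.1", "10.0.2.1"])  # placeholder seeds
    session = cluster.connect("my_keyspace")  # placeholder keyspace
    print(cluster.metadata.cluster_name)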