Hi,
I have been using lightweight transactions for several months now and am
wondering what the benefit of the LOCAL_SERIAL serial consistency level is.
With SERIAL, it achieves global linearizability, but with LOCAL_SERIAL it
only achieves DC-local linearizability, which is missing the point of l...
The reason you don't want to use SERIAL in multi-DC clusters is the
prohibitive cost of lightweight transactions (in terms of latency),
especially if your data centers are separated by continents. A ping from
London to New York takes 52 ms just from the speed of light in optical
fiber. Since lightweight trans...
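(For reference, the serial consistency level is chosen per statement on the client side. A minimal sketch with the DataStax Java driver 3.x, assuming an existing Session; the keyspace, table, and column names are invented for illustration:)

    // classes are from com.datastax.driver.core
    Statement stmt = new SimpleStatement(
            "UPDATE accounts SET owner = ? WHERE id = ? IF owner = ?",
            "alice", 42, "bob");                              // the IF clause makes this an LWT

    stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);  // non-Paxos (commit/read) phase
    // SERIAL       -> Paxos spans replicas in all DCs: global linearizability, cross-DC round trips
    // LOCAL_SERIAL -> Paxos stays in the local DC: DC-local linearizability, lower latency
    stmt.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL);

    boolean applied = session.execute(stmt).wasApplied();     // the [applied] column of the LWT result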
On Wed, Dec 7, 2016 at 8:25 AM, DuyHai Doan wrote:
> The reason you don't want to use SERIAL in multi-DC clusters is the
> prohibitive cost of lightweight transactions (in terms of latency),
> especially if your data centers are separated by continents. A ping from
> London to New York takes 52 ms j...
On Tue, Dec 6, 2016 at 9:54 AM, Aleksandr Ivanov wrote:
> I'm trying to decommission one C* node from a 6-node cluster and see that
> outbound network traffic on this node doesn't go over ~30 Mb/s.
> Looks like it is throttled somewhere in C*.
Do you use compression? Try taking a thread dump and se
Maybe your system cannot stream faster. Is your CPU or HDD/SSD fully
utilized?
On 07.12.2016 16:07, "Eric Evans" wrote:
> On Tue, Dec 6, 2016 at 9:54 AM, Aleksandr Ivanov wrote:
> > I'm trying to decommission one C* node from a 6-node cluster and see that
> > outbound network traffic on this nod...
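(One thing worth ruling out, just a guess: streaming for decommission is capped by stream_throughput_outbound_megabits_per_sec in cassandra.yaml, 200 Mbit/s by default, and the cap can also be changed at runtime on the streaming node:)

    # cassandra.yaml -- default shown; the value is in megabits per second
    stream_throughput_outbound_megabits_per_sec: 200

    # or adjust it on the fly, e.g. raise the cap to 400 Mbit/s
    nodetool setstreamthroughput 400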
Should've mentioned - running 3.9. Also - please do not recommend MVs: I
tried, they're broken, we punted.
On Wed, Dec 7, 2016 at 10:06 AM, Voytek Jarnot
wrote:
> The low default value for batch_size_warn_threshold_in_kb is making me
> wonder if I'm perhaps approaching the problem of atomicity
Could you please be more specific?
On 07.12.2016 17:10, "Voytek Jarnot" wrote:
> Should've mentioned - running 3.9. Also - please do not recommend MVs: I
> tried, they're broken, we punted.
>
> On Wed, Dec 7, 2016 at 10:06 AM, Voytek Jarnot
> wrote:
>
>> The low default value for batch_size_w
The low default value for batch_size_warn_threshold_in_kb is making me
wonder if I'm perhaps approaching the problem of atomicity in a non-ideal
fashion.
With one data set duplicated/denormalized into 5 tables to support queries,
we use batches to ensure inserts make it to all of the tables or to none. This w...
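(For reference, a multi-table logged batch of the kind described above might look like this; the table and column names are invented. A logged batch goes through the batchlog, so either all of the inserts are eventually applied or none are, but it does not give isolation: readers may briefly see some tables updated before the others.)

    BEGIN BATCH
      INSERT INTO events_by_id   (id, ts, payload)       VALUES (123, '2016-12-07 10:00:00', 'data');
      INSERT INTO events_by_day  (day, ts, id, payload)  VALUES ('2016-12-07', '2016-12-07 10:00:00', 123, 'data');
      INSERT INTO events_by_user (user, ts, id, payload) VALUES ('alice', '2016-12-07 10:00:00', 123, 'data');
      -- ...and so on for the remaining denormalized tables
    APPLY BATCH;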
Sure, about which part?
The default batch size warning threshold is 5 KB.
I've increased it to 30 KB, and will need to increase it to 40 KB (8x the
default) to avoid WARN log messages about batch sizes. I do realize it's
just a WARNing, but I may as well avoid those if I can configure them away.
That said, having to i...
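(For reference, the relevant knobs in cassandra.yaml; the warn threshold produces the log message, and 3.x also has a hard fail threshold above which batches are rejected outright:)

    # cassandra.yaml -- 3.x defaults shown
    batch_size_warn_threshold_in_kb: 5     # log a WARN for batches larger than this
    batch_size_fail_threshold_in_kb: 50    # reject batches larger than this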
I meant the MV thing.
On 07.12.2016 17:27, "Voytek Jarnot" wrote:
> Sure, about which part?
>
> default batch size warning is 5kb
> I've increased it to 30kb, and will need to increase to 40kb (8x default
> setting) to avoid WARN log messages about batch sizes. I do realize it's
> just a WARNin
It's been about a month since I gave up on it, but it was very much related
to the stuff you're dealing with... basically Cassandra just stepping on its
own, er, tripping over its own feet streaming MVs.
On Dec 7, 2016 10:45 AM, "Benjamin Roth" wrote:
> I meant the MV thing.
>
> On 07.12.2016...
OK, thanks. I'm investigating a lot. There will be some improvements coming,
but I can't promise they will solve all existing problems. We will see and
keep working on it.
On 07.12.2016 17:58, "Voytek Jarnot" wrote:
> It's been about a month since I gave up on it, but it was very much related
> to the...
I have been circling around a thought process about batches. Now that
Cassandra has aggregate functions, it might be possible to write a type of
record that has an END_OF_BATCH-type marker so the data can be suppressed
from view until it is all there.
I.e., you write something like a checksum record...
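(A very rough, purely hypothetical sketch of that idea, with invented table and column names: the writer inserts the data rows first and the marker row last, and readers suppress data whose batch has no marker yet.)

    CREATE TABLE batch_markers (
        batch_id  timeuuid PRIMARY KEY,
        row_count int,       -- how many data rows the writer intended to land
        checksum  bigint     -- e.g. a checksum over the written keys
    );

    -- writer: INSERT the data rows (each tagged with batch_id), then INSERT the marker row last
    -- reader: ignore rows whose batch_id is absent from batch_markers,
    --         or whose observed row_count / checksum doesn't match yet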
@Ed, what you just said reminded me a lot of RAMP transactions. I did a
blog post on it here: http://rustyrazorblade.com/2015/11/ramp-made-easy/
I've been considering doing a follow-up on how to do a Cassandra data model
enabling RAMP transactions, but that takes time, and I have almost zero of
t...
Hi Voytek,
I think the way you are using it is definitely the canonical way.
Unfortunately, as you learned, there are some gotchas. We tried
substantially increasing the batch size and it worked for a while, until we
reached new scale, and we increased it again, and so forth. It works, but
soon you
Appreciate the long writeup, Cody.
Yeah, we're good with temporary inconsistency (thankfully) as well. I'm
going to try to ride the batch train and hope it doesn't derail - our load
is fairly static (or, more precisely, increase in load is fairly slow and
can be projected).
Enjoyed your two-phase
There is a disconnect between write.3 and write.4, but it can only affect
performance, not consistency. The presence or absence of a row's txnUUID in
the IncompleteTransactions table is the ultimate source of truth, and rows
whose txnUUID is not null will be checked against that truth in the read...
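(Trying to restate that scheme as a sketch; the exact schema is assumed here rather than taken from the original write-up.)

    CREATE TABLE incomplete_transactions (
        txn_uuid uuid PRIMARY KEY
    );

    -- every data row carries the txn_uuid that wrote it (null once finalized)
    -- write path: register txn_uuid in incomplete_transactions, write the data rows,
    --             then delete the txn_uuid row to mark the transaction complete
    -- read path:  row.txn_uuid is null                      -> committed, return the row
    --             txn_uuid still in incomplete_transactions -> in flight, skip or repair
    --             txn_uuid no longer there                  -> complete, return it (and clear txn_uuid)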
I have a couple of SSTables that are humongous:
-rw-r--r-- 1 user group 138933736915 Dec 1 03:41 lb-29677471-big-Data.db
-rw-r--r-- 1 user group  78444316655 Dec 1 03:58 lb-29677495-big-Data.db
-rw-r--r-- 1 user group 212429252597 Dec 1 08:20 lb-29678145-big-Data.db
sstablemetadata reports that...
This can happen as part of node bootstrap, repair, or node rebuild.
From: Sotirios Delimanolis
Sent: Wednesday, December 7, 2016 4:35:45 PM
To: User
Subject: Huge files in level 1 and level 0 of LeveledCompactionStrategy
I have a couple of SSTables that are humongo
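(For what it's worth, the level a given SSTable sits in can be checked directly with sstablemetadata; depending on the install it lives under tools/bin or is already on the PATH, and the output format varies a little between versions.)

    # run against one of the big files and look for the "SSTable Level" line
    sstablemetadata lb-29677471-big-Data.db | grep -i level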
We haven't done any of those recently, on any nodes in this cluster. Would a
major compaction through 'nodetool compact' cause this? (I think I may have
done one of those.)
On Wednesday, December 7, 2016 4:40 PM, Harikrishnan Pillai
wrote:
What is the CQL data type I should use for long? I have to create a column
with long data type. Cassandra version is 2.0.10.
CREATE TABLE storage (
key text,
clientid int,
deviceid long, // this is wrong I guess as I don't see long in CQL?
PRIMARY KEY (topic, partition)
use `bigint` for long.
Regards,
Varun Barala
On Thu, Dec 8, 2016 at 10:32 AM, Check Peck wrote:
> What is the CQL data type I should use for long? I have to create a column
> with long data type. Cassandra version is 2.0.10.
>
> CREATE TABLE storage (
> key text,
> clientid in
And then from the DataStax Java driver, I can use the following. Am I right?
To read:
row.getLong();
To write:
boundStatement.setLong();
On Wed, Dec 7, 2016 at 6:50 PM, Varun Barala
wrote:
> use `bigint` for long.
>
>
> Regards,
> Varun Barala
>
> On Thu, Dec 8, 2016 at 10:32 AM, Check Peck
> wrote:
>
>> What
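(To confirm: yes, a CQL bigint maps to a Java long in the driver. A minimal sketch, assuming a corrected schema and an existing Session; classes are from com.datastax.driver.core:)

    // assumed corrected schema:
    //   CREATE TABLE storage (key text, clientid int, deviceid bigint, PRIMARY KEY (key, clientid));

    PreparedStatement ps = session.prepare(
            "INSERT INTO storage (key, clientid, deviceid) VALUES (?, ?, ?)");

    // write: setLong() for the bigint column
    BoundStatement bound = ps.bind()
            .setString("key", "k1")
            .setInt("clientid", 7)
            .setLong("deviceid", 1234567890123L);
    session.execute(bound);

    // read: getLong() for the bigint column
    Row row = session.execute(
            "SELECT deviceid FROM storage WHERE key = 'k1' AND clientid = 7").one();
    long deviceId = row.getLong("deviceid");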
Hi DuyHai,
Thank you for the comments.
Yes, that's exactly what I mean.
(Your comment is very helpful in supporting my opinion.)
As you said, SERIAL with multiple DCs incurs a latency increase,
but it's a trade-off between latency and high availability, because a whole
DC can go down in a disaster.
I don't...