If it’s a legacy write table, why does it write 10% of the time? Maybe it’s the
design of the big legacy table you mentioned. It could be so many things.
Is it the same time of day?
Same days of the week or month?
Are there analytics run at that time?
What are you using for monitoring?
We migrated one of the applications from on-prem to AWS; the queries are very
light, more like registration info.
Queries from the new app are via a primary key of data type “text”, with no
clustering columns (this table has about 200 rows); however, the legacy table
(more like a reference table) has several million rows, about 80
Do you have that many queries? You could just review them and your data model
to see if there was an error of some kind. How long has it been happening?
What changed since it started happening?
Kenneth Brotman
From: Subroto Barua [mailto:sbarua...@yahoo.com.INVALID]
Sent: Friday, February
Vnodes are set to 256.
C*: 3.0.15 on m4.4xlarge with gp2 volumes.
There are 2 more DCs on bare metal (RAID 10 and older machines) attached to
this cluster, and we have not seen this behavior on the on-prem servers.
If this event is triggered by some bad query/queries, what is the best way to
trap it?
Subroto
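One way to trap this on a 3.0.x cluster (which predates the slow-query log added in 3.10) is probabilistic tracing combined with hot-partition sampling. A sketch of the relevant commands, assuming you can tolerate a small tracing overhead and that `my_keyspace`/`my_table` stand in for your real keyspace and table:

```shell
# Sample ~0.1% of requests into system_traces (keep this very low in production)
nodetool settraceprobability 0.001

# While a spike window is live, sample the hottest partitions for 10 seconds
# (if your build includes the toppartitions command)
nodetool toppartitions my_keyspace my_table 10000

# Afterwards, look for long-running sessions in cqlsh:
#   SELECT session_id, duration, request, started_at
#   FROM system_traces.sessions LIMIT 50;

# Turn tracing off again when done
nodetool settraceprobability 0
```

Leaving the trace probability elevated for a few of the daily spike windows should capture at least some of the offending queries in `system_traces`.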
If you had a query that went across the partitions and especially if you had
vNodes set high, that would do it.
Kenneth Brotman
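To make that concrete: with 256 vnodes, any query that is not restricted to a single partition fans out across many token ranges on every node. A CQL sketch of the difference (table and column names are hypothetical):

```sql
-- Cheap: hits exactly one partition, located via the partition key
SELECT * FROM my_keyspace.my_table WHERE id = 'abc123';

-- Expensive: no partition key restriction, so the coordinator must scan
-- every token range; with 256 vnodes that is 256 ranges per node
SELECT * FROM my_keyspace.my_table WHERE status = 'active' ALLOW FILTERING;
```

A few of the second kind firing at once would produce exactly the short, cluster-wide read spikes described below.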
From: Subroto Barua [mailto:sbarua...@yahoo.com.INVALID]
Sent: Friday, February 01, 2019 8:45 AM
To: user@cassandra.apache.org
Subject: Help with sudden spike in
In our production cluster, we observed a sudden spike (over 160 MB/s) in read
requests on *all* Cassandra nodes for a very short period (less than a minute);
this event happens a few times a day.
I am not able to get to the bottom of this issue; nothing interesting in
system.log or at the app level; repa
Those aren’t the project docs, they’re DataStax’s docs, but that line makes no
sense.
I assume they meant that once a column reaches its TTL it is treated as a
tombstone. That’s per column and not the entire table.
--
Jeff Jirsa
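A CQL sketch of that per-column behavior (keyspace, table, and column names are hypothetical, purely for illustration):

```sql
-- Hypothetical table for illustration
CREATE TABLE ks.users (id text PRIMARY KEY, name text, session_token text);

-- TTL applies per written cell, not per table:
INSERT INTO ks.users (id, name) VALUES ('u1', 'alice');
UPDATE ks.users USING TTL 60 SET session_token = 'xyz' WHERE id = 'u1';

-- After 60 seconds only session_token expires (its cell is treated as a
-- tombstone and purged at compaction); id and name are untouched.
SELECT name, session_token FROM ks.users WHERE id = 'u1';
```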
> On Feb 1, 2019, at 1:47 AM, Enrico Cavallin wrote:
>
> Hi
> On Feb 1, 2019, at 1:51 AM, Troels Arvin wrote:
>
> Hello,
>
> I think I understand why one needs to regularly run "nodetool repair" on
> normal Cassandra installations with more than one node.
For people following along at home, the reason is to make sure tombstones make
it to all hosts.
Hello,
I think I understand why one needs to regularly run "nodetool repair" on
normal Cassandra installations with more than one node.
But am I right about the following?
In single-node Cassandra installations, it is irrelevant to run
"nodetool repair" cron jobs.
--
Regards,
Troels Arvin
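On a multi-node cluster, the cron jobs in question typically look something like the sketch below (the schedule and keyspace are placeholders). On a single node with no replicas, there is no other copy to reconcile, so anti-entropy repair has nothing to do:

```shell
# Hypothetical weekly repair of the primary ranges for one keyspace,
# scheduled to complete well within gc_grace_seconds (default 10 days)
0 3 * * 0  nodetool repair -pr my_keyspace >> /var/log/cassandra/repair.log 2>&1
```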
Hi all,
I cannot understand what this statement means:
<>
in https://docs.datastax.com/en/dse/6.7/cql/cql/cql_using/useExpire.html
I have already done some tests with TTL set on columns, on rows, and as a
default on the table, and all seems in line with the logic: an already-written
row/value maintains its TTL
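That observed behavior can be modeled with a toy sketch (plain Python, not Cassandra code, with made-up names): a cell written with an explicit TTL keeps it, a cell written without one picks up the table’s default at write time, and changing the table default later never rewrites existing cells:

```python
import time


class Table:
    """Toy model of per-cell TTL semantics (illustration only)."""

    def __init__(self, default_ttl=None):
        self.default_ttl = default_ttl
        self.cells = {}  # (row, column) -> (value, expiry timestamp or None)

    def write(self, row, column, value, ttl=None, now=None):
        now = time.time() if now is None else now
        # The effective TTL is fixed at write time: explicit TTL wins,
        # otherwise the table default (if any) is baked into the cell.
        effective = ttl if ttl is not None else self.default_ttl
        expiry = now + effective if effective is not None else None
        self.cells[(row, column)] = (value, expiry)

    def read(self, row, column, now=None):
        now = time.time() if now is None else now
        value, expiry = self.cells.get((row, column), (None, None))
        if expiry is not None and now >= expiry:
            return None  # expired cell: behaves like a tombstone
        return value


t = Table(default_ttl=100)
t.write('u1', 'name', 'alice', now=0)          # picks up table default (100s)
t.write('u1', 'token', 'xyz', ttl=10, now=0)   # explicit per-cell TTL (10s)
t.default_ttl = 500                            # later change: old cells unaffected

print(t.read('u1', 'token', now=50))   # expired -> None
print(t.read('u1', 'name', now=50))    # still live -> alice
print(t.read('u1', 'name', now=150))   # expires at its original 100s -> None
```

The key point the sketch demonstrates is the last one: raising the table default after the fact does not extend the life of cells that were already written.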