Adam:
Thanks!
Very helpful. I will take a look.
James
On Mon, Sep 24, 2018 at 6:59 PM Adam Zegelin wrote:
> Hi James,
>
> Prometheus is the most common monitoring solution for K8s-managed
> applications.
>
> There are a number of options to get Cassandra metrics into Prometheus.
> One of
Hi James,
Prometheus is the most common monitoring solution for K8s-managed
applications.
There are a number of options to get Cassandra metrics into Prometheus.
One of which, shameless plug, is something I've been working on for the
past few months -- cassandra-exporter, a JVM agent that aims to export
Cassandra metrics directly to Prometheus.
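Cassandra also publishes these same metrics over JMX, so one way to see what an exporter has to work with is to query the MBeans directly. Below is a minimal sketch, assuming a node whose JMX port (7199 by default) is reachable on localhost without authentication, using only the standard javax.management API; the MBean and attribute names follow Cassandra's org.apache.cassandra.metrics naming but should be verified against your version:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadLatencyProbe {
    public static void main(String[] args) throws Exception {
        // Default Cassandra JMX endpoint; adjust host/port for your deployment (assumption).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbeans = connector.getMBeanServerConnection();
            // Client read-request latency metric; verify the name on your Cassandra version.
            ObjectName readLatency = new ObjectName(
                    "org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency");
            Object count = mbeans.getAttribute(readLatency, "Count");
            System.out.println("Read requests served so far: " + count);
        }
    }
}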
Hi there,
What are the latest good tools for monitoring open-source Cassandra?
I was used to the DataStax OpsCenter tool, which made all tasks quite easy. Now,
on a new project with open-source Cassandra running on Kubernetes
(containers/Docker) and logs in Splunk, it feels very challenging.
The metrics we want most are read / write
Hi, My app writes 100K rows per second to a C* cluster (30 nodes, running
version 3.11.2). There are 20 threads, each writing 10K statements (the list size in
the code below is 100K) using the async API: for (Statement s : list) {
ResultSetFuture future = session.executeAsync(s); tasks.add(future); }
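A minimal sketch of the async write pattern being described, using the DataStax Java driver 3.x API (Cluster, Session, executeAsync). The contact point, keyspace, table, columns, and the in-flight limit are illustrative assumptions, not taken from the original message; the semaphore is there because firing 100K executeAsync calls without any backpressure can exhaust client and server resources:

import com.datastax.driver.core.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Semaphore;

public class AsyncWriter {
    public static void main(String[] args) throws Exception {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {

            PreparedStatement ps = session.prepare(
                    "INSERT INTO demo_ks.demo_table (id, payload) VALUES (?, ?)");

            // Cap the number of in-flight async writes so the cluster is not overwhelmed.
            Semaphore inFlight = new Semaphore(1024);
            List<ResultSetFuture> tasks = new ArrayList<>();

            for (int i = 0; i < 100_000; i++) {
                inFlight.acquire();
                ResultSetFuture future = session.executeAsync(ps.bind(i, "row-" + i));
                future.addListener(inFlight::release, Runnable::run);
                tasks.add(future);
            }

            // Block until every write has completed (or failed) before shutting down.
            for (ResultSetFuture f : tasks) {
                f.getUninterruptibly();
            }
        }
    }
}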
On Mon, 24 Sep 2018, 13:08 Jeff Jirsa, wrote:
> The data structure used to know if data needs to be streamed (the merkle
> tree) is only granular to - at best - a token, so even with subrange repair
> if a byte is off, it’ll stream the whole partition, including parts of old
> repaired sstables
>
> On Sep 24, 2018, at 3:47 AM, Oleksandr Shulgin
> wrote:
>
>> On Mon, Sep 24, 2018 at 10:50 AM Jeff Jirsa wrote:
>> Do your partitions span time windows?
>
> Yes.
>
The data structure used to know if data needs to be streamed (the merkle tree)
is only granular to - at best - a token, so
On Mon, Sep 24, 2018 at 10:50 AM Jeff Jirsa wrote:
> Do your partitions span time windows?
Yes.
--
Alex
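To make the Merkle-tree point above concrete: each leaf of the tree covers an entire token range, so a mismatch anywhere in that range changes the leaf hash, and the comparison cannot say which row or byte differs -- everything the leaf covers gets streamed. The following is a conceptual sketch only, not Cassandra's actual repair code; the class, method names, and hash choice are assumptions:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;

public class MerkleLeafSketch {

    // Hash every row that falls into one token range into a single leaf digest.
    static byte[] leafHash(List<String> rowsInTokenRange) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (String row : rowsInTokenRange) {
            md.update(row.getBytes(StandardCharsets.UTF_8));
        }
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        List<String> replicaA = List.of("partition1:rowA", "partition1:rowB", "partition2:rowC");
        // Replica B differs by a single byte in one row of one partition.
        List<String> replicaB = List.of("partition1:rowA", "partition1:rowX", "partition2:rowC");

        boolean leavesMatch = MessageDigest.isEqual(leafHash(replicaA), leafHash(replicaB));
        // The mismatch only says "this token range differs", so repair streams
        // all partitions covered by the leaf, not just the changed row.
        System.out.println("Token range needs streaming: " + !leavesMatch);
    }
}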
In both cases:
Do your partitions span time windows? Is there a single partition that exists
in all 800 of those sstables?
--
Jeff Jirsa
> On Sep 24, 2018, at 1:20 AM, Martin Mačura wrote:
>
> Hi,
> I can confirm the same issue in Cassandra 3.11.2.
>
> As an example: a TWCS table that no
Hi,
I can confirm the same issue in Cassandra 3.11.2.
As an example: a TWCS table that normally has 800 SSTables (2 years'
worth of daily windows plus some anticompactions) will peak at
anywhere from 15k to 50k SSTables during a subrange repair.
Regards,
Martin
On Mon, Sep 24, 2018 at 9:34 AM
Hello,
Our setup is as follows:
Apache Cassandra: 3.0.17
Cassandra Reaper: 1.3.0-BETA-20180830
Compaction: {
'class': 'TimeWindowCompactionStrategy',
'compaction_window_size': '30',
'compaction_window_unit': 'DAYS'
}
We have two column families which differ only in the
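For reference, compaction settings like the ones above sit inside a full table definition. A hedged sketch follows, with an assumed keyspace, table, and columns, issued through the same Java driver API used elsewhere in these threads:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CreateTwcsTable {
    public static void main(String[] args) {
        // Contact point, keyspace, table, and columns are illustrative assumptions.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            session.execute(
                "CREATE TABLE IF NOT EXISTS demo_ks.events ("
              + "  sensor_id text, ts timestamp, value double,"
              + "  PRIMARY KEY (sensor_id, ts)"
              + ") WITH compaction = {"
              + "  'class': 'TimeWindowCompactionStrategy',"
              + "  'compaction_window_size': '30',"
              + "  'compaction_window_unit': 'DAYS'"
              + "}");
        }
    }
}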