My clustering column is the time series timestamp, so basically sourceId and
metric type make up the partition key and the timestamp is the clustering key;
the rest of the fields are just values outside of the primary key. Our read
requests are simply: give me the values for a time range of a specific
sourceId, metric.
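For concreteness, a minimal sketch of the kind of schema being described (the
table and column names here are my own placeholders, not the actual schema):

CREATE TABLE metrics_by_source (
    source_id uuid,              -- placeholder for the actual sourceId type
    metric    text,
    ts        timestamp,
    value     double,            -- plus whatever other non-key value columns exist
    PRIMARY KEY ((source_id, metric), ts)
) WITH CLUSTERING ORDER BY (ts DESC);

-- Typical read: all values for one sourceId/metric over a time range
SELECT ts, value
FROM metrics_by_source
WHERE source_id = ? AND metric = ? AND ts >= ? AND ts < ?;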
The obvious conclusion is that the nodes can't keep up, so it would be
interesting to know how often you're issuing the counter updates. Also, how
are the commit log disks performing on the nodes? If you have monitoring in
place, check the IO stats/metrics. And finally, review the logs on the nodes.
I'm getting a lot of the following errors during ingest of data:
com.datastax.oss.driver.api.core.servererrors.WriteTimeoutException:
Cassandra timeout during COUNTER write query at consistency ONE (1
replica were required but only 0 acknowledged the write)
    at com.datastax.oss.driver.a
On Tue, Sep 14, 2021 at 11:47 AM Isaeed Mohanna wrote:
> Hi Jeff
>
> My data is partitioned by a sourceId and metric; a source is usually
> active for up to a year, after which there are no additional writes for the
> partition and reads become scarce. So although there is no explicit
> time component, it is time based. Will that suffice?
Hi Jeff
My data is partitioned by a sourceId and metric; a source is usually active for
up to a year, after which there are no additional writes for the partition and
reads become scarce. So although there is no explicit time component, it is
time based. Will that suffice?
If I use a week bucket w
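On the week-bucket idea, a rough sketch of what such a key could look like (the
bucket column and how it is computed are illustrative assumptions only):

CREATE TABLE metrics_by_source_week (
    source_id  uuid,
    metric     text,
    week_start date,             -- e.g. the Monday of the week, computed by the client
    ts         timestamp,
    value      double,
    PRIMARY KEY ((source_id, metric, week_start), ts)
) WITH CLUSTERING ORDER BY (ts DESC);

-- A time-range read then touches one partition per week in the range
SELECT ts, value
FROM metrics_by_source_week
WHERE source_id = ? AND metric = ? AND week_start = ?
  AND ts >= ? AND ts < ?;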
Thanks Erick for the update.
On Tue, 14 Sept 2021 at 16:50, Erick Ramirez wrote:
> You'll need to write an ETL app (most common case is with Spark) to scan
> through the existing data and update it with a new TTL. You'll need to make
> sure that the ETL job is throttled down so it doesn't overload your
> production cluster. Cheers!
On Tue, Sep 14, 2021 at 5:42 AM Isaeed Mohanna wrote:
> Hi
>
> I have a table that stores time series data. The data is not TTLed since
> we want to retain it for the foreseeable future, and there are no
> updates or deletes (deletes could happen rarely in case some scrambled
> data reached the table, but it's extremely rare).
Hi
I have a table that stores time series data. The data is not TTLed since we
want to retain it for the foreseeable future, and there are no updates or
deletes (deletes could happen rarely in case some scrambled data reached the
table, but it's extremely rare).
Usually we do constant writes
You'll need to write an ETL app (most common case is with Spark) to scan
through the existing data and update it with a new TTL. You'll need to make
sure that the ETL job is throttled down so it doesn't overload your
production cluster. Cheers!
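For what it's worth, the per-row rewrite such an ETL job ends up issuing is
just an upsert with an explicit TTL, roughly along these lines (table and
column names are placeholders from the sketches above):

-- Re-writing a column with USING TTL gives the written data the new TTL
UPDATE metrics_by_source USING TTL 7884000
SET value = ?                     -- re-write the existing value for the row
WHERE source_id = ? AND metric = ? AND ts = ?;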
Hi all,
1. I have a table with default_time_to_live = 31536000 (1 year). We want
to reduce the value to 7884000 (3 months).
If we alter the table, is there a way to update the existing data?
2. I have a table without TTL; we want to add TTL = 7884000 (3 months) on
the table.
If we alter the
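For reference, the table-level change itself is a plain ALTER TABLE (the
keyspace/table name below is a placeholder); this only affects data written
after the change, which is why existing rows need the ETL rewrite described in
the reply above:

ALTER TABLE my_keyspace.my_table WITH default_time_to_live = 7884000;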