Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
ready an open and fresh Jira ticket about it: https://issues.apache.org/jira/browse/CASSANDRA-19383 Bye, Gábor AUTH

Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
ee with you about it. But it looks like a strange and interesting issue... the affected table has only ~1300 rows and less than 200 kB of data. :) Also, I found the same issue: https://dba.stackexchange.com/questions/325140/single-node-failure-in-cassandra-4-0-7-causes-cluster-to-run-into-high-cpu Bye,

Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
calRunnable.run(FastThreadLocalRunnable.java:30) [cassandra-dc03-1] at java.base/java.lang.Thread.run(Unknown Source) -- Bye, Gábor AUTH

Re: Change num_tokens in a live cluster

2024-05-16 Thread Gábor Auth
my current case there are only 4 nodes, with a total of maybe ~25GB of data. So, creating a new DC is more hassle for me than replacing nodes one-by-one. My question was whether there is a simpler solution. And it looks like there is none... :( Bye, Gábor AUTH

Re: Change num_tokens in a live cluster

2024-05-16 Thread Gábor Auth
data on each node and it has only 4 nodes, so I'd call it very small. :) Bye, Gábor AUTH

Re: Change num_tokens in a live cluster

2024-05-16 Thread Gábor Auth
needs to > copy data once, and can copy from/to multiple nodes concurrently, therefore > is significantly faster, at the cost of doubling the number of nodes > temporarily. > For me it's easier to replace the nodes one-by-one in the same DC, so no new technique is needed... :) Thanks, Gábor AUTH

Change num_tokens in a live cluster

2024-05-16 Thread Gábor Auth
Hi. Is there a newer/easier workflow to change num_tokens in an existing cluster than adding a new node to the cluster with the new num_tokens value and decommissioning an old one, rinse and repeat through all nodes? -- Bye, Gábor AUTH
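For reference, a minimal sketch of that add-and-decommission cycle, assuming the target value is num_tokens: 16 and the host name is a placeholder:

  # cassandra.yaml on each new node, before its first start:
  num_tokens: 16
  # let the new node bootstrap and join, then retire one old node:
  nodetool -h old-node-1 decommission
  # verify the ring with `nodetool status` before repeating with the next pair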

Re: Cassandra nightly process

2023-01-16 Thread Gábor Auth
r metrics about your VPSs (CPU, memory, load, IO stat, disk throughput, network traffic, etc.)? I think some process (on another virtual machine or host) is stealing your resources, so your Cassandra cannot process the requests and the other instances need to put the data into hints. -- Bye, Gábor Auth

Re: Change IP address (on 3.11.14)

2022-12-06 Thread Gábor Auth
Hi, On Tue, Dec 6, 2022 at 12:41 PM Lapo Luchini wrote: > I'm trying to change IP address of an existing live node (possibly > without deleting data and streaming terabytes all over again) following > these steps: https://stackoverflow.com/a/57455035/166524 > 1. echo 'auto_bootstrap: false' >>
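The quoted steps are cut off above; as a rough sketch of the same idea (config path and address are placeholders, the data directory is kept, and the node is matched by its host ID rather than its IP):

  # with the node stopped:
  echo 'auto_bootstrap: false' >> /etc/cassandra/cassandra.yaml
  # update listen_address / rpc_address (and broadcast_* if set) to the new IP, then start the node
  nodetool status   # the node should reappear under the new IP with its old host ID and data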

Re: TWCS repair and compact help

2021-06-29 Thread Gábor Auth
Hi, On Tue, Jun 29, 2021 at 12:34 PM Erick Ramirez wrote: > You definitely shouldn't perform manual compactions -- you should let the > normal compaction tasks take care of it. It is unnecessary to manually run > compactions since it creates more problems than it solves as I've explained > in th

Re: Last stored value metadata table

2020-11-10 Thread Gábor Auth
Hi, On Tue, Nov 10, 2020 at 6:29 PM Alex Ott wrote: > What about using "per partition limit 1" on that table? > Oh, it is almost a good solution, but actually the key is ((epoch_day, name), timestamp), to support more distributed partitioning, so... it is not good... :/ -- Bye, Auth Gábor (h
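To illustrate the mismatch (assumed table shape, not the real schema): with the partition key ((epoch_day, name), timestamp) and timestamp clustered descending, PER PARTITION LIMIT returns the newest row per day and name, not per name overall:

  SELECT name, timestamp, value
  FROM measurement
  PER PARTITION LIMIT 1;   -- newest row per (epoch_day, name) partition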

Re: Last stored value metadata table

2020-11-10 Thread Gábor Auth
Hi, On Tue, Nov 10, 2020 at 5:29 PM Durity, Sean R wrote: > Updates do not create tombstones. Deletes create tombstones. The above > scenario would not create any tombstones. For a full solution, though, I > would probably suggest a TTL on the data so that old/unchanged data > eventually gets re

Re: Last stored value metadata table

2020-11-10 Thread Gábor Auth
Hi, On Tue, Nov 10, 2020 at 3:18 PM Durity, Sean R wrote: > My answer would depend on how many “names” you expect. If it is a > relatively small and constrained list (under a few hundred thousand), I > would start with something like: > At the moment, the number of names is more than 10,000 but

Last stored value metadata table

2020-11-09 Thread Gábor Auth
Hi, Short story: storing time series of measurements (key(name, timestamp), value). The problem: get the list of the last `value` of every `name`. Is there a Cassandra-friendly solution to store the last value of every `name` in a separate metadata table? It will come with a lot of tombstones...
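One Cassandra-friendly sketch of such a metadata table (names are made up): keep exactly one row per name and upsert it on every write, so reads never scan history and plain overwrites do not create tombstones:

  CREATE TABLE last_value_by_name (
    name text PRIMARY KEY,
    updated_at timestamp,
    value double
  );
  -- alongside each measurement insert:
  UPDATE last_value_by_name SET updated_at = ?, value = ? WHERE name = ?;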

Re: Cassandra Delete vs Update

2020-05-23 Thread Gábor Auth
Hi, On Sat, May 23, 2020 at 6:26 PM Laxmikant Upadhyay wrote: > Thanks you so much for quick response. I completely agree with Jeff and > Gabor that it is an anti-pattern to build queue in Cassandra. But plan is > to reuse the existing Cassandra infrastructure without any additional cost > (lik

Re: Cassandra Delete vs Update

2020-05-23 Thread Gábor Auth
Hi, On Sat, May 23, 2020 at 4:09 PM Laxmikant Upadhyay wrote: > I think that we should avoid tombstones specially row-level so should go > with option-1. Kindly suggest on above or any other better approach ? > Why don't you use a queue implementation, like ActiveMQ, Kafka or something? Cassa

Re: Schema disagreement

2018-05-01 Thread Gábor Auth
Hi, On Tue, May 1, 2018 at 10:27 PM Gábor Auth wrote: > One or two years ago I've tried the CDC feature but switched off... maybe > is it a side effect of switched off CDC? How can I fix it? :) > Okay, I've worked it out. Updated the schema of the affected keyspaces on the

Re: Schema disagreement

2018-05-01 Thread Gábor Auth
Hi, On Tue, May 1, 2018 at 7:40 PM Gábor Auth wrote: > What can I do? Any suggestion? :( > Okay, I've diffed the good and the bad system_schema tables. The only difference is the `cdc` field in three keyspaces (in `tables` and `views`): - the value of the `cdc` field on the good nod
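For anyone hitting the same difference, the flag can be inspected and aligned per table with something like this (keyspace and table names are placeholders; the cdc column and table option exist from 3.8 on):

  SELECT keyspace_name, table_name, cdc
  FROM system_schema.tables WHERE keyspace_name = 'mykeyspace';
  ALTER TABLE mykeyspace.mytable WITH cdc = false;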

Re: Schema disagreement

2018-05-01 Thread Gábor Auth
Hi, On Mon, Apr 30, 2018 at 11:11 PM Gábor Auth wrote: > On Mon, Apr 30, 2018 at 11:03 PM Ali Hubail > wrote: > >> What steps have you performed to add the new DC? Have you tried to follow >> certain procedures like this? >> >> https://docs.datastax.com/en/c

Re: Schema disagreement

2018-04-30 Thread Gábor Auth
Hi, On Mon, Apr 30, 2018 at 11:03 PM Ali Hubail wrote: > What steps have you performed to add the new DC? Have you tried to follow > certain procedures like this? > > https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html > Yes, exactly. :/ Bye, Gábor Auth

Re: Schema disagreement

2018-04-30 Thread Gábor Auth
Hi, On Mon, Apr 30, 2018 at 11:39 AM Gábor Auth wrote: > 've just tried to add a new DC and new node to my cluster (3 DCs and 10 > nodes) and the new node has a different schema version: > Is it normal? The node is marked down but runs a repair successfully? WARN [MigrationStage

Schema disagreement

2018-04-30 Thread Gábor Auth
- cluster restart (node-by-node) The MigrationManager is constantly running on the new node and trying to migrate the schema: DEBUG [NonPeriodicTasks:1] 2018-04-30 09:33:22,405 MigrationManager.java:125 - submitting migration task for /x.x.x.x What else can I do? :( Bye, Gábor Auth
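Two commands that are often useful here, for what it's worth: one to watch whether the schema versions converge, and one (used with care, on the disagreeing node only) to drop and re-pull its local schema:

  nodetool describecluster    # lists the schema versions and which nodes report each
  nodetool resetlocalschema   # on the out-of-sync node: rebuild the local schema from the others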

Re: Cassandra vs MySQL

2018-03-12 Thread Gábor Auth
ndra is not a 'drop in' replacement for MySQL. Maybe it will be faster, maybe it will be totally unusable, depending on your use-case and database schema. Is there some good more recent material? > Are you able to completely redesign your database schema? :) Bye, Gábor Auth

Re: Materialized Views marked experimental

2017-10-27 Thread Gábor Auth
withdrawn (the issue can't be fixed)? :) Bye, Gábor Auth

Re: Alter table gc_grace_seconds

2017-10-04 Thread Gábor Auth
Hi, On Wed, Oct 4, 2017 at 8:39 AM Oleksandr Shulgin < oleksandr.shul...@zalando.de> wrote: > If you have migrated ALL the data from the old CF, you could just use > TRUNCATE or DROP TABLE, followed by "nodetool clearsnapshot" to reclaim the > disk space (this step has to be done per-node). > Un
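In command form the suggestion looks roughly like this (keyspace and table names are placeholders); TRUNCATE and DROP take an automatic snapshot when auto_snapshot is enabled, which is why the clearsnapshot step is needed on every node:

  TRUNCATE mykeyspace.old_cf;           -- in cqlsh, once
  nodetool clearsnapshot mykeyspace     # on each node; newer versions may want -t <tag> or --all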

Re: Alter table gc_grace_seconds

2017-10-02 Thread Gábor Auth
dra cassandra 32843468 Oct 2 19:15 mc-48435-big-Data.db
-rw-r--r-- 1 cassandra cassandra 24734857 Oct 2 19:53 mc-48440-big-Data.db
Two of them untouched and one rewritten with the same content. :/ Bye, Gábor Auth

Re: Alter table gc_grace_seconds

2017-10-02 Thread Gábor Auth
t Now check both the list results. If they have some common sstables then we > can say that C* is not compacting sstables. > Yes, exactly. How can I fix it? Bye, Gábor Auth

Re: Alter table gc_grace_seconds

2017-10-02 Thread Gábor Auth
xperience zombie > data (i.e. data that was previously deleted coming back to life) > It is a test cluster with test keyspaces. :) Bye, Gábor Auth >

Re: Alter table gc_grace_seconds

2017-10-01 Thread Gábor Auth
AND min_index_interval = 128 AND read_repair_chance = 0.0 AND speculative_retry = '99PERCENTILE';

cassandra@cqlsh:mat> select gc_grace_seconds from system_schema.tables where keyspace_name = 'mat' and table_name = 'number_item';

 gc_grace_seconds
------------------
             3600

(1 rows)

Bye, Gábor Auth

Re: Alter table gc_grace_seconds

2017-10-01 Thread Gábor Auth
t; I've tried the test case that you described and it works (the compaction removed the marked_deleted rows) on a newly created CF. But the same gc_grace_seconds setting has no effect on the `number_item` CF (millions of rows were deleted during a migration last week). Bye, Gábor Auth

Re: Alter table gc_grace_seconds

2017-10-01 Thread Gábor Auth
(with 4 nodes each). Bye, Gábor Auth

Re: Alter table gc_grace_seconds

2017-10-01 Thread Gábor Auth
Hi, On Sun, Oct 1, 2017 at 6:53 PM Jonathan Haddad wrote: > The TTL is applied to the cells on insert. Changing it doesn't change the > TTL on data that was inserted previously. > Is there any way to purge out this tombstoned data? Bye, Gábor Auth

Re: Alter table gc_grace_seconds

2017-10-01 Thread Gábor Auth
he repair will remove it. Am I right? Bye, Gábor Auth

Re: Alter table gc_grace_seconds

2017-10-01 Thread Gábor Auth
on" : 146160, "clustering" : [ "humidity", "97781fd0-9dab-11e7-a3d5-7f6ef9a844c7" ], "deletion_info" : { "marked_deleted" : "2017-09-25T11:51:19.165276Z", "local_delete_time" : "2017-09-25T11:51:19Z" }, "cells" : [ ] } How can I purge these old rows? :) I've tried: compact, scrub, cleanup, clearsnapshot, flush and full repair. Bye, Gábor Auth

Alter table gc_grace_seconds

2017-10-01 Thread Gábor Auth
Hi, Does `alter table number_item with gc_grace_seconds = 3600;` set the grace period only for tombstones from future modifications of the number_item column family, or does it affect all existing data? Bye, Gábor Auth

Re: Purge data from repair_history table?

2017-03-20 Thread Gábor Auth
DAYS', 'compaction_window_size':'1' } AND default_time_to_live = 2592000; Does it affect the previous contents of the table or do I need to truncate manually? Is 'TRUNCATE' safe? :) Bye, Gábor Auth
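The statement above is cut off at the start; the full shape of that kind of ALTER is roughly the following, and note that a table's default_time_to_live is applied at write time, so it only affects data written after the change, which is why the existing rows would still need a TRUNCATE:

  ALTER TABLE system_distributed.repair_history
    WITH compaction = { 'class': 'TimeWindowCompactionStrategy',
                        'compaction_window_unit': 'DAYS',
                        'compaction_window_size': '1' }
    AND default_time_to_live = 2592000;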

Re: Purge data from repair_history table?

2017-03-17 Thread Gábor Auth
RA-12701). > > 2017-03-17 8:36 GMT-03:00 Gábor Auth : > > Hi, > > I've discovered a relatively huge amount of data in the system_distributed > keyspace's repair_history table: >Table: repair_history >Space used (live): 389409804 >

Re: Slow repair

2017-03-17 Thread Gábor Auth
Hi, On Wed, Mar 15, 2017 at 11:35 AM Ben Slater wrote: > When you say you’re running repair to “rebalance” do you mean to populate > the new DC? If so, the normal/correct procedure is to use nodetool rebuild > rather than repair. > Oh, thank you! :) Bye, Gábor Auth >
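For the archives: rebuild is run on every node of the new DC and names an existing DC to stream from (the DC name here is a placeholder):

  nodetool rebuild -- DC01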

Purge data from repair_history table?

2017-03-17 Thread Gábor Auth
method to purge? :) Bye, Gábor Auth

Slow repair

2017-03-15 Thread Gábor Auth
40] Repair session aae06160-0943-11e7-9c1f-f5ba092c6aea for range [(-7542303048667795773,-7300899534947316960]] finished (progress: 34%)
[2017-03-15 06:03:17,786] Repair completed successfully
[2017-03-15 06:03:17,787] Repair command #4 finished in 10 minutes 39 seconds
Bye, Gábor Auth

Re: Archive node

2017-03-06 Thread Gábor Auth
ation from > Production. Plus no operational overhead. > I think this is also operational overhead... :) Bye, Gábor Auth

Archive node

2017-03-06 Thread Gábor Auth
' and change the replication factor of old keyspaces from {'class': 'NetworkTopologyStrategy', 'DC01': '3', 'DC02': '3'} to {'class': 'NetworkTopologyStrategy', 'Archive': '1'}, and repair the keyspace. What do you think? Any other idea? :) Bye, Gábor Auth
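In CQL the plan looks roughly like this (keyspace name is a placeholder); one cautious ordering is to add the Archive DC alongside the old ones first, repair or rebuild so the Archive node actually holds the data, and only then drop the old DCs from the map:

  ALTER KEYSPACE old_keyspace WITH replication =
    { 'class': 'NetworkTopologyStrategy', 'DC01': '3', 'DC02': '3', 'Archive': '1' };
  -- repair the keyspace (or nodetool rebuild on the Archive node), then:
  ALTER KEYSPACE old_keyspace WITH replication =
    { 'class': 'NetworkTopologyStrategy', 'Archive': '1' };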

Re: Cassandra version numbering

2017-02-24 Thread Gábor Auth
Hi, On Thu, Feb 23, 2017 at 10:59 PM Rakesh Kumar wrote: > Is ver 3.0.10 the same as 3.10? > No. As far as I know, the 3.0.x line is an "LTS" release with only bug and security fixes, while the 3.x versions alternate between feature and bug-fix releases. Bye, Gábor AUTH

Upgrade from 3.6 to 3.9

2016-10-28 Thread Gábor Auth
Java heap space). Bye, Gábor Auth