Hello everyone,
I am testing the procedure for restoring a table using the commitlogs,
so far without success.
I am following this doc (even though it is for DSE):
https://support.datastax.com/hc/en-us/articles/115001593706-Manual-Backup-and-Restore-with-Point-in-time-and-table-level-restore-
I am proba
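For reference, the point-in-time piece of that procedure is driven by
conf/commitlog_archiving.properties on the node. A minimal sketch of my
setup, with placeholder paths and timestamp (the table's snapshot has to
be restored first and the node restarted, per the doc):

    # conf/commitlog_archiving.properties
    # archive each segment when it is closed (%path = full path, %name = file name)
    archive_command=/bin/cp %path /backups/commitlog_archive/%name
    # copy archived segments back during replay
    restore_command=/bin/cp -f %from %to
    # where the node looks for archived segments at startup
    restore_directories=/backups/commitlog_archive
    # replay mutations only up to this point (format yyyy:MM:dd HH:mm:ss)
    restore_point_in_time=2018:06:01 12:00:00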
considering:
- whether the row size is large
- whether there are many updates (in Cassandra an update is actually an insert)
- whether the workload is read-heavy
- overall read performance
If the row size is large, you may consider a separate table such as
user_detail, and add an id column to all tables. On the application
side, merge/join by id. But you pay a read price: a second query to
user_detail.
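A hypothetical sketch of that split (all names invented):

    CREATE TABLE user (
        id   uuid PRIMARY KEY,
        name text               -- small, frequently-read columns stay here
    );

    CREATE TABLE user_detail (
        id          uuid PRIMARY KEY,
        bio         text,       -- large, rarely-read columns move here
        preferences text
    );

    -- the application-side join: the second read happens only
    -- when the detail columns are actually needed
    SELECT name FROM user WHERE id = 123e4567-e89b-12d3-a456-426655440000;
    SELECT bio  FROM user_detail WHERE id = 123e4567-e89b-12d3-a456-426655440000;

This keeps the hot table's rows small, at the cost of the second query
mentioned above.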
Hi Rahul,
The table TTL is 24 months, and the oldest data is 22 months old, so
nothing has expired yet. Compacted partition maximum bytes: 17 GB. Yeah,
I know that's not good, but we'll have to wait for the TTL to make it go
away. More recent partitions are kept under 100 MB by bucketing.
The data model:
CREAT
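That CREATE got cut off, but simplified and with the names genericized,
the bucketing shape is something like:

    CREATE TABLE events_by_entity (
        entity_id  uuid,
        bucket     text,        -- e.g. '2018-06', one month per partition
        event_time timestamp,
        payload    text,
        PRIMARY KEY ((entity_id, bucket), event_time)
    ) WITH default_time_to_live = 63072000;  -- 730 days, roughly the 24-month TTL

New writes only land in the current bucket, which is how the recent
partitions stay bounded.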
Here are some recommended settings to review and test for Cassandra 3.x,
from DataStax:
https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/config/configRecommendedSettings.html
Al Tobey has done extensive work in this area, too. This is dated (Cassandra
2.1), but is worth mining for information:
https:
Hi all,
I have a question about altering a schema. If we only add columns, is it
OK to alter the schema while writes to the table are happening at the
same time? We can ensure that the writes will not touch the new columns
until the schema change is done. Or is it better to stop the writes to
the table until the change completes?
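For concreteness, the kind of purely additive change I mean (table and
column names are just examples):

    ALTER TABLE ks.events ADD ingest_source text;

    -- existing writers keep inserting only the old columns; the
    -- application would start writing ingest_source only after the
    -- schema change has propagated to all nodes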
This is safe (and normal, and good) in all versions except those impacted
by https://issues.apache.org/jira/browse/CASSANDRA-13004
So if you're on 2.1, 2.2, or 3.11, you're fine.
If you're on 3.0, between 3.0.0 and 3.0.13, you should upgrade first (to
the newest 3.0, probably 3.0.17).
If you're on a version between 3.1 and 3.10, you should also upgrade
first (to the newest 3.11).
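(To check which version each node is actually running, the standard
system table has it:

    SELECT release_version FROM system.local;

Run it through cqlsh against each node, since versions can differ
mid-upgrade.)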