Those properties, but 21600 is probably more aggressive than I'd use myself.
I'm not 100% sure, but I suspect I'd try something over 12 hours.
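For scale: 21600 seconds is 6 hours, and "something over 12 hours" would be 43200 seconds or more. A minimal sketch of that adjustment, assuming the 21600 above is tombstone_compaction_interval and using a hypothetical table name (ALTER TABLE replaces the whole compaction map, so the other options are restated):

    -- hypothetical table; interval raised from 6h (21600 s) to 24h (86400 s, the default)
    ALTER TABLE my_ks.my_table
    WITH compaction = {
        'class' : 'TimeWindowCompactionStrategy',
        'compaction_window_unit' : 'HOURS',
        'compaction_window_size' : 12,
        'unchecked_tombstone_compaction' : 'true',
        'tombstone_compaction_interval' : 86400 };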
> On Oct 28, 2020, at 10:37 PM, Eunsu Kim wrote:
Thank you for your response.
What subproperties do you mean specifically?
Currently, there are the following settings for aggressive purging:
AND COMPACTION = { 'class' : 'TimeWindowCompactionStrategy',
'compaction_window_unit' : 'HOURS', 'compaction_window_size' : 12,
'unchecked_tombstone_compac
That works, but it requires you to enable the tombstone compaction subproperties
if you need to purge the 2w TTL data before the highest TTL you chose.
> On Oct 28, 2020, at 5:58 PM, Eunsu Kim wrote:
Hello,
I have a table with a default TTL (2w). I'm using TWCS (window size: 12h) on the
recommendation of experts. This table is quite big, with a high WPS.
I would like to insert data into this table with a TTL different from the default,
according to the type of data.
About four different TTLs (4w, 6w, 8w, 10w
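A per-write TTL that overrides the table default is set with USING TTL; a minimal sketch, using a hypothetical keyspace, table, and columns (4 weeks = 4 * 7 * 86400 = 2419200 seconds):

    -- hypothetical schema; the TTL is given in seconds
    INSERT INTO my_ks.my_table (id, ts, payload)
    VALUES ('sensor-1', toTimestamp(now()), 'data')
    USING TTL 2419200;

Mixing TTLs like this in a TWCS table is what the tombstone-subproperty discussion above is about: an SSTable can only be dropped wholesale once everything in it has expired.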
That particular cluster exists for archival purposes, and as such gets a
very low amount of traffic (maybe 5 queries per minute). So not
particularly helpful to answer your question :-) With that said, we've seen
in other clusters that scalability issues are much more likely to come from
hot pa
Hey,
Thanks for chipping in, Tomas. Could you describe what sort of workload the big
cluster is receiving in terms of local C* reads, writes, and client requests as
well?
You mention repairs; how do you run them?
Gediminas
From: Tom van der Woerdt
Sent: Wednesday, October 28, 2020 14:35
To: user
Heya,
We're running version 3.11.7; we can't use 3.11.8 as it won't even start
(CASSANDRA-16091). Our policy is to use LCS for everything unless there's a
good argument for a different compaction strategy (I don't think we have
*any* STCS at all other than system keyspaces). Since our nodes are mostl
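A minimal sketch of what that per-table LCS policy looks like, with a hypothetical table name (sstable_size_in_mb shown at its usual 160 MB default):

    -- hypothetical table; LCS with the usual target SSTable size
    ALTER TABLE my_ks.my_table
    WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                        'sstable_size_in_mb' : 160 };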
Leon,
we had an awful performance/throughput experience with 3.x coming from 2.1.
3.11 is simply a memory hog if you are using batch statements on the client
side. If so, you are likely affected by
https://issues.apache.org/jira/browse/CASSANDRA-16201
Regards,
Thomas
A few questions for you, Tom, if you have 30 seconds and care to disclose:
1. What version of C*?
2. What compaction strategy?
3. What's the core count allocated per C* node?
4. Does gossip give you any headaches / do you have to be delicate there, or does
it behave itself?
Context: pmc/committer
Does 360 count? :-)
num_tokens is 16 and works fine (we had 256 on a 300-node cluster as well, without
too many problems either). Roughly 2.5TB per node, running on-prem on
reasonably stable hardware, so replacements end up happening once a week at
most, and there's no particular change needed in the automat
Hello,
I wanted to seek out your opinion and experience.
Has any of you had a chance to run a Cassandra cluster of more than 350
nodes?
What are the major configuration considerations that you had to focus on? What
number of vnodes did you use?
Once the cluster was up and running, what would