Have you tried enabling 'unchecked_tombstone_compaction' on the affected
tables?
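For reference, a sketch of how that compaction subproperty can be set via CQL; the keyspace/table names and the other options shown are placeholders, not taken from the thread:

```sql
-- Hypothetical table. 'unchecked_tombstone_compaction' is a compaction
-- subproperty that lets single-sstable tombstone compactions run without
-- the overlap safety check that normally blocks them.
ALTER TABLE my_keyspace.my_table
WITH compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'compaction_window_size': '1',
  'compaction_window_unit': 'DAYS',
  'unchecked_tombstone_compaction': 'true'
};
```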
On Tue, Mar 26, 2019 at 5:01 AM Nick Hatfield wrote:
> How does one properly get rid of sstables that have fallen victim to
> overlapping timestamps? I realized that we had TWCS set in our CF which
> also had a read_rep
Thanks for the insight, Rahul. We’re using 1 day for the time window.
compaction = {'class':
'com.jeffjirsa.cassandra.db.compaction.TimeWindowCompactionStrategy',
'compaction_window_size': '1',
'compaction_window_unit': 'DAYS',
'max_threshold': '32',
'min_threshold': '4',
'timestamp_res
Or upgrade to a version with
https://issues.apache.org/jira/browse/CASSANDRA-13418 and enable that feature
--
Jeff Jirsa
> On Mar 26, 2019, at 6:23 PM, Rahul Singh wrote:
>
> What's your timewindow? Roughly how much data is in each window?
>
> If you examine the sstable data and see that
What's your timewindow? Roughly how much data is in each window?
If you examine the sstable data and see that it is truly old data with
little chance that it has any new data, you can just remove the SSTables.
You can do a rolling restart -- take down a node, remove mc-254400-* and
then start it up.
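The "examine and remove" step above could be sketched as a small script. Everything here is illustrative: it assumes Data files follow the `mc-<generation>-big-Data.db` naming seen in the thread, and it uses file mtime as a rough stand-in for the newest write in the sstable (the real check would inspect sstable metadata):

```python
import os
import time

def old_sstable_groups(data_dir, max_age_days):
    """Return sstable generation prefixes (e.g. 'mc-254400') whose Data
    file was last modified more than max_age_days ago. Heuristic only:
    mtime is a crude proxy for the newest timestamp in the sstable."""
    cutoff = time.time() - max_age_days * 86400
    old = []
    for name in os.listdir(data_dir):
        if name.endswith("-Data.db"):
            path = os.path.join(data_dir, name)
            if os.path.getmtime(path) < cutoff:
                # 'mc-254400-big-Data.db' -> 'mc-254400'
                old.append("-".join(name.split("-")[:2]))
    return sorted(old)
```

With the node stopped, each returned prefix would correspond to a `rm <prefix>-*` in the data directory before starting it back up.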
- the AWS people say EIPs are a PITA.
- if we hardcode the global IPs in the yaml, then yaml editing is required
for the occasional hard instance reboot in AWS and its attendant global IP
reassignment
- if we try leaving broadcast_rpc_address blank, null, or commented out
with rpc_address set to
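For context, the knobs being weighed above live in cassandra.yaml. A hedged sketch of the hardcoded-global-IP variant (all addresses are placeholders):

```yaml
# cassandra.yaml (placeholder addresses)
listen_address: 10.0.1.5            # private IP, node-to-node within a region
rpc_address: 0.0.0.0                # bind client port on all interfaces
broadcast_rpc_address: 54.12.34.56  # public IP advertised to clients
broadcast_address: 54.12.34.56      # public IP advertised cross-region
endpoint_snitch: Ec2MultiRegionSnitch
```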
On Tue, Mar 26, 2019 at 5:49 PM Carl Mueller wrote:
> Looking at the code it appears it shouldn't matter what we set the yaml
> params to. The Ec2MultiRegionSnitch should be using the aws metadata
> 169.254.169.254 to pick up the internal/external ips as needed.
>
This matches my expectation.
Looking at the code it appears it shouldn't matter what we set the yaml
params to. The Ec2MultiRegionSnitch should be using the aws metadata
169.254.169.254 to pick up the internal/external ips as needed.
I think I'll just have to dig into the code differences between 2.1 and
2.2. We don't want t
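The snitch behavior described above can be approximated with a toy function: given the values the instance metadata service would return, decide which address to advertise where. The function and its inputs are hypothetical; the real snitch fetches keys like `local-ipv4` and `public-ipv4` from http://169.254.169.254:

```python
def pick_broadcast_addresses(local_ipv4, public_ipv4):
    """Toy model of Ec2MultiRegionSnitch address selection: advertise the
    public IP for cross-region traffic, keep the private IP for traffic
    inside the region. Inputs stand in for the EC2 metadata values."""
    return {
        "broadcast_address": public_ipv4,  # cross-region gossip/clients
        "listen_address": local_ipv4,      # intra-region traffic
    }
```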
How does one properly get rid of sstables that have fallen victim to
overlapping timestamps? I realized that we had TWCS set in our CF which
also had a read_repair = 0.1 and after correcting this to 0.0 I can clearly
see the effects over time on the new sstables. However, I still have old sstables t
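The correction described (read_repair from 0.1 to 0.0) would look roughly like this in CQL on pre-4.0 Cassandra; the table name is a placeholder:

```sql
-- Placeholder table. With TWCS, foreground read repair can write old
-- cells into the current window's sstables, creating the overlaps
-- discussed in this thread.
ALTER TABLE my_keyspace.my_table
WITH read_repair_chance = 0.0
AND dclocal_read_repair_chance = 0.0;
```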
In my experience,
I'd use two methods to make sure that you are covering your ass.
1. The "old school" method would be to do the SSTable load from old to new
cluster -- if you do incremental snapshots, then you could technically
minimize downtime and just load the latest increments with a little
On Mon, Mar 25, 2019 at 11:13 PM Carl Mueller wrote:
>
> Since the internal IPs are given when the client app connects to the
> cluster, the client app cannot communicate with other nodes in other
> datacenters.
>
Why should it? The client should only connect to its local data center and
leave
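The "local data center only" point can be illustrated with a toy filter. This is not the driver's actual load-balancing code (real drivers do this with a DC-aware policy such as the Java/Python drivers' DCAwareRoundRobinPolicy); it just shows the selection rule:

```python
def local_dc_contact_points(hosts, local_dc):
    """Toy DC-aware host selection: the client should only talk to nodes
    in its own datacenter. hosts is a list of (address, dc) pairs."""
    return [addr for addr, dc in hosts if dc == local_dc]
```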