>>>>>>>>> Bloom filter false positives: 1
>>>>>>>>> Bloom filter false ratio: 0.0
>>>>>>>>> Bloom filter space used: 23888
>>>>
>>>> Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
>>>>                       (micros)       (micros)      (bytes)
>>>> 50%         0.00      20.50          17.08         86              1
>>>> 75%         0.00      24.60          20.50         124             1
>>>> 95%         0.00      35.43          29.52         124             1
>
> *From:* Rahul Reddy [mailto:rahulreddy1...@gmail.com]
> *Sent:* Saturday, February 23, 2019 7:26 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Tombstones in memtable
>
> ```jvm setting
>
> -XX:+UseThreadPriorities
>
> -XX:ThreadPriorityPolicy=42
> ```
When the CPU utilization spikes from 5-10% to 50%, how many nodes does it
happen to at the same time?
From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
Sent: Saturday, February 23, 2019 7:26 PM
To: user@cassandra.apache.org
Subject: Re: Tombstones in memtable
>>>> Maximum tombstones per slice (last five minutes): 1
>>>> Dropped Mutations: 0
>>>>
>>>> histograms
>>>> Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
>>>>                       (micros)       (micros)      (bytes)
>>>> Min         0.00      8.24           5.72          73              0
>>>> Max         1.00      42.51          152.32        124             1
>>>
>>> 3 nodes in dc1 and 3 nodes in dc2 cluster, with instance type AWS EC2
>>> m4.xlarge
>>>
>>>
>>> --
>>> Jeff Jirsa
>>>
>>>
>>> On Feb 23, 2019, at 4:37 PM, Rahul Reddy
>>> wrote:
>>>
>>> Thanks Jeff,
>>>
>>> I have gcgs set to 10 mins and changed
>>>
>>>
>>>> On Sat, Feb 23, 2019, 7:47 PM Jeff Jirsa wrote:
>>>> Would also be good to see your schema (anonymized if needed) and the
>>>> select queries you’re running
>>>>
>>>>
>>>> --
>>
From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
Sent: Saturday, February 23, 2019 5:56 PM
To: user@cassandra.apache.org
Subject: Re: Tombstones in memtable
Changing gcgs didn't help
CREATE KEYSPACE ksname WITH replication = {'class': 'NetworkTopologyStrategy',
'dc1': '3', 'dc2': '3'}
>> any tombstone
>> scans for the reads. Also, the log doesn't show tombstone scan alerts. As
>> the reads are happening at 5-8k reads per node during the peak hours, it
>> shows a 1M tombstone scan count per read.
>>
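The per-read tombstone count mentioned above can also be checked for a single
query from cqlsh with request tracing; a sketch, with placeholder keyspace,
table, and key names:

```
-- cqlsh sketch: ksname.tablename and 'some-key' are placeholders.
TRACING ON;
SELECT * FROM ksname.tablename WHERE id = 'some-key';
-- The trace output includes lines like
-- "Read N live rows and M tombstone cells" for each replica touched.
TRACING OFF;
```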
>> On Fri, Feb 22, 2019, 11:46 AM Jeff Jirsa wrote:
>>>> If all of your data is TTL’d and you never explicitly delete a cell
>>>> without using a TTL, you can probably drop your GCGS to 1 hour (or less).
>>>>
>>>> Which compaction strategy are you using? You need compaction to grab
>>>> sstables just because they’re full of tombstones, which will probably
>>>> help you.
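This advice could look roughly like the following CQL. The table name is a
placeholder, the choice of TimeWindowCompactionStrategy is only an assumption
(the thread never confirms which strategy is in use, though TWCS is common for
purely TTL'd data), and the tombstone options are the standard compaction
subproperties:

```
-- Sketch only: 'ksname.tablename' and TWCS are assumptions.
-- gc_grace_seconds = 3600 follows the "1 hour (or less)" suggestion above.
-- 'unchecked_tombstone_compaction' lets a single sstable be compacted on
-- tombstone density alone; 'tombstone_threshold' is the density trigger.
ALTER TABLE ksname.tablename
WITH gc_grace_seconds = 3600
AND compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'HOURS',
    'compaction_window_size': '4',
    'unchecked_tombstone_compaction': 'true',
    'tombstone_threshold': '0.2'
};
```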
-patterns-queues-and-queue-like-datasets
Kenneth Brotman
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Saturday, February 23, 2019 4:47 PM
To: user@cassandra.apache.org
Subject: Re: Tombstones in memtable
Would also be good to see your schema (anonymized if needed) and the select
queries you’re running.
>> You need compaction to grab sstables just because they’re full of
>> tombstones, which will probably help you.
>>
>>
>> --
>> Jeff Jirsa
>>
>>
>> On Feb 22, 2019, at 8:37 AM, Kenneth Brotman <
>> kenbrot...@yahoo.com.invalid> wrote:
>>
>> Can we see the histogram? Why wouldn’t you at times have that many
>> tombstones? Makes sense.
>>> On Feb 22, 2019, at 8:37 AM, Kenneth Brotman
>>> wrote:
>>>
>>> Can we see the histogram? Why wouldn’t you at times have that many
>>> tombstones? Makes sense.
>>>
>>>
>>>
>>> Kenneth Brotman
>>>
>
>
> Kenneth Brotman
>
> From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
> Sent: Thursday, February 21, 2019 7:06 AM
> To: user@cassandra.apache.org
> Subject: Tombstones in memtable
>
Can we see the histogram? Why wouldn’t you at times have that many tombstones?
Makes sense.
Kenneth Brotman
From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
Sent: Thursday, February 21, 2019 7:06 AM
To: user@cassandra.apache.org
Subject: Tombstones in memtable
We have a small table; records are about 5k.
All the inserts come with a 4 hr TTL, we have a table-level TTL of 1 day, and
gc_grace_seconds is 3 hours. We do 5k reads a second during peak load.
During the peak load we are seeing alerts for the tombstone scanned histogram
reaching a million.
Cassandra version 3.11.1.
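For reference, the setup described above corresponds to table options along
these lines (keyspace, table, and column names are invented):

```
-- Sketch of the described configuration; all names are placeholders.
CREATE TABLE ksname.small_table (
    id    text PRIMARY KEY,
    value text
) WITH default_time_to_live = 86400   -- table-level TTL: 1 day
  AND gc_grace_seconds = 10800;       -- 3 hours

-- Inserts carry their own 4-hour TTL, which overrides the table default:
INSERT INTO ksname.small_table (id, value)
VALUES ('some-key', 'some-value') USING TTL 14400;
```

With these settings, a cell written at time t expires at t + 4 h, and the
resulting tombstone only becomes eligible for purge at t + 7 h (expiry plus
gc_grace_seconds), and then only once compaction actually touches the sstable
holding it.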