Thanks Ken. Investigating further, what I found is that the tombstones I am
seeing come from null values in the collection objects. Tombstones are also
inserted when the initial collection values are inserted, but it seems they
are not counted towards the threshold warning and do not show up in tracing.
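A minimal CQL sketch of the behaviour described above, using a hypothetical table (the names are illustrative, not from this thread): overwriting a whole collection value makes Cassandra write a range tombstone that clears the previous contents of the collection, and binding an explicit null writes a cell tombstone.

```
-- Hypothetical table, for illustration only.
CREATE TABLE demo.events (
    id   text PRIMARY KEY,
    tags set<text>,
    note text
);

-- Writing the whole set also emits a range tombstone that deletes any
-- previous contents of the collection for this row:
INSERT INTO demo.events (id, tags) VALUES ('k1', {'a', 'b'});

-- Explicitly binding null (instead of leaving the column unset) writes a
-- cell tombstone for that column:
INSERT INTO demo.events (id, note) VALUES ('k1', null);
```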
Thanks Jeff,
I have gcgs (gc_grace_seconds) set to 10 minutes and changed the table TTL
to 5 hours, compared to the insert TTL of 4 hours. Tracing doesn't show any
tombstone scans for the reads, and the log doesn't show tombstone scan
alerts either. The reads are running at 5-8k reads per node during peak hours.
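In CQL terms, the configuration being described would be roughly the following (a sketch: the statements are assumed, not copied from the cluster, and the table/column names are taken from the anonymized schema shared later in the thread):

```
-- gc_grace_seconds of 10 minutes and a table-level default TTL of 5 hours:
ALTER TABLE keyspace."table"
    WITH gc_grace_seconds = 600
    AND default_time_to_live = 18000;

-- Each insert carries its own 4-hour TTL, which overrides the table-level
-- default for that row:
INSERT INTO keyspace."table" ("column1", "column2")
VALUES ('some-key', 'some-value')
USING TTL 14400;
```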
I’m not parsing this - did the lower gcgs help or not? Seeing the table
histograms is the next step if this is still a problem.
The table level TTL doesn’t matter if you set a TTL on each insert
--
Jeff Jirsa
Would also be good to see your schema (anonymized if needed) and the select
queries you’re running
--
Jeff Jirsa
Changing gcgs didn't help
```
CREATE KEYSPACE ksname WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '3'}
    AND durable_writes = true;

CREATE TABLE keyspace."table" (
    "column1" text PRIMARY KEY,
    "column2" text
) WITH bloom_filter_fp_chance = 0.01
    AND caching
```
Rahul,
Please see this DataStax article which suggests you might be using Cassandra as
a queue-like dataset – and that’s an anti-pattern for Cassandra. It could be
you need to use a different database. It could be your data model is wrong:
https://www.datastax.com/dev/blog/cassandra-anti-p
Your schema is such that you’ll never read more than one tombstone per select
(unless you’re also doing range reads / table scans that you didn’t mention) -
I’m not quite sure what you’re alerting on, but you’re not going to have
tombstone problems with that table / that select.
--
Jeff Jirsa
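For context, a read against that anonymized schema would be a single-partition point select along the lines of the sketch below (the key literal is made up for illustration). Because the whole primary key is the partition key there is no range to walk, which is why such a read sees at most one tombstone.

```
-- Hypothetical point read on the anonymized schema shared above:
SELECT "column2"
FROM keyspace."table"
WHERE "column1" = 'some-key';
```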
Do you see anything wrong with this metric? It is the metric I use to alert
on scanned tombstones:
increase(cassandra_Table_TombstoneScannedHistogram{keyspace="mykeyspace",Table="tablename",function="Count"}[5m])
At the same time, CPU spikes to 50% whenever I see a high tombstone alert.
Rahul,
You wrote that during peak hours you only have a couple hundred inserts per
node so now I’m not sure why the default settings wouldn’t have worked just
fine. I sense there is more to the story. What else could explain those
tombstones?
You’ll only ever have one tombstone per read, so your load is based on normal
read rate, not tombstones. The metric isn’t wrong, but it’s not indicative of
a problem here given your data model.
You’re using STCS, so you may be reading from more than one sstable if you
update column2 for a given key.
Also, given your short TTL and low write rate, you may want to think about how
you can keep more in memory - this may mean a larger memtable and higher flush
thresholds (reading from the memtable), or perhaps the partition cache (if you
are likely to read the same key multiple times). You’ll also pro
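A hedged sketch of the partition-cache suggestion above, applied to the anonymized table from earlier in the thread. The rows_per_partition value is illustrative only, and the row cache also has to be given memory via row_cache_size_in_mb in cassandra.yaml for it to take effect.

```
-- Enable the partition (row) cache for this table so keys that are re-read
-- frequently can be served from memory. Illustrative values only.
ALTER TABLE keyspace."table"
    WITH caching = {'keys': 'ALL', 'rows_per_partition': '1'};
```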
Thanks Jeff,
Since writes are low and reads are high, most of the time the data is in
memtables only. When I initially noticed the issue there were no sstables on
disk - everything was in the memtable only.
JVM settings:
```
-XX:+UseThreadPriorities
-XX:ThreadPriorityPolicy=42
-XX:+HeapDumpOnOutOfMemoryError
-Xss256k
-XX:StringTableSize=103
-XX:+AlwaysPreTouch
-XX:-UseBiasedLocking
-XX:+UseTLAB
-XX:+ResizeTLAB
-XX:+UseNUMA
-XX:+PerfDisableSharedMem
-Djava.net.preferIPv4Stack=true
-XX:+UseG1GC
-XX:G1
```
G1GC with an 8g heap may be slower than CMS. Also you don’t typically set new
gen size on G1.
Again though - what problem are you solving here? If you’re serving reads and
sitting under 50% cpu, it’s not clear to me what you’re trying to fix.
Tombstones scanned won’t matter for your table, so i
When the CPU utilization spikes from 5-10% to 50%, how many nodes does it
happen to at the same time?
Reads increase on almost all nodes, and the same is the case with CPU - it
goes high on all nodes.