And btw I've been assuming your reads are running without those 250k
column inserts going at the same time. It would be difficult to see
what's going on if you have both of those traffic patterns at the same
time.
--
/ Peter Schuller
And btw you can also read the average wait time (await) and average
service time columns directly out of iostat, to confirm that individual
I/O requests are taking longer than on a non-saturated drive.
--
/ Peter Schuller
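iostat's await is just (time spent on I/O) / (I/Os completed) between two
samples of the kernel's disk counters. A minimal sketch of the same
arithmetic against /proc/diskstats — field positions assumed from the
kernel's iostats documentation, and the helper names are illustrative, not
iostat internals:

```python
def parse_diskstats(text, device):
    """Return (ios_completed, ms_spent_on_io) for one device.

    Assumes the classic /proc/diskstats layout: after major, minor, name
    come reads completed, reads merged, sectors read, ms reading,
    writes completed, writes merged, sectors written, ms writing, ...
    """
    for line in text.splitlines():
        f = line.split()
        if len(f) >= 11 and f[2] == device:
            ios = int(f[3]) + int(f[7])   # reads + writes completed
            ms = int(f[6]) + int(f[10])   # ms reading + ms writing
            return ios, ms
    raise ValueError("device %r not found" % device)

def average_await_ms(before, after):
    """Average wait per request (ms) between two (ios, ms) samples."""
    ios = after[0] - before[0]
    ms = after[1] - before[1]
    return ms / ios if ios else 0.0
```

On a saturated drive this per-request figure climbs well above the device's
nominal service time, which is exactly what the await column shows.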
> Actually when I run 2 stress clients in parallel I see Read Latency stay the
> same. I wonder if Cassandra is reporting accurate numbers.
Or you're just bottlenecking on something else. Are you running the
extra stress client on a different machine, for example, so that the
client isn't just saturating its own host?
Actually when I run 2 stress clients in parallel I see Read Latency stay the
same. I wonder if Cassandra is reporting accurate numbers.
I understand your analogy, but for some reason I don't see that happening
with the results I am seeing with multiple stress clients running. So I am
just confused where the bottleneck is.
> I still don't understand. You would expect read latency to increase
> drastically when it's fully saturated, and a lot of dropped READ messages
> too, correct? I don't see that in cfstats or system.log, which I don't
> really understand.
No. With a fixed concurrency there are only so many outstanding requests
at any one time, so latency plateaus at a level set by the backlog rather
than growing without bound, and messages aren't necessarily dropped.
I still don't understand. You would expect read latency to increase
drastically when it's fully saturated, and a lot of dropped READ messages
too, correct? I don't see that in cfstats or system.log, which I don't
really understand.
--
View this message in context:
http://cassandra-user-incubator-ap
> But read latency is still something like 30ms which I would think would be
> much higher if it's saturated.
No. You're using stress, so you have some total cap on concurrency.
Given a fixed concurrency, you'll saturate at some particular average
latency which is mostly a function of the backlog size.
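The "fixed concurrency caps latency" point is just Little's Law: with N
requests in flight, average latency converges to N divided by throughput.
A small sketch (the numbers below are illustrative, not measurements from
this thread's hardware):

```python
def saturated_latency_ms(concurrency, throughput_per_sec):
    """Average latency (ms) implied by Little's Law: latency = N / X."""
    return concurrency / throughput_per_sec * 1000.0

# e.g. ~100 requests in flight against a disk doing ~3300 reads/s
# settles at roughly 30 ms average latency, however saturated it is.
print(saturated_latency_ms(100, 3300))
```

At saturation, doubling the in-flight request count roughly doubles the
backlog and hence the latency — which is why an unchanged latency with a
second stress client suggests the bottleneck is somewhere else (e.g. the
client machine itself).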
One correction: queue size in iostat ranges between 6-120. But still this
doesn't explain why read latency is low in cfstats.
Peter Schuller wrote:
>
> Saturated.
>
But read latency is still something like 30ms which I would think would be
much higher if it's saturated.
> Does it really matter how long cassandra has been running? I thought it
> would keep at least 1M keys.
It will keep up to the limit, and it will save caches periodically and
reload them on start. But the cache needs to be populated by traffic
first. If you wrote a bunch of data, enabled the row cache, and then
restarted, it would start out empty until reads populate it.
Does it really matter how long Cassandra has been running? I thought it
would keep at least 1M keys.
Regarding your previous question about queue size in iostat, I see it
ranging from 114-300.
> One thing I am noticing is that the cache hit rate is very low even though
> my key cache size is 1M and I have less than 1M rows. Not sure why there
> are so many cache misses?
The key cache should be strictly LRU for read-only workloads. For
write/read workloads it may not be strictly LRU because compaction
can invalidate cached entries when it replaces sstables.
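"Strictly LRU" just means that once the cache is full, the
least-recently-read key is the one evicted. A toy sketch of that policy
(illustrative only — not Cassandra's key cache implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal strictly-LRU cache: evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                    # cache miss
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

With fewer rows than the cache capacity, a pure read workload against such
a cache should converge to a near-100% hit rate — which is why a low hit
rate points at the cache not yet being populated, or entries being
invalidated out from under it.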
One thing I am noticing is that the cache hit rate is very low even though
my key cache size is 1M and I have less than 1M rows. Not sure why there
are so many cache misses?
Keyspace: StressKeyspace
Read Count: 162506
Read Latency: 45.22479006928975 ms.
Write Count: 247180
Write La
> Yes
Without checking, I don't know the details of the memtable threshold
calculations well enough to be sure whether large columns are somehow
causing the size estimations to be ineffective (off hand I would expect
the reverse, since the overhead of the Java object structures becomes
much less significant with large values).
Yes
> Heap is 0.7802529021498031 full. You may need to reduce memtable and/or
> cache sizes Cassandra will now flush up to the two largest memtables to free
> up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if
> you don't want Cassandra to do this automatically
>
> How do I ver
64 bit 12 core 96 GB RAM
Your JVM heap has reached 78% full, so Cassandra automatically flushes its
memtables. You need to explain more about your configuration: 32 or 64 bit
OS, what is the max heap size, and how much RAM is installed?
If this happens under stress test conditions it's probably understandable.
You should look into graphing your heap usage over time.
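The log message itself is just a heap-fraction check against the
flush_largest_memtables_at threshold (0.75 by default, which is why the
0.78 reading triggered it). A sketch of that heuristic, inferred from the
log message rather than taken from Cassandra's source:

```python
def should_flush(used_heap_bytes, max_heap_bytes, threshold=0.75):
    """Return (triggered, fraction) for the heap-usage flush heuristic.

    Mirrors the behavior described by the log line: when used heap
    exceeds threshold * max heap, flush the largest memtables.
    """
    fraction = used_heap_bytes / max_heap_bytes
    return fraction > threshold, fraction
```

Raising flush_largest_memtables_at in cassandra.yaml moves the trigger
point; shrinking memtable and cache sizes lowers the fraction instead.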
I am using cassandra 7.4 and getting these messages.
Heap is 0.7802529021498031 full. You may need to reduce memtable and/or
cache sizes Cassandra will now flush up to the two largest memtables to free
up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if
you don't want Cassandra to do this automatically