topic.
BR
MK
From: Michail Kotsiouros via user
Sent: Thursday, May 11, 2023 14:08
To: user@cassandra.apache.org
Subject: RE: Questions about high read latency and related metrics
Hello Erick,
No, the Max/Min/Mean vs Histogram difference is clear.
What confuses me is the description of those metrics. Apologies
if those questions sound trivial.
BR
MK
From: Erick Ramirez
Sent: Thursday, May 11, 2023 13:16
To: user@cassandra.apache.org; Michail Kotsiouros
Subject: Re: Questions about high read latency and related metrics
Is it the concept of histograms that's not clear? Something else?
>
Hello Erick,
Thanks a lot for the immediate reply, but the difference between those 2
metrics is still not clear to me.
BR
MK
From: Erick Ramirez
Sent: Thursday, May 11, 2023 13:04
To: user@cassandra.apache.org
Subject: Re: Questions about high read latency and related metrics
The min/max/mean partition sizes are the sizes in bytes which are the same
statistics reported by nodetool tablestats.
EstimatedPartitionSizeHistogram is the distribution of partition sizes
within specified ranges (percentiles) and is the same histogram reported by
nodetool tablehistograms (in the Partition Size column).
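For reference, a minimal sketch of how the two views can be pulled from a node (ks.tbl is a placeholder keyspace/table name, not one from this thread):

  nodetool tablestats ks.tbl | grep -i 'compacted partition'   # min/max/mean partition size in bytes
  nodetool tablehistograms ks.tbl                              # percentile distribution, including the Partition Size column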
impact on read latency.
What would be the appropriate metric to monitor:
PartitionSize or EstimatedPartitionSizeHistogram?
BR
Michail Kotsiouros
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair = 'BLOCKING'
AND speculative_retry = '99p';
-Joe
On 11/29/2021 11:22 AM, Joe Obernberger wrote:
I have an 11 node cluster and am experiencing high read latency on one
table. This table has ~112 million rows:
nodetool tablehistograms doc.origdoc
doc/origdoc histograms
Percentile      Read Latency    Write Latency    SSTables    Partition Size    Cell Count
                (micros)        (micros)                     (bytes)
;
> https://issues.apache.org/jira/browse/CASSANDRA-16465
>
> On Fri, Feb 19, 2021 at 10:10, Ahmed Eljami wrote:
>
>> Hi folks,
>>
>> If this can help, we encountered the same behaviour with 3.11.9. We are
>> using LCS.
>> After upgrading from 3.11.3 to 3.11.9 in a bench environment, Cassandra
>> read latency at the 99th percentile is multiplied by ~3.
>
> We are planning a second test with 3.11.6, I'll send you the results when
> it's done.
> Cheers,
>
>
>
> On Mon, Feb 15, 2021 at 19:55, Jai Bheemsen Rao Dhanwada <
>
Hi folks,
If this can help, we encountered the same behaviour with 3.11.9. We are
using LCS.
After upgrading from 3.11.3 to 3.11.9 in a bench environment, Cassandra
read latency at the 99th percentile is multiplied by ~3.
We are planning a second test with 3.11.6, I'll send you the results when
it's done.
version 3.11.6 to either
> > 3.11.7 or 3.11.8 I experience a significant increase in read latency
>
> Any update here? Does version 3.11.10 provide any improvement?
>
> Thanks,
> Johannes
>
> ---
Hi Nico,
On Mon, Sep 14, 2020 at 03:51PM +0200, Nicolai Lune Vest wrote:
> after upgrading my Cassandra nodes from version 3.11.6 to either
> 3.11.7 or 3.11.8 I experience a significant increase in read latency
Any update here? Does version 3.11.10 provide any improvement?
Thanks,
Johannes
Hi,
I'm not sure if this will help, but I tried today to change one node to
3.11.9 from 3.11.6. We are NOT using TWCS. Very heavy read pattern, almost
no writes, with constant performance test load. The 99th percentile Cassandra
read latency increased significantly, but NOT on the node where I changed the
version.
Thanks Paulo! I'll give it a try and let you know.
-"Paulo Motta" schrieb: -
An: user@cassandra.apache.org
Von: "Paulo Motta"
Datum: 03.12.2020 13:39
Betreff: Re: Increased read latency with Cassandra >= 3.11.7
As a workaround if your TWCS table i
kload (schema, replication settings, CL, query, etc)
> to facilitate investigation.
>
> On Wed, Dec 2, 2020 at 12:44, Nicolai Lune Vest <
> nicolai.v...@lancom.de> wrote:
>
>> Hi,
>>
>> we performed a test run with 3.11.9. Unfortunately we do not experience a
>> difference regarding increased read latency at some of our tables.
On Wed, Dec 2, 2020 at 12:44, Nicolai Lune Vest <
nicolai.v...@lancom.de> wrote:
> Hi,
>
> we performed a test run with 3.11.9. Unfortunately we do not experience a
> difference regarding increased read latency at some of our tables.
>
> Do others experience the same or
Hi,
we performed a test run with 3.11.9. Unfortunately we do not experience a
difference regarding increased read latency at some of our tables.
Do others experience the same or similar behavior, and do you perhaps have a
solution?
Kind regards,
Nico
-"Nicolai Lune Vest"
. I'll let you know about the results.
Cheers
Nico
-"Jai Bheemsen Rao Dhanwada" schrieb: -
An: "user@cassandra.apache.org"
Von: "Jai Bheemsen Rao Dhanwada"
Datum: 05.11.2020 17:36
Betreff: Increased read latency with Cassandra >= 3.11.7
Hello Nico
.
>
> On Wed, Nov 4, 2020 at 14:53, Johannes Weißl wrote:
>
>> Hi Nico,
>>
>> On Mon, Sep 14, 2020 at 03:51PM +0200, Nicolai Lune Vest wrote:
>> > after upgrading my Cassandra nodes from version 3.11.6 to either
>> > 3.11.7 or 3.11.8 I experience a
rsion 3.11.6 to either
> > 3.11.7 or 3.11.8 I experience a significant increase in read latency.
>
> Have you (or others) any update on that? Does the version 3.11.9
> released today provide any improvement?
>
> Thanks,
> Johannes
>
> -
Hi Nico,
On Mon, Sep 14, 2020 at 03:51PM +0200, Nicolai Lune Vest wrote:
> after upgrading my Cassandra nodes from version 3.11.6 to either
> 3.11.7 or 3.11.8 I experience a significant increase in read latency.
Have you (or others) any update on that? Does the version 3.11.9
released today provide any improvement?
Hi Maxim,
unfortunately no. We decided to stay on 3.11.6 for production until we
find a solution, because our system is latency sensitive too.
From what I can see, the most impact on read latency is on the tables using
the time windowed compaction strategy. Tables with e.g. leveled compaction
r Jambhulkar" schrieb: -
>> An: user@cassandra.apache.org
>> Von: "Sagar Jambhulkar"
>> Datum: 14.09.2020 16:25
>> Betreff: Re: Increased read latency with Cassandra >= 3.11.7
>>
>>
>> Maybe compare the cache size see if anything diff
> To: user@cassandra.apache.org
> From: "Sagar Jambhulkar"
> Date: 14.09.2020 16:25
> Subject: Re: Increased read latency with Cassandra >= 3.11.7
>
>
> Maybe compare the cache sizes to see if anything is different between the two versions?
>
>
> On Mon, 14 Sep 2020, 19:21 Nicolai Lune Vest,
>
Maybe compare the cache sizes to see if anything is different between the two versions?
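A quick way to make that comparison from the command line (a sketch; the cassandra.yaml path below is an assumption, adjust it for your install):

  nodetool info | grep -i cache                   # key/row/counter cache sizes, capacities and hit rates
  grep -i cache /etc/cassandra/cassandra.yaml     # configured cache capacities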
On Mon, 14 Sep 2020, 19:21 Nicolai Lune Vest,
wrote:
> Dear Cassandra community,
>
> after upgrading my Cassandra nodes from version 3.11.6 to either 3.11.7
> or 3.11.8 I experience a significant increase in
Dear Cassandra community,
after upgrading my Cassandra nodes from version 3.11.6 to either 3.11.7 or
3.11.8 I experience a significant increase in read latency.
With 3.11.6 average read latency is ~0.35 ms. With 3.11.7 average read
latency increases to ~2.9 ms, almost 10 times worse! This
quires a C* process reboot at least around 2.2.8. Is
> this true?
>
>
>
>
>
>
> Thank you
>
>
>
> *From: *Nitan Kainth
> *Sent: *Monday, June 11, 2018 10:40 AM
> *To: *user@cassandra.apache.org
> *Subject: *Re: Read Latency Doubles After Shrin
8. Is
> this true?
>
>
>
> Thank you
>
> From: Nitan Kainth
> Sent: Monday, June 11, 2018 10:40 AM
> To: user@cassandra.apache.org
> Subject: Re: Read Latency Doubles After Shrinking Cluster and Never Recovers
>
> I think it would because it Cassandra will proce
Subject: Re: Read Latency Doubles After Shrinking Cluster and Never Recovers
I think it would, because Cassandra will process more sstables to create
responses to read queries.
Now, after cleanup, if the data volume is the same and compaction has been
running, I can’t think of any more diagnostic steps.
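A rough sketch of the checks implied here (ks and ks.tbl are placeholder names; on older releases the commands are cfstats/cfhistograms):

  nodetool cleanup ks                 # drop data this node no longer owns after topology changes
  nodetool tablestats ks.tbl          # watch "SSTable count" and local read latency
  nodetool tablehistograms ks.tbl     # the SSTables column shows sstables touched per read, by percentile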
d it have impacted read latency the
> fact that some nodes still have sstables that they no longer need?
>
> Thanks
>
>
> Thank you
>
> From: Nitan Kainth
> Sent: Monday, June 11, 2018 10:18 AM
> To: user@cassandra.apache.org
> Subject: Re: Rea
Yes, we did, after adding the three nodes back, and a full cluster repair as well.
But even if we didn't run cleanup, would the fact that some nodes still have
sstables that they no longer need have impacted read latency?
Thanks
Thank you
From: Nitan Kainth
Sent: Monday
Did you run cleanup too?
On Mon, Jun 11, 2018 at 10:16 AM, Fred Habash wrote:
> I have hit dead-ends every where I turned on this issue.
>
> We had a 15-node cluster that was doing 35 ms all along for years. At
> some point, we made a decision to shrink it to 13. Read latency rose
I have hit dead-ends everywhere I turned on this issue.
We had a 15-node cluster that was doing 35 ms all along for years. At some
point, we made a decision to shrink it to 13. Read latency rose to near 70
ms. Shortly after, we decided this was not acceptable, so we added the
three nodes back
Hi Jeff,
Thank you very much for your response.
Your considerations are definitely right but, at this point, I just want to
consider the Cassandra response time on different Azure VM sizes.
Yes, the YCSB GC can impact it, but the total time that YCSB spent on
GC is ~3% of the total experiment
> On Mar 5, 2018, at 6:52 AM, D. Salvatore wrote:
>
> Hello everyone,
> I am benchmarking a Cassandra installation on Azure composed of 4 nodes
> (Standard_D2S_V3 - 2vCPU and 8GB ram) with a replication factor of 2.
Bit smaller than most people would want to run in production.
> To benchma
Hello everyone,
I am benchmarking a Cassandra installation on Azure composed of 4 nodes
(Standard_D2S_V3 - 2vCPU and 8GB ram) with a replication factor of 2.
To benchmark this testbed, I am using a single YCSB instance with the
workload C (100% read request), a Consistency level ONE and only 10 cli
e Cassandra Consulting
http://www.thelastpickle.com
2018-03-02 14:42 GMT+00:00 Fd Habash :
> This is a 2.8.8. cluster with three AWS AZs, each with 4 nodes.
>
>
>
> Few days ago we noticed a single node’s read latency reaching 1.5 secs
> there was 8 others with read latencies
This is a 2.8.8 cluster with three AWS AZs, each with 4 nodes.
A few days ago we noticed a single node’s read latency reaching 1.5 secs; there
were 8 others with read latencies going up near 900 ms.
This single node was a seed node and it was running a ‘repair -pr’ at the time.
We intervened as
>>>
>>>
>>>
>>> When shrinking the cluster, the ‘nodetool decommission’ was uneventful. It
>>> completed successfully with no issues.
>>>
>>>
>>>
>>> What could possibly cause repairs to cause this impact following cluster
>>> downsizing? Taking three nodes out does not seem compatible with such a
>>> drastic effect on repair and read latency.
>>>
>>>
>>>
>>> Any expert insights will be appreciated.
>>>
>>>
>>> Thank you
>>>
>>>
>>>
>>
>>
_throughput_mb_per_sec at 64. The /data dir on the nodes is
>>>> around ~500GB at 44% usage.
>>>>
>>>>
>>>>
>>>> When shrinking the cluster, the ‘nodetool decommission’ was uneventful.
>>>> It completed successfully with no issues.
>>>>
>>>>
>>>>
>>>> What could possibly cause repairs to cause this impact following
>>>> cluster downsizing? Taking three nodes out does not seem compatible with
>>>> such a drastic effect on repair and read latency.
>>>>
>>>>
>>>>
>>>> Any expert insights will be appreciated.
>>>>
>>>>
>>>> Thank you
>>>>
>>>>
>>>>
>>>
>>>
>
r_sec is set at 200 and
>>> compaction_throughput_mb_per_sec at 64. The /data dir on the nodes is
>>> around ~500GB at 44% usage.
>>>
>>>
>>>
>>> When shrinking the cluster, the ‘nodetool decommission’ was uneventful. It
>>> completed successfully
no issues.
>>
>>
>>
>> What could possibly cause repairs to cause this impact following cluster
>> downsizing? Taking three nodes out does not seem compatible with such a
>> drastic effect on repair and read latency.
>>
>>
>>
>> Any expert insights will be appreciated.
>>
>>
>> Thank you
>>
>>
>>
>
>
>>
>> What could possibly cause repairs to cause this impact following cluster
>> downsizing? Taking three nodes out does not seem compatible with such a
>> drastic effect on repair and read latency.
>>
>>
>>
>> Any expert insights will be appreciated.
>>
>>
>> Thank you
>>
>>
>>
>
>
luster, the ‘nodetool decommission’ was uneventful. It
> completed successfully with no issues.
>
> What could possibly cause repairs to cause this impact following cluster
> downsizing? Taking three nodes out does not seem compatible with such a
> drastic effect on repair
er, the ‘nodetool decommission’ was uneventful. It
> completed successfully with no issues.
>
>
>
> What could possibly cause repairs to cause this impact following cluster
> downsizing? Taking three nodes out does not seem compatible with such a
> drastic effect on repair and read laten
cluster, the ‘nodetool decommission’ was uneventful. It
completed successfully with no issues.
What could possibly cause repairs to cause this impact following cluster
downsizing? Taking three nodes out does not seem compatible with such a drastic
effect on repair and read latency.
Any expert
cluster have 3 nodes in DC1 and 3 nodes in DC2
> * The keyspace is originally created in DC1 only with RF=2
> * The client had good read latency about 40 ms of 99 percentile under 100
> requests/sec (measured at the client side)
> * Then keyspace is updated with 2-DC and RF=3 for each DC
>
I'm using the Cassandra Java driver to access a small Cassandra cluster.
* The cluster has 3 nodes in DC1 and 3 nodes in DC2
* The keyspace is originally created in DC1 only with RF=2
* The client had good read latency, about 40 ms at the 99th percentile, under 100
requests/sec (measured at the client side)
08080808080 ORDER BY time_token ASC LIMIT 2000
Let's say we start from scratch, and let's say I do get 2000 rows for the 12
keys in 100 ms (an arbitrary easy number). What would the Read Latency from
"nodetool cfstats keyspace.event_index" say? Is it 100/2000 = 0.05 ms (by
https://www.youtube.com/watch?v=7B_w6YDYSwA
On Sun, Sep 27, 2015 at 6:20 PM Jaydeep Chovatia
wrote:
> Read requires avg. 6 sstables and my read latency is 42 ms. so on avg. we
> can say Cassandra is taking 7ms to process data from one sstable *which
> is entirely in memory*. I think there is s
Read requires avg. 6 sstables and my read latency is 42 ms, so on avg. we
can say Cassandra is taking 7 ms to process data from one sstable *which is
entirely in memory*. I think there is something wrong here. If we go with
this math then we can say Cassandra latency would always be > 7 ms for m
;>>>>
>>>>>> h int,
>>>>>>
>>>>>> i text,
>>>>>>
>>>>>> j text,
>>>>>>
>>>>>> k text,
>>>>>>
>>>>>> l text,
>>>>>>
>>>>
>>>>> h int,
>>>>>
>>>>> i text,
>>>>>
>>>>> j text,
>>>>>
>>>>> k text,
>>>>>
>>>>> l text,
>>>>>
>>>>> m set
>>>>>
>>>>> n bigint
>>>> o bigint
>>>>
>>>> p bigint
>>>>
>>>> q bigint
>>>>
>>>> r int
>>>>
>>>> s text
>>>>
>>>> t bigint
>>>>
>>>> u text
>>>>
>>>> v
>> u text
>>
>> v text
>>
>> w text
>>
>> x bigint
>>
>> y bigint
>>
>> z bigint,
>>
>> primary key ((a, b), c)
>>
>> };
>>
>> - JVM settings about the heap
>>
>> Default settings
>>
>>> v text
>>>
>>> w text
>>>
>>> x bigint
>>>
>>> y bigint
>>>
>>> z bigint,
>>>
>>> primary key ((a, b), c)
>>>
>>> };
>>>
>>> - JVM settings about the heap
..@worldline.com>> wrote:
> Hi,
>
>
>
>
>
> Before speaking about tuning, can you provide some additional information ?
>
>
>
> - Number of req/s
>
> - Schema details
>
> - JVM settings about the heap
- Execution time of the GC
>>
>> Avg. 400ms. I do not see long pauses of GC anywhere in the log file.
>>
>> On Tue, Sep 22, 2015 at 5:34 AM, Leleu Eric
>> wrote:
>>
>>> Hi,
>>>
>>>
>>>
>>>
>>>
>>> Before sp
For read-heavy workloads, JVM GC can cause latency issues (see
http://tech.shift.com/post/74311817513/cassandra-tuning-the-jvm-for-read-heavy-workloads).
If you have frequent minor GCs taking 400 ms, they may increase your read latency.
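A minimal sketch for spotting long GC pauses, assuming a stock package install (the log path is an assumption):

  nodetool gcstats                                               # GC counts and pause times since the last call (recent versions)
  grep -i GCInspector /var/log/cassandra/system.log | tail -20   # pauses long enough for Cassandra to log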
Eric
From: Jaydeep Chovatia [mailto:chovatia.jayd...@gmail.com
bigint
>>
>> z bigint,
>>
>> primary key ((a, b), c)
>>
>> };
>>
>> - JVM settings about the heap
>>
>> Default settings
>>
>> - Execution time of the GC
>>
>> Avg. 400ms. I do not see long pa
;>
>> Before speaking about tuning, can you provide some additional information
>> ?
>>
>>
>>
>> - Number of req/s
>>
>> - Schema details
>>
>> - JVM settings about the heap
>>
>> - Execution time of the GC
Execution time of the GC
>
>
>
> 43ms for a read latency may be acceptable according to the number of
> request per second.
>
>
>
>
>
> Eric
>
>
>
> From: Jaydeep Chovatia [mailto:chovatia.jayd...@gmail.com]
> Sent: Tuesday, September 22
Hi,
Before speaking about tuning, can you provide some additional information ?
- Number of req/s
- Schema details
- JVM settings about the heap
- Execution time of the GC
43 ms of read latency may be acceptable, depending on the number of requests
per second.
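A sketch of commands that can supply most of the requested details (the table name is a placeholder; heap settings live in cassandra-env.sh or jvm.options depending on the version):

  nodetool proxyhistograms            # coordinator-level read/write latency percentiles
  nodetool tpstats                    # completed/pending operations per thread pool, a rough req/s picture
  nodetool cfstats ks.tbl             # per-table read/write counts and local latencies (tablestats on newer releases)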
Hi,
My application issues more read requests than writes. I see that under
load the cfstats read latency for one of the tables is quite high, around 43 ms:
Local read count: 114479357
Local read latency: 43.442 ms
Local write count: 22288868
Local
int,value double, PRIMARY KEY(row_time, attrs, offset)) WITH COMPACT
>>>>>> STORAGE AND bloom_filter_fp_chance=0.01 AND caching='KEYS_ONLY' AND
>>>>>> comment='' AND dclocal_read_repair_chance=0 AND gc_grace_seconds=864000
>>>>
'false' AND default_time_to_live=0 AND
>>>>> speculative_retry='NONE' AND memtable_flush_period_in_ms=0 AND
>>>>> compaction={'class':'DateTieredCompactionStrategy','timestamp_resolution':'MILLISECONDS'}
>
>>>> GB of heap space. So it's timeseries data that I'm doing so I increment
>>>> "row_time" each day, "attrs" is additional identifying information about
>>>> each series, and "offset" is the number of milliseconds into the day
"offset" is the number of milliseconds into the day for
>>> each data point. So for the past 5 days, I've been inserting 3k
>>> points/second distributed across 100k distinct "attrs"es. And now when I
>>> try to run queries on this data that look like
>>
information about
>> each series, and "offset" is the number of milliseconds into the day for
>> each data point. So for the past 5 days, I've been inserting 3k
>> points/second distributed across 100k distinct "attrs"es. And now when I
>> try to r
5 days, I've been inserting 3k
> points/second distributed across 100k distinct "attrs"es. And now when I
> try to run queries on this data that look like
>
> "SELECT * FROM "default".metrics WHERE row_time = 5 AND attrs =
> 'potatoes_and_jam'"
efault".metrics WHERE row_time = 5 AND attrs =
'potatoes_and_jam'"
it takes an absurdly long time and sometimes just times out. I did "nodetool
cfstats default" and here's what I get:
Keyspace: default
Read Count: 59
Read Latency: 397.125237288135
3k
points/second distributed across 100k distinct "attrs"es. And now when I
try to run queries on this data that look like
"SELECT * FROM "default".metrics WHERE row_time = 5 AND attrs =
'potatoes_and_jam'"
it takes an absurdly long time and sometimes
There are likely 2 things occurring:
1) the cfhistograms error is due to
https://issues.apache.org/jira/browse/CASSANDRA-8028
which is resolved in 2.1.3. Looks like voting is underway for 2.1.3. As
rcoli mentioned, you are running the latest open source release of C*, which
should be treated as beta until a
Hi there,
The compaction remains running with our workload.
We are using SATA HDD RAIDs.
When trying to run cfhistograms on our user_data table, we are getting
this message:
nodetool: Unable to compute when histogram overflowed
Please see what happens when running some queries on this cf:
http:
Hello
You may not be experiencing versioning issues. Do you know if compaction is
keeping up with your workload? The behavior described in the subject is
typically associated with compaction falling behind or having a suboptimal
compaction strategy configured. What does the output of nod
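The question above is cut off; a plausible way to check whether compaction is keeping up (standard nodetool commands; the keyspace name is a placeholder):

  nodetool compactionstats                                   # pending tasks; a steadily growing backlog means compaction is behind
  nodetool cfstats ks.user_data | grep -i 'sstable count'    # a high or climbing sstable count also points at compaction lag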
an Tarbox [mailto:briantar...@gmail.com]
Sent: Friday, January 9, 2015 8:56 AM
To: user@cassandra.apache.org
Subject: Re: High read latency after data volume increased
C* seems to have more than its share of "version x doesn't work, use version y
" type issues
On Thu, Jan 8, 2015
C* seems to have more than its share of "version x doesn't work, use
version y " type issues
On Thu, Jan 8, 2015 at 2:23 PM, Robert Coli wrote:
> On Thu, Jan 8, 2015 at 11:14 AM, Roni Balthazar
> wrote:
>
>> We are using C* 2.1.2 with 2 DCs. 30 nodes DC1 and 10 nodes DC2.
>>
>
> https://eng
On Thu, Jan 8, 2015 at 6:38 PM, Roni Balthazar
wrote:
> We downgraded to 2.1.1, but got the very same result. The read latency is
> still high, but we figured out that it happens only using a specific
> keyspace.
>
Note that downgrading is officially unsupported, but is probably
Hi Robert,
We downgraded to 2.1.1, but got the very same result. The read latency is
still high, but we figured out that it happens only using a specific
keyspace.
Please see the graphs below...
Trying another keyspace with 600+ reads/sec, we are getting the acceptable
~30ms read latency.
Let
On Thu, Jan 8, 2015 at 11:14 AM, Roni Balthazar
wrote:
> We are using C* 2.1.2 with 2 DCs. 30 nodes DC1 and 10 nodes DC2.
>
https://engineering.eventbrite.com/what-version-of-cassandra-should-i-run/
2.1.2 in particular is known to have significant issues. You'd be better
off running 2.1.1 ...
Hi there,
We are using C* 2.1.2 with 2 DCs. 30 nodes DC1 and 10 nodes DC2.
While our data volume is increasing (34 TB now), we are running into
some problems:
1) Read latency is around 1000 ms when running 600 reads/sec (DC1
CL.LOCAL_ONE). At the same time the load average is about 20-30 on all
@cassandra.apache.org
Cc: Chris Lohfink
Subject: Re: no change observed in read latency after switching from EBS to SSD
storage
It is possible this is CPU bound. In 2.1 we have optimised the comparison of
clustering columns
(CASSANDRA-5417<https://issues.apache.org/jira/browse/CASSANDRA-5417>),
>
>Mohammed
>
>From:Chris Lohfink [mailto:clohf...@blackbirdit.com]
>Sent: Wednesday, September 17, 2014 7:17 PM
>
>To: user@cassandra.apache.org
>Subject: Re: no change observed in read latency after switching from EBS to
>SSD storage
>
>"Read 193311 live and
>
>
>
> *From:* Chris Lohfink [mailto:clohf...@blackbirdit.com]
> *Sent:* Wednesday, September 17, 2014 7:17 PM
>
> *To:* user@cassandra.apache.org
> *Subject:* Re: no change observed in read latency after switching from
> EBS to SSD storage
>
>
>
> "Rea
en in that case reading from
a local SSD should have been a lot faster than reading from non-provisioned EBS.
Mohammed
From: Chris Lohfink [mailto:clohf...@blackbirdit.com]
Sent: Wednesday, September 17, 2014 7:17 PM
To: user@cassandra.apache.org
Subject: Re: no change observed in read latency
> 21:57:16,916 | 10.10.100.5 | 86494
> Merging data from memtables and 3 sstables | 21:57:16,916 | 10.10.100.5 |
> 86522
> Read 193311 live and 0 tombstoned cells | 21:57:24,552 | 10.10.100.5 |
> 7722425
> Request complete | 21:57:29,074 | 10.10.100.5 |
10.100.5 | 12244832
Mohammed
From: Alex Major [mailto:al3...@gmail.com]
Sent: Wednesday, September 17, 2014 3:47 AM
To: user@cassandra.apache.org
Subject: Re: no change observed in read latency after switching from EBS to SSD
storage
When you say you moved from EBS to SSD, do you mean the
On Tue, Sep 16, 2014 at 10:00 PM, Mohammed Guller
wrote:
> The 10 seconds latency that I gave earlier is from CQL tracing. Almost 5
> seconds out of that was taken up by the “merge memtable and sstables” step.
> The remaining 5 seconds are from “read live and tombstoned cells.”
>
Could you paste
s running on the same node.
>
>
>
> Is there any performance tuning parameter in the cassandra.yaml file for
> large reads?
>
>
>
> Mohammed
>
>
>
> *From:* Robert Coli [mailto:rc...@eventbrite.com]
> *Sent:* Tuesday, September 16, 2014 5:42 PM
> *To:* user@ca
...@eventbrite.com]
Sent: Tuesday, September 16, 2014 5:42 PM
To: user@cassandra.apache.org
Subject: Re: no change observed in read latency after switching from EBS to SSD
storage
On Tue, Sep 16, 2014 at 5:35 PM, Mohammed Guller
<moham...@glassbeam.com> wrote:
Does anyone have insight as to why we
> wrote:
>
>
> Hi -
>
> We are running Cassandra 2.0.5 on AWS on m3.large instances. These
> instances were using EBS for storage (I know it is not recommended). We
> replaced the EBS storage with SSDs. However, we didn't see any change in
> read latency. A query tha
not recommended). We replaced the EBS
storage with SSDs. However, we didn't see any change in read latency. A query
that took 10 seconds when data was stored on EBS still takes 10 seconds even
after we moved the data directory to SSD. It is a large query returning 200,000
CQL rows from a s
in San Jose area or remote.
> Mailbox dimensions: 10"x12"x14"
>
> --
> *From:* Robert Coli
> *To:* "user@cassandra.apache.org"
> *Sent:* Tuesday, September 16, 2014 5:42 PM
> *Subject:* Re: no change observed in read laten
Sent: Tuesday, September 16, 2014 5:42 PM
Subject: Re: no change observed in read latency after switching from EBS to SSD
storage
On Tue, Sep 16, 2014 at 5:35 PM, Mohammed Guller wrote:
Does anyone have insight as to why we don't see any performance impact on the
reads going from E
On Tue, Sep 16, 2014 at 5:35 PM, Mohammed Guller
wrote:
> Does anyone have insight as to why we don't see any performance impact on
> the reads going from EBS to SSD?
>
What does it say when you enable tracing on this CQL query?
10 seconds is a really long time to access anything in Cassandra.
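For anyone following along, this is roughly what enabling tracing looks like in cqlsh (the query below is a generic placeholder, not the original one):

  TRACING ON;
  SELECT * FROM ks.tbl WHERE pk = 'some-key' LIMIT 100;
  TRACING OFF;

cqlsh then prints a per-step trace with elapsed microseconds, which shows where the 10 seconds are being spent.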
Hi -
We are running Cassandra 2.0.5 on AWS on m3.large instances. These instances
were using EBS for storage (I know it is not recommended). We replaced the EBS
storage with SSDs. However, we didn't see any change in read latency. A query
that took 10 seconds when data was stored on EBS
#x27; AND
> compaction={'sstable_size_in_mb': '160', 'class':
> 'LeveledCompactionStrategy'} AND
> compression={'sstable_compression': 'SnappyCompressor'};
>
> I am noticing that the read latency is very high considering whe
_cache_on_flush='false' AND
compaction={'sstable_size_in_mb': '160', 'class':
'LeveledCompactionStrategy'} AND
compression={'sstable_compression': 'SnappyCompressor'};
I am noticing that the read latency is very high considering wh
> The spikes in latency don’t seem to be correlated to an increase in reads.
> The cluster usually handles a maximum workload of 4200
> reads/sec per node, with writes being significantly less, at ~200/sec per
> node. Usually it will be fine with this, with read latencies at around