Re: Cassandra 1.2, wide row and secondary index question

2013-01-15 Thread Sylvain Lebresne
On Mon, Jan 14, 2013 at 11:55 PM, aaron morton wrote:

> Sylvain,
> Out of interest if the select is…
>
> select * from test where interval = 7 and severity = 3 order by id desc;
>
> Would the ordering be a no-op or would it still run?
>

Yes, as Shahryar said, this is currently rejected because ORDER BY is not
supported on secondary index queries (because we don't know how to do them
efficiently). To be honest, the example here is a special case where we could,
in theory, support ORDER BY because the partition key is fixed by the query. I
guess that's a todo.

> Or, more generally, does including an ORDER BY clause that matches the
> CLUSTERING ORDER BY DDL clause incur overhead?
>

In general no, there is no overhead. The one case where there is an
overhead is with queries where the partition key is restricted by an IN (i.e.
when we do the equivalent of a multiget), because in that case we do query all
the partitions and then merge-sort the results. But in that case, not using an
ORDER BY will *not* yield the same result as using an ORDER BY that matches
the CLUSTERING ORDER BY, so I suppose it shouldn't come as a surprise that
there is an overhead.
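
For instance, with the schema from this thread, a (hypothetical) query like

select * from test where interval in (7, 8) order by id desc;

reads both partitions and merge-sorts their rows (assuming id is the first
clustering column, as the query quoted above suggests), whereas the same
query without the ORDER BY returns the rows grouped per partition rather
than globally sorted.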

--
Sylvain


> Cheers
> A
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 15/01/2013, at 6:56 AM, Sylvain Lebresne  wrote:
>
> On Mon, Jan 14, 2013 at 5:04 PM, Shahryar Sedghi wrote:
>
>> Can I always count on this order, or may it change in the future?
>>
>
> I would personally rely on it. I don't see any reason why we would change
> it internally, and besides, I suspect you won't be the only one to rely on
> it, so we won't take the chance of breaking it.
>
> However, I do note that this holds for Cassandra secondary indexes only.
> Internally, Cassandra has a notion of custom indexes (used by the DataStax
> Solr integration, for instance), and for those indexes the ordering will
> likely not be the same. So if you think you might switch your index to a
> Solr one later on, then maybe it's worth trying to avoid relying on the
> ordering.
>
> --
> Sylvain
>
>
>>
>> Thanks in Advance
>>
>> Shahryar
>> --
>> "Life is what happens while you are making other plans." ~ John Lennon
>>
>
>
>


RE: Starting Cassandra

2013-01-15 Thread Sloot, Hans-Peter
I managed to install apache-cassandra-1.2.0-bin.tar.gz 


With java-1.6.0-openjdk-1.6.0.0-1.45.1.11.1.el6.x86_64 I still get the 
segmentation fault.
However, with java-1.7.0-openjdk-1.7.0.3-2.1.0.1.el6.7.x86_64 everything runs
fine.

Regards Hans-Peter

From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: dinsdag 15 januari 2013 1:20
To: user@cassandra.apache.org
Subject: Re: Starting Cassandra

DSE includes Hadoop files. It looks like the installation is broken. I would
start again if possible and/or ask the peeps at DataStax about your particular
OS / JVM configuration.

In the past I've used this to set a particular JVM when multiple ones are 
installed...

update-alternatives --set java /usr/lib/jvm/java-6-sun/jre/bin/java

Cheers

-
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 11/01/2013, at 10:55 PM, "Sloot, Hans-Peter"
<hans-peter.sl...@atos.net> wrote:


Hi,
I removed the openjdk packages, which caused the dse* packages to be
uninstalled too, and installed jdk6u38.

But when I installed the dse packages, yum also downloaded and installed the
openjdk packages.

After that I installed java-1.7.0-openjdk.x86_64.

When starting Cassandra I now get:
-bash-4.1$ /usr/sbin/cassandra -f
xss =  -ea -javaagent:/lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M -Xmn200M 
-XX:+HeapDumpOnOutOfMemoryError -Xss180k
INFO 10:38:21,262 Logging initialized
ERROR 10:38:21,320 Exception encountered during startup
java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at com.datastax.bdp.server.DseServer.(DseServer.java:112)
at com.datastax.bdp.hadoop.mapred.SchemaTool.init(SchemaTool.java:408)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:113)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:389)
at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:350)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.JobConf
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
... 17 more

I did not install Hadoop on this cluster but apparently it wants to use it.

Should I first build a Hadoop cluster?

Regards Hans-Peter


From: Yang Song [mailto:xfil...@gmail.com]
Sent: donderdag 10 januari 2013 19:22
To: user@cassandra.apache.org
Subject: Re: Starting Cassandra

Could you also let us know if switching from OpenJDK to the Oracle JDK indeed
solves the problem?
Thanks!

Yang
2013/1/10 Sloot, Hans-Peter <hans-peter.sl...@atos.net>
I have increased the memory to 4096. It did not help.

It is OpenJDK indeed:
java-1.6.0-openjdk.x86_64    1:1.6.0.0-1.49.1.11.4.el6_3    installed

I will try JDK 1.6.0_38 from oracle.com.

Regards Hans-Peter

From: Vladi Feigin [mailto:vladi...@gmail.com]
Sent: donderdag 10 januari 2013 17:40
To: user@cassandra.apache.org
Subject: Re: Starting Cassandra


Hi

I had this problem with OpenJDK; moving to the Oracle JDK solved the problem.
On Jan 10, 2013 5:23 PM, "Andrea Gazzarini"
<andrea.gazzar...@gmail.com> wrote:
Hi,
I'm running Cassandra with 1.6_24 and everything is working, so probably the
problem is elsewhere. What about your hardware / OS configuration?

On 01/10/2013 04:19 PM, Sloot, Hans-Peter wrote:
The Java version is 1.6_24.

The manual said that 1.7 was not the best choice.

But I will try it.


-Original message-
From: adeel.a

Re: Cassandra 1.2 thrift migration

2013-01-15 Thread Vivek Mishra
Hi,
Is there any document to follow for migrating a Cassandra Thrift API client
to the 1.2 release? Is it backward compatible with previous releases?
While migrating Kundera to Cassandra 1.2, it is complaining about various data
types, giving weird errors like:

While connecting from cassandra-cli:

"
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.lang.AbstractStringBuilder.(AbstractStringBuilder.java:45)
at java.lang.StringBuilder.(StringBuilder.java:80)
at java.math.BigDecimal.getValueString(BigDecimal.java:2885)
at java.math.BigDecimal.toPlainString(BigDecimal.java:2869)
at org.apache.cassandra.cql.jdbc.JdbcDecimal.getString(JdbcDecimal.java:72)
at org.apache.cassandra.db.marshal.DecimalType.getString(DecimalType.java:62)
at org.apache.cassandra.cli.CliClient.printSliceList(CliClient.java:2873)
at org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1486)
at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:272)
at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:210)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:337)
"


And it sometimes results in a server crash.


Any idea whether interoperability between Thrift and CQL should work properly
in 1.2?

-Vivek


Astyanax returns empty row

2013-01-15 Thread Sávio Teles
I'm currently using Astyanax 1.56.21 to retrieve an entire row. My code:

ColumnList result = keyspace.prepareQuery(cf_name)
.getKey(key)
.execute().getResult();

But sometimes Astyanax returns an empty row for a specific key. For
example, on the first attempt Astyanax returns an empty row for a specific
key, but on the second attempt it returns the desired row.
Can someone help me?

Thanks in advance.



-- 
Best regards,
Sávio S. Teles de Oliveira
voice: +55 62 9136 6996
http://br.linkedin.com/in/savioteles
MSc student in Computer Science - UFG
Software Architect
Laboratory for Ubiquitous and Pervasive Applications (LUPA) - UFG


PlayOrm latest release is available in Maven now

2013-01-15 Thread Vikas Goyal
For those who are using PlayOrm for Cassandra, the latest release (1.4.4) is
available in the Maven repo now. It has the following new features:


   - Support for @NoSqlEmbedable for user-defined entities.
   - In SJQL, the ability to:
      - delete rows (DELETE),
      - delete a single column (DELETECOLUMN),
      - query with the IN attribute, and
      - ORDER BY.
   - Astyanax version upgraded to 1.56.18.
   - Storage type of *ToOne changed to composite.
   - The command-line tool now supports @NoSqlEmbedded and @NoSqlEmbedable.
   - Support for DateTime as an index, and for querying JodaTimes.


Re: Astyanax returns empty row

2013-01-15 Thread Hiller, Dean
What is your consistency level set to? If you set it to CL_ONE, you could get
different results. Or is your database constant and unchanging?
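
If it is a consistency issue, the read CL is a one-liner to change in
Astyanax. A hedged sketch, reusing the names from your original snippet
(assumes String column names, RF >= 2, and that writes also go at quorum):

import com.netflix.astyanax.model.ColumnList;
import com.netflix.astyanax.model.ConsistencyLevel;

// Same read as before, but at CL_QUORUM, so a single replica that has not
// yet received the write cannot answer with an empty row on its own.
ColumnList<String> result = keyspace.prepareQuery(cf_name)
        .setConsistencyLevel(ConsistencyLevel.CL_QUORUM)
        .getKey(key)
        .execute().getResult();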

Dean

From: Sávio Teles <savio.te...@lupa.inf.ufg.br>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Tuesday, January 15, 2013 5:43 AM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Astyanax returns empty row


sometimes Astyanax returns an empty row for a specific key. For example, on
the first attempt Astyanax returns an empty row for a specific key, but on the
second attempt it returns the desired row.


Re: error when creating column family using cql3 and persisting data using thrift

2013-01-15 Thread James Schappet
I also saw this while testing the
https://github.com/boneill42/naughty-or-nice example project.




--Jimmy


From:  Kuldeep Mishra 
Reply-To:  
Date:  Tuesday, January 15, 2013 10:29 AM
To:  
Subject:  error when creating column family using cql3 and persisting data
using thrift

Hi,
I am facing the following problem when creating a column family using CQL3
and trying to persist data using Thrift 1.2.0 in cassandra-1.2.0.

Details: 
InvalidRequestException(why:Not enough bytes to read value of component 0)
at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964)
at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950)
at com.impetus.client.cassandra.thrift.ThriftClient.onPersist(ThriftClient.java:157)



Please help me.


-- 
Thanks and Regards
Kuldeep Kumar Mishra
+919540965199




Re: error when creating column family using cql3 and persisting datausing thrift

2013-01-15 Thread Dave Brosius
The statements used to create and populate the data might be mildly useful
for those trying to help.

- Original Message -
From: "Kuldeep Mishra" <kuld.cs.mis...@gmail.com>

write count increase after 1.2 update

2013-01-15 Thread Reik Schatz
Hi, we are running a 1.1.6 (DataStax) test cluster with 6 nodes. After the
recent 1.2 release we have set up a second cluster, also with 6 nodes,
running 1.2 (DataStax).

They are now running in parallel. We noticed an increase in the number of
writes in our monitoring tool (Datadog). The tool uses the write count
statistic from nodetool cfstats, so we ran nodetool cfstats on one node in
each cluster to get an initial write count, then ran it again after 60
seconds. It looks like the 1.2 cluster received about twice the amount of
writes.

The way our application is designed, the writes are idempotent, so we don't
see a size increase. Were there any changes between 1.1.6 and 1.2 that could
explain this behavior?

I know that 1.2 has the concept of virtual nodes, to spread out the data
more evenly. So if the "write count" value were actually the sum of all
writes to all nodes in the cluster, this increase would make sense.

Reik

ps. the clusters are not 100% identical; since bloom filters are now
off-heap, we changed settings for heap size and memtables. Cluster 1.1.6:
heap 8G, memtables 1/3 of heap. Cluster 1.2.0: heap 4G, memtables 2G. Not
sure whether this has an impact on the problem.


Retrieving data between two timestamps

2013-01-15 Thread Renato Marroquín Mogrovejo
Hi all,

I am having some problems while retrieving some events from a column
family I have created.
My column family has been created as follows:

create column family click_event
  WITH comparator = UTF8Type and
  column_metadata = [ {column_name: event, validation_class: UTF8Type} ];

My table is populated as follows:

 list click_events;
---
=> (column=start:2013-01-13 18:14:59.244, value=, timestamp=1358118943979000)
=> (column=stop:2013-01-13 18:15:56.793,
value=323031332d30312d31332031383a31353a35382e333437,
timestamp=1358118960946000)

I have two questions here:
1) What is the timestamp column used for?
2) How can I retrieve this timestamp column using Hector client?

Thanks in advance!


Renato M.


Re: Retrieving data between two timestamps

2013-01-15 Thread Aaron Turner
I don't think so.  Usually you'd use either a Time-UUID or something
like epoch time as the column name to get a range of columns by time
range.
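
A rough Hector sketch of that pattern (all names are hypothetical; assumes
the column family uses a TimeUUIDType comparator and the columns were
written with TimeUUID names):

import java.util.UUID;

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.cassandra.serializers.UUIDSerializer;
import me.prettyprint.cassandra.utils.TimeUUIDUtils;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.ColumnSlice;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.SliceQuery;

// Fetch the columns of one row whose TimeUUID names fall between two epoch
// timestamps (in ms). The column *name* carries the event time here, not
// the internal write timestamp.
static ColumnSlice<UUID, String> eventsBetween(Keyspace ks, String rowKey,
                                               long startMs, long endMs) {
    SliceQuery<String, UUID, String> q = HFactory.createSliceQuery(
            ks, StringSerializer.get(), UUIDSerializer.get(),
            StringSerializer.get());
    q.setColumnFamily("click_event")
     .setKey(rowKey)
     .setRange(TimeUUIDUtils.getTimeUUID(startMs),  // range start
               TimeUUIDUtils.getTimeUUID(endMs),    // range end
               false, 1000);                        // ascending, max 1000 columns
    return q.execute().get();
}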

On Tue, Jan 15, 2013 at 10:46 AM, Renato Marroquín Mogrovejo
 wrote:
> Hi all,
>
> I am having some problems while retrieving some events from a column
> family I have created.
> My column family has been created as follows:
>
> create column family click_event
>   WITH comparator = UTF8Type and
>   column_metadata = [ {column_name: event, validation_class: UTF8Type} ];
>
> My table is populated as follows:
>
>  list click_events;
> ---
> => (column=start:2013-01-13 18:14:59.244, value=, timestamp=1358118943979000)
> => (column=stop:2013-01-13 18:15:56.793,
> value=323031332d30312d31332031383a31353a35382e333437,
> timestamp=1358118960946000)
>
> I have two questions here:
> 1) What is the timestamp column used for?
> 2) How can I retrieve this timestamp column using Hector client?
>
> Thanks in advance!
>
>
> Renato M.



-- 
Aaron Turner
http://synfin.net/ Twitter: @synfinatic
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows
Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.
-- Benjamin Franklin
"carpe diem quam minimum credula postero"


Re: Retrieving data between two timestamps

2013-01-15 Thread Renato Marroquín Mogrovejo
Hi Aaron,

Thanks for answering! Yeah that is what I did but then when looking
into the actual column family created I saw this timestamp column
which Cassandra had created. Are we allowed to use this? What is this
specifically for?
Thanks again for the help!


Renato M.

2013/1/15 Aaron Turner :
> I don't think so.  Usually you'd use either a Time-UUID or something
> like epoch time as the column name to get a range of columns by time
> range.
>
> On Tue, Jan 15, 2013 at 10:46 AM, Renato Marroquín Mogrovejo
>  wrote:
>> Hi all,
>>
>> I am having some problems while retrieving some events from a column
>> family I have created.
>> My column family has been created as follows:
>>
>> create column family click_event
>>   WITH comparator = UTF8Type and
>>   column_metadata = [ {column_name: event, validation_class: UTF8Type} ];
>>
>> My table is populated as follows:
>>
>>  list click_events;
>> ---
>> => (column=start:2013-01-13 18:14:59.244, value=, timestamp=1358118943979000)
>> => (column=stop:2013-01-13 18:15:56.793,
>> value=323031332d30312d31332031383a31353a35382e333437,
>> timestamp=1358118960946000)
>>
>> I have two questions here:
>> 1) What is the timestamp column used for?
>> 2) How can I retrieve this timestamp column using Hector client?
>>
>> Thanks in advance!
>>
>>
>> Renato M.
>
>
>
> --
> Aaron Turner
> http://synfin.net/ Twitter: @synfinatic
> http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & 
> Windows
> Those who would give up essential Liberty, to purchase a little temporary
> Safety, deserve neither Liberty nor Safety.
> -- Benjamin Franklin
> "carpe diem quam minimum credula postero"


Re: Retrieving data between two timestamps

2013-01-15 Thread Aaron Turner
The timestamp is the time the record was inserted into the Cassandra
node.  It's used for conflict resolution, so if two clients insert
different data into the same row/column, Cassandra can pick the
"winner" (most recent timestamp).

You can set it manually on insert, otherwise the node will pick the
current time for you (this is a major reason why you want all your
Cassandra nodes clocks synchronized via NTP by the way).  It's also
available to be read, but I don't recall any API available (Hector or
otherwise) which allows you to search based on the timestamp value.
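
That said, Hector does expose the timestamp on read: HColumn.getClock()
returns it. A minimal sketch (names are hypothetical; the long overload of
createColumn sets the timestamp explicitly):

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.HColumn;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

// Write one column with an explicit timestamp, then read the timestamp back.
static long writeThenReadClock(Keyspace ks) {
    long ts = ks.createClock();  // microsecond clock by default
    Mutator<String> m = HFactory.createMutator(ks, StringSerializer.get());
    m.insert("row1", "click_event",
             HFactory.createColumn("event", "click", ts,
                                   StringSerializer.get(), StringSerializer.get()));

    HColumn<String, String> col = HFactory.createColumnQuery(ks,
                StringSerializer.get(), StringSerializer.get(),
                StringSerializer.get())
            .setColumnFamily("click_event").setKey("row1").setName("event")
            .execute().get();
    return col.getClock();  // the stored write timestamp
}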

On Tue, Jan 15, 2013 at 11:51 AM, Renato Marroquín Mogrovejo
 wrote:
> Hi Aaron,
>
> Thanks for answering! Yeah that is what I did but then when looking
> into the actual column family created I saw this timestamp column
> which Cassandra had created. Are we allowed to use this? What is this
> specifically for?
> Thanks again for the help!
>
>
> Renato M.
>
> 2013/1/15 Aaron Turner :
>> I don't think so.  Usually you'd use either a Time-UUID or something
>> like epoch time as the column name to get a range of columns by time
>> range.
>>
>> On Tue, Jan 15, 2013 at 10:46 AM, Renato Marroquín Mogrovejo
>>  wrote:
>>> Hi all,
>>>
>>> I am having some problems while retrieving some events from a column
>>> family I have created.
>>> My column family has been created as follows:
>>>
>>> create column family click_event
>>>   WITH comparator = UTF8Type and
>>>   column_metadata = [ {column_name: event, validation_class: UTF8Type} ];
>>>
>>> My table is populated as follows:
>>>
>>>  list click_events;
>>> ---
>>> => (column=start:2013-01-13 18:14:59.244, value=, 
>>> timestamp=1358118943979000)
>>> => (column=stop:2013-01-13 18:15:56.793,
>>> value=323031332d30312d31332031383a31353a35382e333437,
>>> timestamp=1358118960946000)
>>>
>>> I have two questions here:
>>> 1) What is the timestamp column used for?
>>> 2) How can I retrieve this timestamp column using Hector client?
>>>
>>> Thanks in advance!
>>>
>>>
>>> Renato M.
>>
>>
>>
>> --
>> Aaron Turner
>> http://synfin.net/ Twitter: @synfinatic
>> http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & 
>> Windows
>> Those who would give up essential Liberty, to purchase a little temporary
>> Safety, deserve neither Liberty nor Safety.
>> -- Benjamin Franklin
>> "carpe diem quam minimum credula postero"



-- 
Aaron Turner
http://synfin.net/ Twitter: @synfinatic
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows
Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.
-- Benjamin Franklin
"carpe diem quam minimum credula postero"


Re: Retrieving data between two timestamps

2013-01-15 Thread Renato Marroquín Mogrovejo
Thanks for the explanation Aaron!


Renato M.

2013/1/15 Aaron Turner :
> The timestamp is the time the record was inserted into the Cassandra
> node.  It's used for conflict resolution, so if two clients insert
> different data into the same row/column, Cassandra can pick the
> "winner" (most recent timestamp).
>
> You can set it manually on insert, otherwise the node will pick the
> current time for you (this is a major reason why you want all your
> Cassandra nodes clocks synchronized via NTP by the way).  It's also
> available to be read, but I don't recall any API available (Hector or
> otherwise) which allows you to search based on the timestamp value.
>
> On Tue, Jan 15, 2013 at 11:51 AM, Renato Marroquín Mogrovejo
>  wrote:
>> Hi Aaron,
>>
>> Thanks for answering! Yeah that is what I did but then when looking
>> into the actual column family created I saw this timestamp column
>> which Cassandra had created. Are we allowed to use this? What is this
>> specifically for?
>> Thanks again for the help!
>>
>>
>> Renato M.
>>
>> 2013/1/15 Aaron Turner :
>>> I don't think so.  Usually you'd use either a Time-UUID or something
>>> like epoch time as the column name to get a range of columns by time
>>> range.
>>>
>>> On Tue, Jan 15, 2013 at 10:46 AM, Renato Marroquín Mogrovejo
>>>  wrote:
 Hi all,

 I am having some problems while retrieving some events from a column
 family I have created.
 My column family has been created as follows:

 create column family click_event
   WITH comparator = UTF8Type and
   column_metadata = [ {column_name: event, validation_class: UTF8Type} ];

 My table is populated as follows:

  list click_events;
 ---
 => (column=start:2013-01-13 18:14:59.244, value=, 
 timestamp=1358118943979000)
 => (column=stop:2013-01-13 18:15:56.793,
 value=323031332d30312d31332031383a31353a35382e333437,
 timestamp=1358118960946000)

 I have two questions here:
 1) What is the timestamp column used for?
 2) How can I retrieve this timestamp column using Hector client?

 Thanks in advance!


 Renato M.
>>>
>>>
>>>
>>> --
>>> Aaron Turner
>>> http://synfin.net/ Twitter: @synfinatic
>>> http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & 
>>> Windows
>>> Those who would give up essential Liberty, to purchase a little temporary
>>> Safety, deserve neither Liberty nor Safety.
>>> -- Benjamin Franklin
>>> "carpe diem quam minimum credula postero"
>
>
>
> --
> Aaron Turner
> http://synfin.net/ Twitter: @synfinatic
> http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & 
> Windows
> Those who would give up essential Liberty, to purchase a little temporary
> Safety, deserve neither Liberty nor Safety.
> -- Benjamin Franklin
> "carpe diem quam minimum credula postero"


Invalid streamId in cql binary protocol when using invalid CL

2013-01-15 Thread Pierre Chalamet
Hello,

Executing a query using an invalid CL with the binary protocol leads to an
invalid response streamId (always 0).

I've created the following issue:
https://issues.apache.org/jira/browse/CASSANDRA-5164

Thanks,
- Pierre


How can OpsCenter show me Read Request Latency where there are no read requests??

2013-01-15 Thread Brian Tarbox
I am making heavy use of DataStax OpsCenter to help tune my system and it's
great.

And yet puzzling. I see my clients do a burst of reads, causing the
OpsCenter Read Requests chart to go up and stay up until the clients finish
doing their reads. The read request latency chart also goes up, but it
stays up even after all the reads are done. At last glance I've had next
to zero reads for 10 minutes but still have a read request latency that's
basically unchanged from when there were actual reads.

How am I to interpret this?

Thanks.

Brian Tarbox


Re: How can OpsCenter show me Read Request Latency where there are no read requests??

2013-01-15 Thread Mikhail Panchenko
I haven't used OpsCenter specifically, so this is a guess: latency metrics
are often based on the last N samples, and what is graphed are percentiles
over those samples (as opposed to a sliding time window). That means the
graph will stay the same until more requests occur, because the data the
percentiles are calculated from is not changing (i.e. no new latency samples
are being added). If that's not it, then I don't know :D
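
A toy illustration of that guess (emphatically not OpsCenter's actual code,
just the last-N-samples pattern):

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Keeps the most recent N latency samples; percentiles are computed over
// whatever is in the window, so with no new reads the reported value
// simply stays where it was.
class LatencyWindow {
    private final Deque<Long> samples = new ArrayDeque<Long>();
    private final int capacity;

    LatencyWindow(int capacity) { this.capacity = capacity; }

    void record(long micros) {
        if (samples.size() == capacity) samples.removeFirst();  // evict oldest
        samples.addLast(micros);
    }

    long percentile(double p) {  // e.g. p = 0.95 for the 95th percentile
        if (samples.isEmpty()) return 0;
        long[] sorted = new long[samples.size()];
        int i = 0;
        for (long s : samples) sorted[i++] = s;
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(p * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }
}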


On Tue, Jan 15, 2013 at 7:28 PM, Brian Tarbox wrote:

> I am making heavy use of DataStax OpsCenter to help tune my system and it's
> great.
>
> And yet puzzling. I see my clients do a burst of reads, causing the
> OpsCenter Read Requests chart to go up and stay up until the clients finish
> doing their reads. The read request latency chart also goes up, but it
> stays up even after all the reads are done. At last glance I've had next
> to zero reads for 10 minutes but still have a read request latency that's
> basically unchanged from when there were actual reads.
>
> How am I to interpret this?
>
> Thanks.
>
> Brian Tarbox
>