Row cache and counters

2012-12-29 Thread André Cruz
Hello.

I was recently having some timeout issues while updating counters, so I turned on 
the row cache for that particular CF. These are its stats:

Column Family: UserQuotas
SSTable count: 3
Space used (live): 2687239
Space used (total): 2687239
Number of Keys (estimate): 22912
Memtable Columns Count: 25766
Memtable Data Size: 180975
Memtable Switch Count: 17
Read Count: 356900
Read Latency: 1.004 ms.
Write Count: 548996
Write Latency: 0.045 ms.
Pending Tasks: 0
Bloom Filter False Positives: 17
Bloom Filter False Ratio: 0.0
Bloom Filter Space Used: 44232
Compacted row minimum size: 125
Compacted row maximum size: 770
Compacted row mean size: 308

Since it is rather small, I was hoping that it would eventually be cached in its 
entirety and the timeouts would go away. I'm updating the counters at CL ONE, so 
I assumed the timeouts were caused by the read step, which the cache should help 
with. But I still get timeouts, and the cache hit rate is rather low:

Row Cache: size 1436291 (bytes), capacity 524288000 (bytes), 125310 
hits, 442760 requests, 0.247 recent hit rate, 0 save period in seconds
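As a sanity check, the lifetime hit rate implied by those counters (hits / requests) can be computed directly; note that the 0.247 nodetool reports is a recent-window figure, so the two need not match:

```shell
# Lifetime row-cache hit rate from the counters above.
hits=125310
requests=442760
awk -v h="$hits" -v r="$requests" \
    'BEGIN { printf "lifetime hit rate: %.3f\n", h / r }'
# prints: lifetime hit rate: 0.283
```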

Am I assuming something wrong about the row cache? Isn't it updated when a 
counter update occurs, or is it just invalidated?

Best regards,
André Cruz

Re: Row cache and counters

2012-12-29 Thread rohit bhatia
Reads still occur during a counter increment at CL ONE, but that read
latency is not counted in the request latency for the write. Your local node
write latency of 45 microseconds is quite fast. What timeout and write request
latency are you seeing? In our deployment we had some issues and traced the
timeouts to ParNew GC collections, which were quite frequent. You might want to
take a look there too.
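One way to check for frequent ParNew pauses is to grep Cassandra's system.log for GCInspector lines. The log path and exact message format below are assumptions (they vary by version and install); this demo runs against a synthetic excerpt so the command shape is clear:

```shell
# Point LOG at your real system.log (often /var/log/cassandra/system.log)
# to check a live node; here we grep a synthetic excerpt.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
INFO [ScheduledTasks:1] 2012-12-29 16:59:01,123 GCInspector.java (line 122) GC for ParNew: 214 ms for 2 collections
INFO [ScheduledTasks:1] 2012-12-29 16:59:11,456 GCInspector.java (line 122) GC for ParNew: 388 ms for 1 collections
EOF
grep -c 'GC for ParNew' "$LOG"   # counts logged ParNew pauses
rm -f "$LOG"
```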




Re: Row cache and counters

2012-12-29 Thread André Cruz
On 29/12/2012, at 16:59, rohit bhatia  wrote:

> Reads during a write still occur during a counter increment with CL ONE, but 
> that latency is not counted in the request latency for the write. Your local 
> node write latency of 45 microseconds is pretty quick. what is your timeout 
> and the write request latency you see.

Most of the time the increments are pretty quick, in the millisecond range. I 
have an 8 s timeout, and the timeouts sometimes happen in bursts.

> In our deployment we had some issues and we could trace the timeouts to 
> parnew gc collections which were quite frequent. You might just want to take 
> a look there too.

What can we do about that? Which settings did you tune?

Thanks,
André

Column Family migration/tombstones

2012-12-29 Thread Mike

Hello,

We are undergoing a change to our internal data model that will result in 
the eventual deletion of over a hundred million rows from a Cassandra 
column family.  From what I understand, this will generate tombstones, 
which will be cleaned up during compaction once gc_grace_seconds has 
elapsed (default: 10 days).


A couple of questions:

1) As one can imagine, the index and bloom filter for this column family 
are large.  Am I correct to assume that bloom filter and index space will 
not be reduced until after gc_grace_seconds?


2) If I manually run repair across the cluster, is there a process I 
can use to safely remove these tombstones before gc_grace_seconds, to free 
this memory sooner?


3) Any words of warning when undertaking this?
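On question 2, a pattern sometimes used (sketched below with hypothetical keyspace/CF names, as an illustration rather than a recommendation) is to temporarily lower gc_grace, force a major compaction so droppable tombstones are purged, then restore the setting. This is only safe if repair has completed and no node stays down longer than the lowered window:

```shell
# Ops sketch only; "MyKeyspace" and "MyCF" are placeholders.
# 1) Temporarily lower gc_grace (cassandra-cli syntax for 1.1.x):
#      update column family MyCF with gc_grace = 3600;
# 2) Force a major compaction so expired tombstones are dropped:
#      nodetool -h localhost compact MyKeyspace MyCF
# 3) Restore gc_grace to its original value (default 864000 s = 10 days).
```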

We are running Cassandra 1.1.2 on a 6-node cluster with a replication 
factor of 3.  We use LOCAL_QUORUM consistency for all operations.


Thanks!
-Mike


Re: Row cache and counters

2012-12-29 Thread rohit bhatia
I assume you mean 8 seconds and not 8 ms. That's pretty huge to be caused by
GC. Is there a lot of load on your servers? You might also want to check for
memory contention.

Regarding GC: since it's ParNew, all you can really do is increase the heap
and young-gen sizes, or adjust the tenuring threshold. But that can't be the
reason for an 8-second timeout.
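For reference, those knobs live in conf/cassandra-env.sh; the values below are illustrative assumptions, not recommendations:

```shell
# conf/cassandra-env.sh (illustrative values; tune for your workload)
MAX_HEAP_SIZE="8G"      # total JVM heap
HEAP_NEWSIZE="800M"     # young generation (ParNew) size
# Let objects survive more young-gen collections before promotion:
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=2"
```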




Re: Row cache and counters

2012-12-29 Thread Mohit Anchlia
Can you post your GC settings? Also check the logs and see what they say.

Also post how many writes and reads you are doing, along with the average row size.




Create Keyspace failing in 1.2rc2 with syntax error?

2012-12-29 Thread Adam Venturella
When I create a keyspace with a SimpleStrategy as outlined here:
https://cassandra.apache.org/doc/cql3/CQL.html#createKeyspaceStmt


CREATE KEYSPACE Test
   WITH strategy_class = SimpleStrategy
AND strategy_options:replication_factor = 1;


I receive the following error:

Bad Request: line 3:20 mismatched input ':' expecting '='


I'm running the following cqlsh:


Connected to Test Cluster at localhost:9160.

[cqlsh 2.3.0 | Cassandra 1.2.0~rc2 | CQL spec 3.0.0 | Thrift protocol 19.35.0]


Re: Create Keyspace failing in 1.2rc2 with syntax error?

2012-12-29 Thread Adam Venturella
Never mind...

help CREATE_KEYSPACE; works wonders:

CREATE KEYSPACE 
WITH replication = {'class':'SimpleStrategy',
'replication_factor':3};

=)





Re: Create Keyspace failing in 1.2rc2 with syntax error?

2012-12-29 Thread Dave Brosius

The format has changed; check the help in cqlsh:

CREATE KEYSPACE Test WITH replication = {'class':'SimpleStrategy', 
'replication_factor':1};

