... throughput.

Regards,
Eric

From: DuyHai Doan [mailto:doanduy...@gmail.com]
Sent: Wednesday, 24 September 2014 00:10
To: user@cassandra.apache.org
Subject: Re: CPU consumption of Cassandra

Nice catch Daniel. The comment from Sylvain explains a lot!
On Tue, Sep 23, 2014 at 11:33 PM, Daniel Chia wrote:

>> ... limit 1" and "select * from
>> owner_to_buckets where owner = ? and tenantid = ? limit 10").
>> Does Cassandra perform an extra read when the limit is bigger than the
>> available data (even if the partition key contains only one single value
>> in the clustering column)?
>> If the amount of data is the same, how can we explain the difference in
>> CPU consumption?
>>
>> Regards,
>> Eric
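For readers following along, the two statements being compared were of the shape below. The full schema never appears in the thread, so the table definition is a hypothetical reconstruction from the names mentioned (owner_to_buckets, owner, tenantid), included only to make the question concrete:

```cql
-- Hypothetical schema: only the table and column names appear in the
-- thread; the key layout and the bucket column are illustrative guesses.
CREATE TABLE owner_to_buckets (
    owner    text,
    tenantid text,
    bucket   text,
    PRIMARY KEY ((owner, tenantid), bucket)
);

-- The two queries whose CPU cost differed, even though both return the
-- same rows when the partition holds a single clustering value:
SELECT * FROM owner_to_buckets WHERE owner = ? AND tenantid = ? LIMIT 1;
SELECT * FROM owner_to_buckets WHERE owner = ? AND tenantid = ? LIMIT 10;
```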
________________________________
From: Chris Lohfink [clohf...@blackbirdit.com]
Sent: Tuesday, 23 September 2014 19:23
To: user@cassandra.apache.org
Subject: Re: CPU consumption of Cassandra

Well, first off you shouldn't run the stress tool on the node you're testing.
Give it its own box.

With RF=N=2 your ...
> ... and can easily exceed 200 entries. According to the Cassandra
> documentation, collections have a size limited to 64KB, so it is probably
> not a solution in my case. :(
>
> Regards,
> Eric

From: Chris Lohfink [mailto:clohf...@blackbirdit.com]
Sent: Monday, 22 September 2014 22:03
To: user@cassandra.apache.org
Subject: Re: CPU consumption of Cassandra
Eric,

We have a new stress tool to help you share your schema for wider
benchmarking. See
http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema

If you wouldn't mind creating a yaml for your schema I would be happy to
take a look.

-Jake

On Mon, Sep 22, 2014 ...
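Following the blog post linked above, a profile for the table discussed in this thread might look like the sketch below. Everything in it is an assumption — the real schema, field sizes, and distributions were never posted — so it only illustrates the yaml shape the Cassandra 2.1 stress tool expects:

```yaml
# Hypothetical stress profile; schema and distributions are guesses,
# not taken from the original post.
keyspace: stresscql
keyspace_definition: |
  CREATE KEYSPACE stresscql
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};

table: owner_to_buckets
table_definition: |
  CREATE TABLE owner_to_buckets (
      owner    text,
      tenantid text,
      bucket   text,
      PRIMARY KEY ((owner, tenantid), bucket)
  );

columnspec:
  - name: owner
    size: uniform(8..32)
  - name: tenantid
    size: fixed(16)
  - name: bucket
    cluster: uniform(1..200)   # "can easily exceed 200 entries" per the thread

insert:
  partitions: fixed(1)

queries:
  lookup:
    cql: select * from owner_to_buckets where owner = ? and tenantid = ? limit 10
    fields: samerow
```

It would then be run from a separate box (per Chris's advice) with something like `cassandra-stress user profile=owner_to_buckets.yaml ops(insert=1,lookup=1)`.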
It's going to depend a lot on your data model, but 5-6k is on the low end of
what I would expect. N=RF=2 is not really something I would recommend. That
said, 93GB is not much data, so the bottleneck may exist more in your data
model, queries, or client.

What profiler are you using? The CPU on t...