Thanks, Eric. Is there a way to start compaction operations manually?
I'm thinking of doing this after loading the data and before starting the
run phase of the benchmark.
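
Something along the lines of the following is what I have in mind, assuming
nodetool exposes it (the keyspace and column family names below are just
placeholders for the ones used by the benchmark):

    # flush memtables so all data is in SSTables before compacting
    bin/nodetool flush my_keyspace my_columnfamily
    # trigger a major compaction on that column family
    bin/nodetool compact my_keyspace my_columnfamily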
Thanks.

Regards,

*Rodrigo Felix de Almeida*
LSBD - Universidade Federal do Ceará
Project Manager
MBA, CSM, CSPO, SCJP


On Mon, Jun 17, 2013 at 12:41 PM, Eric Stevens <migh...@gmail.com> wrote:

> Load is the size of the storage on disk, as I understand it. This can
> fluctuate during normal usage even if records are not being added or
> removed; a node's load may be reduced during compaction, for example.
> During compaction, especially if you use the Size Tiered Compaction
> strategy (the default), load may temporarily double for a column family.
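>
> For instance, assuming the standard nodetool CLI, you can watch this
> happening roughly like so:
>
>     # show compactions currently in progress and pending
>     bin/nodetool compactionstats
>     # per-column-family disk usage; compare "Space used (live)"
>     # with "Space used (total)"
>     bin/nodetool cfstats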
>
>
> On Mon, Jun 17, 2013 at 11:33 AM, Rodrigo Felix <
> rodrigofelixdealme...@gmail.com> wrote:
>
>> Hi,
>>
>>    I've been running a benchmark on Cassandra and I'm facing a problem
>> regarding the size of the database.
>>    I performed a load phase and then, when running nodetool ring, I got
>> the following output:
>>
>> ubuntu@domU-12-31-39-0E-11-F1:~/cassandra$ bin/nodetool ring
>> Address         DC          Rack   Status State   Load     Effective-Ownership  Token
>>                                                                                 85070591730234615865843651857942052864
>> 10.192.18.3     datacenter1 rack1  Up     Normal  2.07 GB  50.00%               0
>> 10.85.135.169   datacenter1 rack1  Up     Normal  2.09 GB  50.00%               85070591730234615865843651857942052864
>>
>>    After that I executed, for about one hour, a workload with scan and
>> insert queries. Then, after the workload finished, I ran nodetool ring
>> again and got the following:
>>
>> ubuntu@domU-12-31-39-0E-11-F1:~/cassandra$ bin/nodetool ring
>> Address         DC          Rack   Status State   Load     Effective-Ownership  Token
>>                                                                                 85070591730234615865843651857942052864
>> 10.192.18.3     datacenter1 rack1  Up     Normal  1.07 GB  50.00%               0
>> 10.85.135.169   datacenter1 rack1  Up     Normal  2.15 GB  50.00%               85070591730234615865843651857942052864
>>
>>    Any idea why a node had its size reduced if no record was removed? No
>> machine was added or removed during this workload.
>>    Is this related to any kind of compression? If so, is there a command
>> to confirm that?
>>    I also faced a case where a node had its size increase from about
>> 2 GB to about 4 GB. In that scenario, I both added and removed nodes
>> during the workload, depending on the load (CPU).
>>    Thanks in advance for any help.
>>
>>
>> Regards,
>>
>> *Rodrigo Felix de Almeida*
>> LSBD - Universidade Federal do Ceará
>> Project Manager
>> MBA, CSM, CSPO, SCJP
>>
>
>
