We also do subsequent updates (at least 4) for each piece of data that we
write.
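As a rough back-of-the-envelope illustration (a sketch, not measured data: the 5K writes/sec figure is quoted later in this thread, and the update multiplier is the "at least 4" above), each logical write plus its updates leaves multiple column versions in memtables and SSTables until compaction reclaims the old ones:

```python
# Hypothetical figures for illustration only:
writes_per_sec = 5_000    # ~5K writes/sec, per the cfstats discussion
updates_per_item = 4      # "at least 4" subsequent updates per item

# Column versions created per second, counting the initial write.
# Old versions stay on heap/disk until flushed and compacted away.
versions_per_sec = writes_per_sec * (1 + updates_per_item)
print(versions_per_sec)   # 25000 versions/sec
```

That churn is one plausible reason Column objects pile up on the heap between flushes.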


On Mon, Jul 28, 2014 at 10:36 AM, Ruchir Jha <ruchir....@gmail.com> wrote:

> Doing about 5K writes / second. Avg Data Size = 1.6 TB / node. Total Data
> Size = 21 TB.
>
> And this is the nodetool cfstats output for one of our busiest column
> families:
>
>   SSTable count: 10
>                 Space used (live): 43239294899
>                 Space used (total): 43239419603
>                 SSTable Compression Ratio: 0.2954468408497778
>                 Number of Keys (estimate): 63729152
>                 Memtable Columns Count: 1921620
>                 Memtable Data Size: 257680020
>                 Memtable Switch Count: 9
>                 Read Count: 6167
>                 Read Latency: NaN ms.
>                 Write Count: 770984
>                 Write Latency: 0.098 ms.
>                 Pending Tasks: 0
>                 Bloom Filter False Positives: 370
>                 Bloom Filter False Ratio: 0.00000
>                 Bloom Filter Space Used: 80103200
>                 Compacted row minimum size: 180
>                 Compacted row maximum size: 3311
>                 Compacted row mean size: 2631
>                 Average live cells per slice (last five minutes): 73.0
>                 Average tombstones per slice (last five minutes): 13.0
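For what it's worth, the slice-level tombstone overhead implied by those last two averages can be estimated directly (a sketch; "tombstone fraction" is my label for a derived ratio, not a metric nodetool prints):

```python
# Averages reported by nodetool cfstats above (last five minutes).
live_cells_per_slice = 73.0
tombstones_per_slice = 13.0

# Fraction of cells scanned per slice that are tombstones --
# a hypothetical derived metric for discussion, not nodetool output.
tombstone_fraction = tombstones_per_slice / (
    live_cells_per_slice + tombstones_per_slice
)
print(round(tombstone_fraction, 3))  # → 0.151
```

So roughly 15% of the cells touched per read slice are tombstones, which lines up with a workload that does frequent overwrites or deletions.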
>
>
>
> On Mon, Jul 28, 2014 at 10:14 AM, Mark Reddy <mark.re...@boxever.com>
> wrote:
>
>> What is your data size and number of columns in Cassandra? Do you do many
>> deletions?
>>
>>
>> On Mon, Jul 28, 2014 at 2:53 PM, Ruchir Jha <ruchir....@gmail.com> wrote:
>>
>>> Really curious to know what's causing the spike in Columns and
>>> DeletedColumns below:
>>>
>>>
>>> 2014-07-28T09:30:27.471-0400: 127335.928: [Full GC 127335.928: [Class
>>> Histogram:
>>>   num     #instances         #bytes  class name
>>> ----------------------------------------------
>>>    1:     132626060     6366050880  java.nio.HeapByteBuffer
>>>    2:      28194918     3920045528  [B
>>>    3:      78124737     3749987376  edu.stanford.ppl.concurrent.SnapTreeMap$Node
>>>    4:      67650128     2164804096  org.apache.cassandra.db.Column
>>>    5:      16315310      522089920  org.apache.cassandra.db.DeletedColumn
>>>    6:          6818      392489608  [I
>>>    7:       2844374      273059904  edu.stanford.ppl.concurrent.CopyOnWriteManager$COWEpoch
>>>    8:       5727000      229080000  java.util.TreeMap$Entry
>>>    9:        767742      182921376  [J
>>>   10:       2932832      140775936  edu.stanford.ppl.concurrent.SnapTreeMap$RootHolder
>>>   11:       2844375       91020000  edu.stanford.ppl.concurrent.CopyOnWriteManager$Latch
>>>   12:       4145131       66322096  java.util.concurrent.atomic.AtomicReference
>>>   13:        437874       64072392  [C
>>>   14:       2660844       63860256  java.util.concurrent.ConcurrentSkipListMap$Node
>>>   15:          4920       62849864  [[B
>>>   16:       1632063       52226016  edu.stanford.ppl.concurrent.SnapTreeMap
>>>
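To put a number on how much heap those two column classes account for, a histogram like the one above (this one came out of the GC log; a similar one can be taken on demand with `jmap -histo <pid>`) can be summed per package. A sketch, with the two relevant rows hard-coded from the output above:

```python
# Two rows copied from the class histogram above:
# (instance count, bytes, class name).
rows = [
    (67650128, 2164804096, "org.apache.cassandra.db.Column"),
    (16315310, 522089920, "org.apache.cassandra.db.DeletedColumn"),
]

# Total bytes held by Cassandra column objects, reported in GiB.
column_bytes = sum(
    nbytes for _, nbytes, name in rows
    if name.startswith("org.apache.cassandra.db")
)
print(round(column_bytes / 2**30, 2))  # → 2.5
```

About 2.5 GiB of live column objects at the moment of the Full GC, before even counting the backing HeapByteBuffer and SnapTreeMap$Node entries that memtables hold around them.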
>>
>>
>
