Yup... it seems like it's the GC's fault.

GC logs:

2015-07-21T14:19:54.336+0000: 2876133.270: Total time for which
application threads were stopped: 0.0832030 seconds
2015-07-21T14:19:55.739+0000: 2876134.673: Total time for which
application threads were stopped: 0.0806960 seconds
2015-07-21T14:19:57.149+0000: 2876136.083: Total time for which
application threads were stopped: 0.0806890 seconds
2015-07-21T14:19:58.550+0000: 2876137.484: Total time for which
application threads were stopped: 0.0821070 seconds
2015-07-21T14:19:59.941+0000: 2876138.875: Total time for which
application threads were stopped: 0.0802640 seconds
2015-07-21T14:20:01.340+0000: 2876140.274: Total time for which
application threads were stopped: 0.0835670 seconds
2015-07-21T14:20:02.744+0000: 2876141.678: Total time for which
application threads were stopped: 0.0842440 seconds
2015-07-21T14:20:04.143+0000: 2876143.077: Total time for which
application threads were stopped: 0.0841630 seconds
2015-07-21T14:20:05.541+0000: 2876144.475: Total time for which
application threads were stopped: 0.0839850 seconds
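
Each of those entries is ~80 ms of stop-the-world pause roughly every
1.4 s (the lines come from -XX:+PrintGCApplicationStoppedTime; the heap
dumps below are from -XX:+PrintHeapAtGC). To get a feel for how much
wall-clock time that adds up to, here's a rough sketch that sums the
stopped-time entries from a GC log. It assumes each entry sits on a
single line in the actual file (the wrapping above is just the mail
client) and takes the log path as its first argument:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcPauseSummary {
    // Matches entries like:
    // 2015-07-21T14:19:54.336+0000: 2876133.270: Total time for which
    // application threads were stopped: 0.0832030 seconds
    private static final Pattern STOPPED = Pattern.compile(
        "^(\\S+): (\\d+\\.\\d+): Total time for which application "
        + "threads were stopped: (\\d+\\.\\d+) seconds");

    public static void main(String[] args) throws IOException {
        double firstUptime = -1, lastUptime = -1, totalStopped = 0;
        long entries = 0;
        for (String line : Files.readAllLines(Paths.get(args[0]),
                                              StandardCharsets.UTF_8)) {
            Matcher m = STOPPED.matcher(line);
            if (!m.find()) continue;
            double uptime = Double.parseDouble(m.group(2));  // JVM uptime, seconds
            if (firstUptime < 0) firstUptime = uptime;
            lastUptime = uptime;
            totalStopped += Double.parseDouble(m.group(3));  // pause length, seconds
            entries++;
        }
        double wall = lastUptime - firstUptime;              // wall clock covered by the log
        System.out.printf("%d pauses, %.2fs stopped over %.2fs (%.1f%%)%n",
            entries, totalStopped, wall, 100.0 * totalStopped / wall);
    }
}

On the snippet above that works out to ~0.083 s stopped every ~1.4 s,
i.e. roughly 6% of wall-clock time spent in safepoints.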

Heap after GC invocations=2273737 (full 101):
 par new generation   total 1474560K, used 106131K
[0x00000005fae00000, 0x000000065ee00000, 0x000000065ee00000)
  eden space 1310720K,   0% used [0x00000005fae00000,
0x00000005fae00000, 0x000000064ae00000)
  from space 163840K,  64% used [0x000000064ae00000,
0x00000006515a4ee0, 0x0000000654e00000)
  to   space 163840K,   0% used [0x0000000654e00000,
0x0000000654e00000, 0x000000065ee00000)
 concurrent mark-sweep generation total 6750208K, used 1316691K
[0x000000065ee00000, 0x00000007fae00000, 0x00000007fae00000)
 concurrent-mark-sweep perm gen total 49336K, used 29520K
[0x00000007fae00000, 0x00000007fde2e000, 0x0000000800000000)
}
2015-07-21T14:12:05.683+0000: 2875664.617: Total time for which
application threads were stopped: 0.0830280 seconds
{Heap before GC invocations=2273737 (full 101):
 par new generation   total 1474560K, used 1416851K
[0x00000005fae00000, 0x000000065ee00000, 0x000000065ee00000)
  eden space 1310720K, 100% used [0x00000005fae00000,
0x000000064ae00000, 0x000000064ae00000)
  from space 163840K,  64% used [0x000000064ae00000,
0x00000006515a4ee0, 0x0000000654e00000)
  to   space 163840K,   0% used [0x0000000654e00000,
0x0000000654e00000, 0x000000065ee00000)
 concurrent mark-sweep generation total 6750208K, used 1316691K
[0x000000065ee00000, 0x00000007fae00000, 0x00000007fae00000)
 concurrent-mark-sweep perm gen total 49336K, used 29520K
[0x00000007fae00000, 0x00000007fde2e000, 0x0000000800000000)

It seems like the eden space is constantly being filled by
something which is later removed by GC...
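
Back-of-the-envelope, that's expected young-gen churn but at a fairly
high rate: eden is 1310720K (~1.25 GB) and goes from 0% to 100% between
ParNew pauses, and invocations=2273737 over ~2876137 s of uptime is one
collection roughly every 1.27 s, which puts allocation on the order of
1 GB/s. A tiny sketch of that arithmetic (figures read off the log
above, so only approximate):

public class AllocationRate {
    public static void main(String[] args) {
        double edenKb = 1_310_720;          // "eden space 1310720K, 100% used"
        double uptimeSec = 2_876_137;       // JVM uptime when the log was taken
        double invocations = 2_273_737;     // GC invocations so far
        double gcIntervalSec = uptimeSec / invocations;     // ~1.26 s per ParNew
        double mbPerSec = edenKb / 1024.0 / gcIntervalSec;  // ~1 GB/s into eden
        System.out.printf("ParNew every ~%.2fs, ~%.0f MB/s allocated in eden%n",
            gcIntervalSec, mbPerSec);
    }
}

So a ParNew every second or so is just the collector keeping up with
that allocation rate; the real question is what is producing ~1 GB/s
of short-lived objects.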


On Mon, Jul 20, 2015 at 9:18 AM, Jason Wee <peich...@gmail.com> wrote:
> just a guess, gc?
>
> On Mon, Jul 20, 2015 at 3:15 PM, Marcin Pietraszek <mpietras...@opera.com>
> wrote:
>>
>> Hello!
>>
>> I've noticed strange CPU utilisation patterns on machines in our
>> cluster. After a C* daemon restart it behaves normally, but a few
>> weeks after the restart CPU usage starts to rise. Currently on one
>> of the nodes (screenshots attached) the CPU load is ~4. Shortly
>> before a restart the load rises to ~15 (our Cassandra machines have
>> 16 CPUs).
>>
>> In that cluster we're bulkloading from a Hadoop cluster with 1400
>> reducers (200 parallel bulkloading tasks). After such a heavy
>> bulkloading session the number of pending compactions is quite high,
>> but the cluster is able to clear them before the next "bulkloading
>> session". We're also tracking the number of pending compactions, and
>> most of the time it's 0.
>>
>> On our machines we have a few gigs of free memory, ~7 GB (17 GB
>> used), and it seems like we aren't IO bound either.
>>
>> Screenshots from our zabbix with CPU utilisation graphs:
>>
>> http://i60.tinypic.com/xas8q8.jpg
>> http://i58.tinypic.com/24pifcy.jpg
>>
>> Do you guys know what could be causing such high load?
>>
>> --
>> mp
>
>
