And if you have 10 nodes, do all of them happen to send hints to the two with
GC?

Terje

On Thu, May 12, 2011 at 6:10 PM, Terje Marthinussen <tmarthinus...@gmail.com
> wrote:

> Just out of curiosity is this on the receiver or sender side?
>
> I have been wondering a bit whether the hint playback might need some
> adjustment.
> There are potentially quite big differences in how much is sent per throttle
> delay interval depending on what your data looks like.
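>
> As a rough illustration (a hedged sketch, not something taken from this
> cluster): on later 0.7/0.8-era builds the playback throttle is a
> cassandra.yaml setting, assuming your release already exposes
> hinted_handoff_throttle_delay_in_ms (check the yaml shipped with your
> version; the default and exact availability differ by release):
>
>   hinted_handoff_enabled: true
>   # Example value only: sleep roughly this long (ms) after delivering each
>   # row of hints. A wide row pushes far more data per delay interval than
>   # a narrow one, which is the variance described above.
>   hinted_handoff_throttle_delay_in_ms: 50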
>
> Early 0.7 releases also built up hints very easily under load because nodes
> were quickly marked as down when gossip shared the same thread as many other
> operations.
>
> Terje
>
> On Thu, May 12, 2011 at 1:28 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
>
>> Doesn't really look abnormal to me for a heavy write load situation,
>> which is what "receiving hints" is.
>>
>> On Wed, May 11, 2011 at 1:55 PM, Gabriel Tataranu <gabr...@wajam.com>
>> wrote:
>> > Greetings,
>> >
>> > I'm experiencing some issues with 2 nodes (out of more than 10). Right
>> > after startup (Listening for thrift clients...) the nodes will create
>> > objects at a high rate, using all available CPU cores:
>> >
>> >  INFO 18:13:15,350 GC for PS Scavenge: 292 ms, 494902976 reclaimed
>> > leaving 2024909864 used; max is 6658457600
>> >  INFO 18:13:20,393 GC for PS Scavenge: 252 ms, 478691280 reclaimed
>> > leaving 2184252600 used; max is 6658457600
>> > ....
>> >  INFO 18:15:23,909 GC for PS Scavenge: 283 ms, 452943472 reclaimed
>> > leaving 5523891120 used; max is 6658457600
>> >  INFO 18:15:24,912 GC for PS Scavenge: 273 ms, 466157568 reclaimed
>> > leaving 5594606128 used; max is 6658457600
>> >
>> > This will eventually trigger old-gen GC and then the process repeats
>> > until hinted handoff finishes.
>> >
>> > The build was upgraded from 0.7.2 to 0.7.5, but the behavior was
>> > exactly the same.
>> >
>> > Thank you.
>> >
>> >
>>
>>
>>
>> --
>> Jonathan Ellis
>> Project Chair, Apache Cassandra
>> co-founder of DataStax, the source for professional Cassandra support
>> http://www.datastax.com
>>
>
>
