>> I would first eliminate or confirm any GC hypothesis by running all
>> nodes with -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
>> -XX:+PrintGCDateStamps.
>
> Is full GC not being logged through GCInspector with the defaults?
The GCInspector tries its best, but it's polling. Unfortunately that is
not as complete or as precise as the JVM's own GC logging.
>
> Ah, I always end up assuming the random partitioner since it is the most
> common case (just to be sure: unless you specifically want the ordering
> despite the downsides, you generally want to default to the random
> partitioner).
>
Yes, I'm working on geographical data, so everything is keyed by a
derived value.
> Regarding 2), I may be running into this since data updates are very
> localized by design. I've distributed the keys per storage load but I'm
> going to have to distribute them by read/write load since the workload is
> anything but random and I'm using BOP. However, I never see an IO
> bottleneck.
Interesting development: I changed the maximum size of the batches in
"Process A" so that they went from about 90 per execute() to about 35. All
the weird 28s/38s maximum execution times are gone, all timeouts are gone
and everything is zipping along just fine. So the moral of the story for me
is: keep the batches small.
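A minimal sketch of that chunking idea in Java, assuming a hypothetical
mutator-style client (the BatchMutator and Increment types and their
methods below are illustrative placeholders, not any particular driver's
API):

    import java.util.List;
    import java.util.function.Supplier;

    // Hypothetical client-side types -- stand-ins for whatever driver is in use.
    interface BatchMutator {
        void addIncrement(String rowKey, String columnFamily, String counterName, long delta);
        void execute();  // sends one RPC containing everything added so far
    }

    final class Increment {
        final String rowKey, columnFamily, counterName;
        final long delta;
        Increment(String rowKey, String columnFamily, String counterName, long delta) {
            this.rowKey = rowKey;
            this.columnFamily = columnFamily;
            this.counterName = counterName;
            this.delta = delta;
        }
    }

    public final class ChunkedCounterWriter {
        // Roughly the batch size that made the timeouts disappear above.
        private static final int MAX_INCREMENTS_PER_BATCH = 35;

        public static void writeChunked(Supplier<BatchMutator> newMutator,
                                        List<Increment> increments) {
            for (int start = 0; start < increments.size(); start += MAX_INCREMENTS_PER_BATCH) {
                int end = Math.min(start + MAX_INCREMENTS_PER_BATCH, increments.size());
                BatchMutator mutator = newMutator.get();
                for (Increment inc : increments.subList(start, end)) {
                    mutator.addIncrement(inc.rowKey, inc.columnFamily, inc.counterName, inc.delta);
                }
                mutator.execute();  // one reasonably small RPC per chunk
            }
        }
    }

The fixed chunk size is just the ~35 figure from above; in practice it
would be tuned per workload.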
Hi Peter,
I'm going to mix the response to your email along with my other email from
yesterday since they pertain to the same issue.
Sorry this is a little long, but I'm stumped and I'm trying to describe
what I've investigated.
In a nutshell, in case someone has encountered this and won't read it
> Counter increments are a special case in Cassandra because they incur a
> local read before write. Normal column writes do not do this, so counter
> writes are intensive. If possible, batch up the increments for fewer RPC
> calls and fewer reads.
Note though that the CPU usage impact of this should be
Counter increments are a special case in Cassandra because they incur a
local read before write. Normal column writes do not do this, so counter
writes are intensive. If possible, batch up the increments for fewer RPC
calls and fewer reads.
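A rough sketch of the "batch up the increments" idea in Java: coalescing
repeated increments to the same counter on the client side means both
fewer RPC calls and fewer read-before-write cycles. The CounterKey type
and the drain() method are illustrative only; the actual send would go
through whatever client call performs the batched write.

    import java.util.HashMap;
    import java.util.Map;

    public final class IncrementCoalescer {

        // Illustrative identity of a single counter column.
        static final class CounterKey {
            final String rowKey, columnFamily, counterName;
            CounterKey(String rowKey, String columnFamily, String counterName) {
                this.rowKey = rowKey;
                this.columnFamily = columnFamily;
                this.counterName = counterName;
            }
            @Override public boolean equals(Object o) {
                if (!(o instanceof CounterKey)) return false;
                CounterKey k = (CounterKey) o;
                return rowKey.equals(k.rowKey)
                    && columnFamily.equals(k.columnFamily)
                    && counterName.equals(k.counterName);
            }
            @Override public int hashCode() {
                return 31 * (31 * rowKey.hashCode() + columnFamily.hashCode())
                       + counterName.hashCode();
            }
        }

        private final Map<CounterKey, Long> pending = new HashMap<>();

        // Accumulate locally instead of issuing one RPC per increment.
        public void add(String rowKey, String columnFamily, String counterName, long delta) {
            pending.merge(new CounterKey(rowKey, columnFamily, counterName), delta, Long::sum);
        }

        // Hand the coalesced deltas to the client call that does the batched
        // write, then reset for the next round.
        public Map<CounterKey, Long> drain() {
            Map<CounterKey, Long> out = new HashMap<>(pending);
            pending.clear();
            return out;
        }
    }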
On Saturday, December 10, 2011, Peter Schuller wrote:
> I've got a batch process running every so often that issues a bunch of
> counter increments. I have noticed that when this process runs without being
> throttled it will raise the CPU to 80-90% utilization on the nodes handling
> the requests. This in turn causes timeouts and general lag on queries running
> at the same time.
Hello,
I've got a batch process running every so often that issues a bunch of
counter increments. I have noticed that when this process runs without
being throttled it will raise the CPU to 80-90% utilization on the nodes
handling the requests. This in turn causes timeouts and general lag on
queries running at the same time.
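One possible way to throttle a process like this is a token-bucket style
rate limiter, for example Guava's RateLimiter; the 500 increments/second
figure and the sendBatch() placeholder below are assumptions for the
sketch, not numbers from this thread.

    import com.google.common.util.concurrent.RateLimiter;
    import java.util.List;

    public final class ThrottledCounterBatch {

        // Minimal illustrative increment record.
        static final class Increment {
            final String rowKey, counterName;
            final long delta;
            Increment(String rowKey, String counterName, long delta) {
                this.rowKey = rowKey;
                this.counterName = counterName;
                this.delta = delta;
            }
        }

        // Placeholder rate -- tune against the CPU headroom needed for the
        // interactive queries that were lagging.
        private final RateLimiter limiter = RateLimiter.create(500.0); // increments/second

        public void run(List<List<Increment>> batches) {
            for (List<Increment> batch : batches) {
                if (batch.isEmpty()) continue;
                // Block until enough permits are available, smoothing the load
                // instead of hammering the cluster in bursts.
                limiter.acquire(batch.size());
                sendBatch(batch); // placeholder for the actual batched execute() call
            }
        }

        private void sendBatch(List<Increment> batch) {
            // Whatever client call actually issues the batched increments goes here.
        }
    }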