Hello,
Here's an example of tpstats output on one node in my cluster. I only issue
multiget_slice reads to counter columns:
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
ReadStage                        27      2166     3565927301         0                 0
MutationStage
and rows are forever stuck in HintsColumnFamily.
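For reference, a read like that looks roughly like the following with pycassa, a Python Thrift client of that era; the pool, keyspace, and column family names are illustrative assumptions, not from the thread:

import pycassa

# Assumed names; substitute your own keyspace / counter CF.
pool = pycassa.ConnectionPool('MyKeyspace', ['node1:9160'])
counters = pycassa.ColumnFamily(pool, 'MyCounters')

# multiget() issues a Thrift multiget_slice under the hood: a single
# RPC that fetches a slice of columns from many rows at once.
rows = counters.multiget(['row1', 'row2', 'row3'], column_count=100)
for key, cols in rows.items():
    print(key, cols)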
You need to remove the hints data files to clear out the incomplete
hints from versions < 1.0.3.
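With the node stopped, that cleanup amounts to something like the sketch below; the path is an assumption and depends on data_file_directories in cassandra.yaml:

import glob
import os

# Assumed default layout; hint SSTables live under the system keyspace.
hints_glob = '/var/lib/cassandra/data/system/HintsColumnFamily-*'
for path in glob.glob(hints_glob):
    print('removing', path)
    os.remove(path)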
I did. The hints there are still slowly increasing; I checked it today.
> Pool Name                    Active   Pending      Completed   Blocked  All time blocked
> ReadStage                        27      2166     3565927301         0                 0
In general, "active" refers to work that is being executed right now,
"pending" refers to work that is waiting to be executed (go in
> After re-reading my post, what I meant to say is that I switched from
> the Serializing cache provider to the ConcurrentLinkedHash cache provider
> and then saw better performance, but still far worse than with no caching
> at all:
>
> - no caching at all : 25-30ms
> - with Serializing provider : 1300+ms
> - with ConcurrentLinkedHash provider :
> I've got a batch process running every so often that issues a bunch of
> counter increments. I have noticed that when this process runs without being
> throttled, it will raise the CPU to 80-90% utilization on the nodes handling
> the requests. This in turn causes timeouts and general lag on queries run
> at the same time.
There was a recent patch that fixed an issue where counters were hitting
the same natural endpoint rather than being randomized across all of them.
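On the throttling point in the quoted message, a minimal client-side rate limit could look like this sketch; the ops-per-second budget and the pycassa counter column family are assumptions:

import time

MAX_OPS_PER_SEC = 500  # assumed budget; tune for your cluster

def throttled_increments(cf, ops):
    """ops: iterable of (row_key, column) pairs; cf: pycassa counter CF."""
    started = time.time()
    for i, (row_key, column) in enumerate(ops, 1):
        cf.add(row_key, column, 1)
        budget = i / float(MAX_OPS_PER_SEC)  # seconds we should have spent
        elapsed = time.time() - started
        if elapsed < budget:                 # ahead of schedule: back off
            time.sleep(budget - elapsed)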
On Saturday, December 10, 2011, Peter Schuller wrote:
>> Pool Name                    Active   Pending      Completed   Blocked  All time blocked
I am not sure if there is a ticket on this, but I have always thought the
row cache should not bother caching an entry bigger than n columns.
Murmurs of a slice cache might help as well.
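A sketch of that idea in Python terms (the cap and cache shape are assumptions, not an existing Cassandra option):

MAX_CACHED_COLUMNS = 100  # "n" above; an assumed threshold

class BoundedRowCache(object):
    """Row cache that serves wide rows but refuses to cache them."""

    def __init__(self):
        self._cache = {}

    def get(self, row_key, load_row):
        row = self._cache.get(row_key)
        if row is None:
            row = load_row(row_key)          # read path on a miss
            if len(row) <= MAX_CACHED_COLUMNS:
                self._cache[row_key] = row   # only cache narrow rows
            # Wide rows are returned but never cached, so one huge row
            # cannot evict many small hot rows.
        return row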
On Saturday, December 10, 2011, Peter Schuller wrote:
>> After re-reading my post, what I meant to say is that
Counter increments are a special case in Cassandra because they incur a local
read before write. Normal column writes do not do this, so counter writes
are intensive. If possible, batch up the increments for fewer RPC calls and
fewer reads.
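A sketch of that batching with pycassa: sum the deltas client-side so each (row, column) pair costs one add() call instead of one RPC per event (names are illustrative):

from collections import defaultdict

def flush_increments(cf, events):
    """events: iterable of (row_key, column_name) occurrences;
    cf: a pycassa ColumnFamily backed by a counter CF."""
    deltas = defaultdict(int)
    for row_key, column in events:
        deltas[(row_key, column)] += 1
    # One counter add per distinct (row, column) instead of one per
    # event, so replicas do fewer read-before-write cycles.
    for (row_key, column), delta in deltas.items():
        cf.add(row_key, column, delta)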
On Saturday, December 10, 2011, Peter Schuller wrote:
>> I've
> Counter increments are a special case in Cassandra because they incur a local
> read before write. Normal column writes do not do this, so counter writes
> are intensive. If possible, batch up the increments for fewer RPC calls and
> fewer reads.
Note though that the CPU usage impact of this should be
You could try writing with the clock of the initial replay entry?
On 06/12/2011 20:26, John Laban wrote:
Ah, neat. It is similar to what was proposed in (4) above with adding
transactions to Cages, but instead of snapshotting the data to be
rolled back (the "before" data), you snapshot the data to be written
(the "after" data).
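A sketch of why replaying with the original entry's clock is safe (the log entry shape and pycassa usage are assumptions): Cassandra resolves columns by highest timestamp, so re-applying the same value with the same timestamp is idempotent and cannot clobber newer writes:

def replay(cf, log_entries):
    """Replay logged "after" snapshots with their original timestamps.

    cf: pycassa ColumnFamily; log_entries: dicts with assumed keys
    'row_key', 'columns', and 'timestamp' recorded at log time."""
    for entry in log_entries:
        cf.insert(entry['row_key'],
                  entry['columns'],              # the "after" snapshot
                  timestamp=entry['timestamp'])  # clock of the initial entry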