Thanks.

We weren't monitoring this value when the issue occurred, and this particular 
issue has not appeared for a couple of days (knock on wood). Will keep an eye 
out, though.
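For reference, steal time can also be watched without keeping top open. A minimal sketch (assuming a Linux guest, where /proc/stat exposes the same "steal" counter top reports as "st"):

```python
# Sketch: read the cumulative "steal" counter from a /proc/stat "cpu" line.
# Linux field order: user nice system idle iowait irq softirq steal ...
def steal_ticks(cpu_line):
    fields = cpu_line.split()
    # fields[0] is the label ("cpu"); steal is the 8th counter after it.
    return int(fields[8])

# Assumes a Linux guest; sample the counter twice to see steal accumulating.
with open("/proc/stat") as f:
    print(steal_ticks(f.readline()))
```

The counter is cumulative, so sampling it at two points in time and taking the difference (divided by total ticks over the interval) gives the steal percentage for that window.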

-Mike

On Apr 26, 2013, at 5:32 AM, Jason Wee wrote:

> The top command? "st": time stolen from this VM by the hypervisor.
> 
> jason
> 
> 
> On Fri, Apr 26, 2013 at 9:54 AM, Michael Theroux <mthero...@yahoo.com> wrote:
> Sorry, not sure what CPU steal is :)
> 
> I have the AWS console with detailed monitoring enabled... things seem to track 
> close to the minute, so I can see the CPU load go to 0... then jump at about 
> the minute Cassandra reports the dropped messages.
> 
> -Mike
> 
> On Apr 25, 2013, at 9:50 PM, aaron morton wrote:
> 
>>> The messages appear right after the node "wakes up".
>> Are you tracking CPU steal ? 
>> 
>> -----------------
>> Aaron Morton
>> Freelance Cassandra Consultant
>> New Zealand
>> 
>> @aaronmorton
>> http://www.thelastpickle.com
>> 
>> On 25/04/2013, at 4:15 AM, Robert Coli <rc...@eventbrite.com> wrote:
>> 
>>> On Wed, Apr 24, 2013 at 5:03 AM, Michael Theroux <mthero...@yahoo.com> 
>>> wrote:
>>>> Another related question.  Once we see messages being dropped on one node, 
>>>> our Cassandra client appears to see this, reporting errors.  We use 
>>>> LOCAL_QUORUM with an RF of 3 on all queries.  Any idea why clients would 
>>>> see an error?  If only one node reports an error, shouldn't the 
>>>> consistency level prevent the client from seeing an issue?
>>> 
>>> If the client is talking to a broken/degraded coordinator node, RF/CL
>>> are unable to protect it from RPCTimeout. If it is unable to
>>> coordinate the request in a timely fashion, your clients will get
>>> errors.
>>> 
>>> =Rob
>> 
> 
> 
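The failure mode Rob describes (a coordinator too slow to serve the request, regardless of RF and consistency level) is usually handled client-side by retrying against a different coordinator. A minimal sketch of that idea; the `execute` callable and the use of the built-in `TimeoutError` here are stand-ins for whatever the real driver provides, not any specific client's API:

```python
# Hypothetical sketch: fail over to another coordinator node on timeout.
# "execute" stands in for the driver call that sends a request to one node.
def query_with_failover(coordinators, execute, request):
    last_error = None
    for node in coordinators:           # try each candidate coordinator in turn
        try:
            return execute(node, request)
        except TimeoutError as e:       # coordinator too slow: move to the next
            last_error = e
    raise last_error                    # every coordinator timed out
```

Modern drivers build this in as configurable retry and load-balancing policies, so in practice it is a matter of configuration rather than hand-written loops.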