Yes, please see http://wiki.apache.org/cassandra/FAQ#dropped_messages for
further details.
Mark
On Fri, May 9, 2014 at 12:52 PM, Raveendran, Varsha IN BLR STS <
varsha.raveend...@siemens.com> wrote:
> Hello,
>
> I am writing around 10Million records continuously into a single node
> Cassandra
Shameless plug:
http://www.evidencebasedit.com/guide-to-cassandra-thread-pools/#droppable
On May 15, 2014, at 7:37 PM, Mark Reddy wrote:
It means asynchronous write mutations were dropped, but if the writes are
completing without TimedOutException, then at least ConsistencyLevel-many
replicas were written successfully. The remaining replicas will eventually be
brought up to date by hinted handoff, anti-entropy (repair), or read repair.
More info: http:/
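To make the point above concrete, here is a minimal sketch (not Cassandra source code, just the arithmetic) of how many replica acknowledgements a coordinator waits for at each consistency level, and why a dropped mutation on one replica doesn't fail the write:

```python
# Sketch of the replica-count arithmetic behind write acknowledgements.
# Illustrative only; levels and formulas match the standard definitions.

def required_acks(consistency_level: str, replication_factor: int) -> int:
    """Number of replica acks a coordinator waits for before success."""
    levels = {
        "ONE": 1,
        "TWO": 2,
        "THREE": 3,
        "QUORUM": replication_factor // 2 + 1,
        "ALL": replication_factor,
    }
    return levels[consistency_level]

# With RF=3 and QUORUM, the write succeeds once 2 replicas ack; if the
# third replica dropped its mutation, hinted handoff, repair, or read
# repair will fix it later.
print(required_acks("QUORUM", 3))  # 2
```

So a dropped-mutation counter climbing while clients see no timeouts usually means the "extra" replicas beyond the consistency level are the ones shedding load.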
> I ended up changing memtable_flush_queue_size to be large enough to contain
> the biggest flood I saw.
As part of the flush process the "Switch Lock" is taken to synchronise with
the commit log. This is a reentrant read/write lock; the flush path takes the
write lock and the write path takes the
I ended up changing memtable_flush_queue_size to be large enough to contain
the biggest flood I saw.
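For reference, this is the cassandra.yaml setting being discussed; the value below is illustrative only and should be sized to cover the largest flush backlog you observe in tpstats:

```yaml
# cassandra.yaml (illustrative value -- tune to your own peak backlog)
# Number of full memtables allowed to queue for flushing before the
# write path blocks.
memtable_flush_queue_size: 8
```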
I monitored tpstats over time using a collection script and an analysis
script that I wrote to figure out what my largest peaks were. In my case,
all my mutation drops correlated with hitting the
I can't comment on your specific issue, but I don't know if running 2.0.0
in production is a good idea. At the very least I'd try upgrading to the
latest 2.0.x (currently 2.0.3).
https://engineering.eventbrite.com/what-version-of-cassandra-should-i-run/
On 20 December 2013 06:08, Alexander Shuty
Thanks for your answers.
*srmore*,
We are using v2.0.0. As for GC, I don't think it correlates in our case:
Cassandra ran for 9 days under production load with no dropped messages, and
there must have been plenty of GCs during that time.
*Ken*,
I've checked the values you indicat
We had issues where the flushes of several column families would align and
then block writes for a very brief period. If that happened when a bunch of
writes came in, we'd see a spike in mutation drops.
Check nodetool tpstats for FlushWriter all time blocked.
On Thu, Dec 19, 2013 at 7
What version of Cassandra are you running ? I used to see them a lot with
1.2.9, I could correlate the dropped messages with the heap usage almost
every time, so check in the logs whether you are getting GC'd. In this
respect 1.2.12 appears to be more stable. Moving to 1.2.12 took care of
this for
+ I would also check the GC settings :) and full gc events in the logs.
Regards,
On Wed, Jan 25, 2012 at 9:52 AM, aaron morton wrote:
> Am I missing data here?
Yes, but you can repair it with nodetool.
> Does this mean that my cluster is too loaded?
Yes.
Use nodetool tpstats to see when tasks are backing up, and check io throughput
with iostat (see
http://spyced.blogspot.com/2010/01/linux-performance-basics.html).
Cheers
---