> I've implemented this with MySQL before, and it worked extremely well
> (miles beyond mysqldump or mysqlhotcopy). On a given node, you sacrifice a
> short period of availability (less than 0.5 seconds) to get a full,
> consistent snapshot of your EBS volume that can be sent off to S3 in the
> ba
> Do you understand you are assuming there have been no compactions,
> which would be extremely bad practice given this number of SSTables?
> A major compaction, as would be best practice given this volume, would
> result in 1 SSTable per CF per node. One. Similarly, you are
> assuming the update
Last night we started seeing errors like the one below for the first time, after things had been running smoothly for a couple of days since switching to
0.6-rc1:
ERROR [pool-1-thread-60] 2010-04-13 04:58:11,806 Cassandra.java (line 1492)
Internal error processing insert
We had not made any other change
Nope it's always been random.
On Fri, Mar 26, 2010 at 2:13 PM, Jonathan Ellis wrote:
> Did you switch partitioner types at some point?
>
> On Fri, Mar 26, 2010 at 2:53 PM, Scott White wrote:
> > I don't know if this is from switching from 0.5 to 0.6-betarc3 just
>
Right, that's what I meant, thanks for the correction.
On Fri, Mar 26, 2010 at 1:11 PM, Brandon Williams wrote:
> On Fri, Mar 26, 2010 at 3:08 PM, Scott White wrote:
>
>> Yep I believe those are inserts per second. Take the last line:
>>
>> "811653,1666,250"
Yep I believe those are inserts per second. Take the last line:
"811653,1666,250"
I believe that's telling you that during that 10-second interval you did
1666 inserts, but your overall insert rate is 811653/250 = 3246.612
inserts/sec.
Timeouts may be due to your machine(s) being fully saturated?
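The arithmetic above can be sketched as a small parser. This is a rough sketch; the field meanings ("total inserts, inserts this interval, elapsed seconds") and the 10-second reporting interval are my reading of this thread, not something taken from the stress tool's documentation:

```python
# Hypothetical parser for a stress-tool progress line of the form
# "total_inserts,interval_inserts,elapsed_seconds".
# Field meanings are assumptions inferred from the thread above.
def parse_progress(line):
    total, interval, elapsed = (int(x) for x in line.split(","))
    return {
        "total": total,                    # cumulative inserts since start
        "interval_rate": interval / 10.0,  # assuming a 10-second reporting interval
        "overall_rate": total / elapsed,   # average inserts/sec since start
    }

stats = parse_progress("811653,1666,250")
print(round(stats["overall_rate"], 3))  # 3246.612, matching the figure above
```

The point is simply that the second column is a per-interval count, while the overall rate has to be derived from the cumulative count and elapsed time.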
I don't know if this is from recently switching from 0.5 to 0.6-betarc3
or from doing a series of bootstrap and removeToken operations, but I
recently started getting ArrayIndexOutOfBoundsException exceptions (centered
around reading UTF from SSTableSliceIterator) on one of the machines in my
c
Not that this is much better, but can't you replace steps 1-2 with nodeprobe
-flush?
On Wed, Mar 24, 2010 at 2:03 AM, Ran Tavory wrote:
> What's the recommended way to delete data?
> For example, I want to wipe out an entire column family's data from disk
> with minimal effort.
> I could:
>
>