Thanks for tracking that down!
In 0.7, OPP adds additional checks that keys are valid UTF-8 (and if
you're starting from scratch you should use BOP instead), so it
shouldn't be an issue there.
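(Not Cassandra's actual validation code, just a minimal Java sketch of
the idea: OPP has to decode keys as UTF-8, so a strict decoder that
rejects malformed input is roughly what the check amounts to, while BOP
only compares raw bytes and never decodes at all.)

    import java.nio.ByteBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.Charset;

    public class KeyCheck {
        static final Charset UTF8 = Charset.forName("UTF-8");

        // Strict UTF-8 decode: a fresh CharsetDecoder REPORTs malformed
        // input instead of silently replacing it, so decode() throws.
        static boolean isValidUtf8(byte[] key) {
            try {
                UTF8.newDecoder().decode(ByteBuffer.wrap(key));
                return true;
            } catch (CharacterCodingException e) {
                return false;
            }
        }

        public static void main(String[] args) {
            System.out.println(isValidUtf8("row-1".getBytes(UTF8)));           // true
            System.out.println(isValidUtf8(new byte[] { (byte) 0xC3, 0x28 })); // false: malformed sequence
        }
    }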
On Mon, May 2, 2011 at 7:39 AM, Daniel Doubleday wrote:
Just for the record:
The problem had nothing to do with bad memory. After some more digging it
turned out that, due to a bug, we wrote invalid UTF-8 sequences as row keys.
In 0.6 the key tokens are constructed from string-decoded bytes; this does
not happen anymore in 0.7 files. So what apparentl
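(To illustrate what Daniel describes, here is a small Java sketch of why
building key tokens from string-decoded bytes is lossy; this is my own
illustration, not code from 0.6. The decoder substitutes U+FFFD for a
malformed sequence, so the bytes no longer round-trip, and a token built
from the decoded form won't match the key bytes on disk.)

    import java.nio.charset.Charset;
    import java.util.Arrays;

    public class LossyRoundTrip {
        public static void main(String[] args) {
            Charset utf8 = Charset.forName("UTF-8");
            // An invalid UTF-8 sequence written as a row key by a buggy client.
            byte[] original = { (byte) 0xED, (byte) 0xA0, (byte) 0x80 };
            // new String(...) silently substitutes U+FFFD for the bad sequence...
            String decoded = new String(original, utf8);
            // ...so re-encoding does not reproduce the original key bytes.
            byte[] roundTripped = decoded.getBytes(utf8);
            System.out.println(Arrays.equals(original, roundTripped)); // false
        }
    }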
Bad == Broken
That means you cannot rely on 1 == 1. In such a scenario anything can happen,
including data loss.
That's why you want ECC memory on production servers. Our cheapo dev boxes
don't have it.
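(A toy Java example of the "1 == 1" point, mine rather than anything from
Cassandra: a single flipped bit in unprotected memory is enough to break
the most trivial invariant.)

    public class BitFlip {
        public static void main(String[] args) {
            int stored = 1;
            // Simulate a single-bit memory error by flipping bit 6.
            int corrupted = stored ^ (1 << 6);
            System.out.println(stored == 1);    // true
            System.out.println(corrupted == 1); // false: the 1 is now 65
        }
    }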
On Apr 28, 2011, at 7:46 PM, mcasandra wrote:
What do you mean by bad memory? Is it less heap size, OOM issues, or something
else? What happens in such a scenario? Is there data loss?
Sorry for the many questions; just trying to understand, since the data is
critical after all :)
When I have seen this in the past it has been bad memory on the server.
On Thu, Apr 28, 2011 at 11:58 AM, Daniel Doubleday wrote:
> Hi all,
> on one of our dev machines we ran into this:
> INFO [CompactionExecutor:1] 2011-04-28 15:07:35,174 SSTableWriter.java (line 108) Last written key : Decora
Can someone please help me understand the reason for corrupt SSTables? I am
just worried about the worst case. Do we lose data in these cases? How do we
protect against data loss if that's the case?