> Our application space is such that there is data that might not be read for
> a long time. The data is mostly immutable. How should I approach
> detecting/solving the bitrot problem? One approach is read data and let read
> repair do the detection, but given the size of data, that does not look very
> efficient.

Note that read-repair is not really intended to repair arbitrary
corruption. Unless I'm mistaken, unless the corruption triggers a
serialization failure that causes the row to be skipped, it's a
toss-up which version of the data is retained (or both versions, if
the corruption is in the key). Given the same key and column
timestamp, the tie-breaker is the column value. So depending on
whether the corruption results in a "lesser" or "greater" value, you
might end up with either the corrupt or the non-corrupt data.
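To illustrate the tie-break (a sketch, not Cassandra's actual
reconciliation code): with equal timestamps, the greater value wins,
so a bit flip that happens to produce a "greater" byte beats the
correct data.

```python
def resolve(a, b):
    """Pick the winning (timestamp, value) pair: higher timestamp
    first, then the greater value as the tie-breaker."""
    if a[0] != b[0]:
        return a if a[0] > b[0] else b
    return a if a[1] >= b[1] else b

good = (100, b"hello")
corrupt = (100, b"hellp")  # single bit flipped in the last byte

# Timestamps are equal, and b"hellp" > b"hello" lexically,
# so the corrupted value wins the tie-break.
print(resolve(good, corrupt))  # → (100, b'hellp')
```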

> Has anybody solved/workaround this or has any other suggestions to detect
> and fix bitrot?

My feel/tentative opinion is that the clean fix is for Cassandra to
support strong checksumming at the sstable level.
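The idea would be something like the following sketch, which assumes
a per-block checksum stored alongside each data block (this is not
Cassandra's actual sstable format, just an illustration of the
technique):

```python
import hashlib

BLOCK = 65536  # hypothetical block size

def write_blocks(data):
    """Split data into blocks, pairing each with its SHA-256 digest."""
    return [(data[i:i + BLOCK], hashlib.sha256(data[i:i + BLOCK]).digest())
            for i in range(0, len(data), BLOCK)]

def verify(blocks):
    """Return indices of blocks whose stored digest no longer
    matches their contents, i.e. detected bitrot."""
    return [i for i, (block, digest) in enumerate(blocks)
            if hashlib.sha256(block).digest() != digest]

blocks = write_blocks(b"x" * 200000)

# Simulate bitrot: flip one bit in the second block's data.
damaged = bytearray(blocks[1][0])
damaged[0] ^= 0x01
blocks[1] = (bytes(damaged), blocks[1][1])

print(verify(blocks))  # → [1]
```

A background scrubber could then walk the sstables verifying digests
without going through the read path at all, which addresses the
"don't want to read everything via read-repair" concern.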

Deploying on e.g. ZFS would help a lot with this, but that's
problematic for deployments on Linux (which is the recommended
platform for Cassandra).

-- 
/ Peter Schuller
