Hi,

there are no errors in the server logs. The columns are unreadable on all
nodes at any consistency level (ONE, QUORUM, ALL). We started with 0.7.3 and
upgraded to 0.7.6-2 two months ago.
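
For illustration, this is roughly how we check the reads. A minimal sketch
using pycassa; the client library, keyspace, column family, server, row key,
and column name below are stand-ins, not necessarily what we actually run:

  import pycassa
  from pycassa import ConsistencyLevel

  # Placeholder cluster details.
  pool = pycassa.ConnectionPool('MyKeyspace', server_list=['node1:9160'])
  cf = pycassa.ColumnFamily(pool, 'MyCF')

  # Try the same column at every consistency level; for the affected
  # columns all three attempts raise NotFoundException.
  for cl in (ConsistencyLevel.ONE, ConsistencyLevel.QUORUM,
             ConsistencyLevel.ALL):
      try:
          cf.get('row-key', columns=['affected-column'],
                 read_consistency_level=cl)
          print('%s: readable' % cl)
      except pycassa.NotFoundException:
          print('%s: not found' % cl)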

Best,

Thomas

On 10/10/2011 10:03 AM, aaron morton wrote:
> What error are you seeing in the server logs? Are the columns unreadable at
> all Consistency Levels, i.e. are they unreadable on all nodes?
> 
> What is the upgrade history of the cluster? What version did it start at?
> 
> Cheers
> 
> 
> -----------------
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
> 
> On 10/10/2011, at 7:42 AM, Thomas Richter wrote:
> 
>> Hi,
>>
>> here is some further information: compaction did not help, but the data is
>> still there when I dump the row with sstable2json.
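>>
>> For reference, the dump was along these lines. The data path, sstable name,
>> and row key below are placeholders; with RandomPartitioner the key may need
>> to be hex-encoded:
>>
>>   bin/sstable2json \
>>     /var/lib/cassandra/data/MyKeyspace/MyCF-f-123-Data.db -k 6d796b6579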
>>
>> Best,
>>
>> Thomas
>>
>> On 10/08/2011 11:30 PM, Thomas Richter wrote:
>>> Hi,
>>>
>>> we are running a three-node Cassandra (0.7.6-2) cluster, and some of our
>>> column families contain quite large rows (400k+ columns, 4-6 GB row size).
>>> The replication factor is 3 for all keyspaces. The cluster has been running
>>> fine for several months now and we have never experienced any serious
>>> trouble.
>>>
>>> Some days ago we noticed that some previously written columns could not be
>>> read. This does not always happen, and only a few dozen columns out of the
>>> 400k+ are affected.
>>>
>>> After ruling out application logic as the cause, I dumped the row in
>>> question with sstable2json; the columns are there and are not marked for
>>> deletion.
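>>>
>>> (If I read the sstable2json output correctly, a live column appears in the
>>> dump as ["name", "value", timestamp], while a column marked for deletion
>>> carries an extra "d" flag, i.e. ["name", "value", timestamp, "d"]. None of
>>> the affected columns has that flag.)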
>>>
>>> The next step was setting up a fresh single-node cluster and copying the
>>> column family data to that node. The columns could not be read there
>>> either. Right now I'm running nodetool compact for the CF to see whether
>>> the data can be read afterwards.
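>>>
>>> The compaction I'm running is just the standard command (keyspace and CF
>>> names here are placeholders):
>>>
>>>   bin/nodetool -h localhost compact MyKeyspace MyCF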
>>>
>>> Is there any explanation for such behavior? Are there any suggestions
>>> for further investigation?
>>>
>>> TIA,
>>>
>>> Thomas
>>
> 
