Hello,

I am testing how Cassandra behaves on single-node disk failures, to know what to
expect when things go bad.
I had a cluster of 4 Cassandra nodes, stress-loaded it with a client, and ran 2
tests:
1. emulated disk failure of the /data volume during a read-only stress test
2. emulated disk failure of the /commitlog volume during a write-intensive test

1. On the read test with the data volume down, a lot of
"org.apache.thrift.TApplicationException: Internal error processing get_slice"
errors were logged on the client side. The Cassandra server logged a lot of
IOExceptions while reading every *.db file it has. The node continued to show as
UP in the ring.

OK, the behavior is not ideal, but it can still be worked around on the client
side by throwing out a node as soon as a TApplicationException is received from
Cassandra (see the sketch below).
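Something along these lines is what I have in mind; a minimal sketch against the
raw Thrift API (FailoverReader is just an illustrative name, and port 9160,
framed transport and the blacklist-forever policy are assumptions from our setup,
not an existing library):

import java.nio.ByteBuffer;
import java.util.*;
import org.apache.cassandra.thrift.*;
import org.apache.thrift.TApplicationException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

// Round-robins over a host list and blacklists a host as soon as it
// returns a TApplicationException (as seen when its data volume died).
public class FailoverReader {
    private final List<String> hosts;                      // e.g. ["10.0.0.1", ...]
    private final Set<String> dead = new HashSet<String>();
    private int next = 0;

    public FailoverReader(List<String> hosts) { this.hosts = hosts; }

    public List<ColumnOrSuperColumn> getSlice(String keyspace, ByteBuffer key,
            ColumnParent parent, SlicePredicate pred, ConsistencyLevel cl)
            throws Exception {
        for (int i = 0; i < hosts.size(); i++) {
            String host = hosts.get((next++) % hosts.size());
            if (dead.contains(host)) continue;
            TSocket socket = new TSocket(host, 9160);
            TFramedTransport transport = new TFramedTransport(socket);
            Cassandra.Client client =
                new Cassandra.Client(new TBinaryProtocol(transport));
            try {
                transport.open();
                client.set_keyspace(keyspace);
                return client.get_slice(key, parent, pred, cl);
            } catch (TApplicationException e) {
                // "Internal error processing get_slice" -> treat the node
                // as failed and stop sending requests to it.
                dead.add(host);
            } finally {
                transport.close();
            }
        }
        throw new Exception("no healthy hosts left");
    }
}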

2. The write test was much worse:
No exception was seen at the client and writes appeared to go through normally,
but PERIODIC-COMMIT-LOG-SYNCER failed to sync the commit logs, the node's heap
quickly filled up, and the node froze in a GC loop. Still, it continued to show
as UP in the ring.

This, I believe, is bad, because no quick workaround can be done on the client
side (no exceptions come from the failed node), and in a real system it will
lead to a dramatic slowdown of the whole cluster: clients, not knowing that the
node is actually dead, will direct 1/4 of the requests to it and time out.
About the only thing a client could do is track timeouts itself (see the sketch
below).
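A rough sketch of that kind of timeout tracking; the threshold and class name
are just assumptions for illustration:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Tracks consecutive timeouts per host and declares a host dead after a
// threshold, since a node stuck in a GC loop never throws an exception --
// it just stops answering.
public class TimeoutTracker {
    private static final int MAX_CONSECUTIVE_TIMEOUTS = 3;  // assumption
    private final Map<String, Integer> timeouts =
        new ConcurrentHashMap<String, Integer>();

    public void onSuccess(String host) {
        timeouts.put(host, 0);
    }

    public void onTimeout(String host) {
        Integer count = timeouts.get(host);
        timeouts.put(host, count == null ? 1 : count + 1);
    }

    public boolean isDead(String host) {
        Integer count = timeouts.get(host);
        return count != null && count >= MAX_CONSECUTIVE_TIMEOUTS;
    }
}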

I think a more correct behavior here would be to halt the Cassandra server on
any disk I/O error, so clients can quickly detect the failure and fail over to
healthy servers.
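This is not Cassandra's actual code, just the fail-fast policy I am proposing,
sketched as what the commit log sync could do on an I/O error:

import java.io.IOException;

// Fail-fast sketch: if the periodic commit log sync hits an I/O error,
// stop the process instead of letting un-synced mutations pile up on the
// heap while the node still shows UP in the ring.
public final class FailFastSync {
    public interface Syncable { void sync() throws IOException; }

    public static void syncOrDie(Syncable commitLog) {
        try {
            commitLog.sync();
        } catch (IOException e) {
            System.err.println("Commit log sync failed, halting: " + e);
            // With the JVM gone, gossip marks the node DOWN quickly and
            // clients fail over to healthy replicas.
            System.exit(1);
        }
    }
}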

What do you think?

Have you guys experienced disk failures in production, and how did it go?

