That will depend on whether you have cross_node_timeout enabled.
However, I have to point out that setting the timeout to 15 ms is perhaps not a
good idea; JVM GC pauses can easily cause a lot of timeouts.
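For reference, the relevant cassandra.yaml setting looks roughly like this
(3.11.x name shown; in 4.x the equivalent option is internode_timeout, and
the exact default varies by version):

    # Let replicas account for the time a request has already spent in
    # flight from the coordinator, so they can drop messages that have
    # already exceeded the timeout instead of doing useless work.
    # Requires reasonably synchronized clocks across the cluster.
    cross_node_timeout: true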
On 12/10/2021 18:20, S G wrote:
OK, when a coordinator node sends a timeout to the client, does it mean
all the replica nodes have stopped processing that specific query too?
Or is it just the coordinator node that has stopped waiting for the
replicas to return a response?
On Tue, Oct 12, 2021 at 10:12 AM Jeff Jirsa <jji...@gmail.com> wrote:
It sends an exception to the client; it doesn't sever the connection.
On Tue, Oct 12, 2021 at 10:06 AM S G <sg.online.em...@gmail.com>
wrote:
Do the timeout values only kill the connection with the client,
or send an error to the client?
Or do they also kill the corresponding query execution
happening on the Cassandra servers (coordinator, replicas, etc.)?
On Tue, Oct 12, 2021 at 10:00 AM Jeff Jirsa <jji...@gmail.com>
wrote:
The read and write timeout values do this today.
https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L920-L943
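For example, the per-operation timeouts in cassandra.yaml look roughly like
this (3.11.x names with indicative defaults; 4.x uses the same settings
without the _in_ms suffix and with explicit units):

    # how long the coordinator waits for replicas on a single-partition read
    read_request_timeout_in_ms: 5000
    # how long the coordinator waits on range scans
    range_request_timeout_in_ms: 10000
    # how long the coordinator waits for replica acks on a write
    write_request_timeout_in_ms: 2000
    # default for operations not covered by a more specific timeout
    request_timeout_in_ms: 10000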
On Tue, Oct 12, 2021 at 9:53 AM S G
<sg.online.em...@gmail.com> wrote:
Hello,
Is there a way to stop long-running queries in
Cassandra (versions 3.11.x or 4.x)?
The use case is to have some kind of circuit breaker
based on a query time that has exceeded the client's SLA.
Example: if the server's response is useless to the client
after 10 ms, then we could
set a *query_killing_timeout* of 15 ms (where the
additional 5 ms allows for some buffer).
And when that much time has elapsed, Cassandra would
kill the query execution automatically.
If this is not possible in Cassandra currently, is there any
chance we can do it outside of Cassandra, e.g.
a shell script that monitors such long-running queries
(through the users table, etc.) and kills the
OS thread responsible for that query? (This looks unsafe,
though, as it might leave the DB in an inconsistent
state.)
We are trying this as a proactive measure to safeguard
our clusters from any rogue queries fired accidentally
or maliciously.
Thanks!