Thanks Bowen.
Any idea why cross_node_timeout is commented out by default? That seems
like a good option to enable, even going by the documentation in cassandra.yaml:
# If disabled, replicas will assume that requests
# were forwarded to them instantly by the coordinator, which means that
# under overload conditions we will waste that much extra time processing
# already-timed-out requests.
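(Enabling it is just a matter of uncommenting that setting in cassandra.yaml,
roughly as below; the exact wording and default differ a little between
versions, and the same config block warns that it needs NTP-synchronized
clocks on all nodes, since replicas compare the coordinator's timestamp
against their own:)

    cross_node_timeout: true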
That will depend on whether you have cross_node_timeout enabled.
However, I have to point out that setting the timeout to 15ms is perhaps not a
good idea; JVM GC pauses can easily cause a lot of timeouts.
On 12/10/2021 18:20, S G wrote:
ok, when a coordinator node sends a timeout to the client, does it mean all
the replica nodes have stopped processing that specific query too?
Or is it just the coordinator node that has stopped waiting for the
replicas to return a response?
On Tue, Oct 12, 2021 at 10:12 AM Jeff Jirsa wrote:
It sends an exception to the client; it doesn't sever the connection.
On Tue, Oct 12, 2021 at 10:06 AM S G wrote:
Do the timeout values only kill the connection with the client or send an
error to the client?
Or do they also kill the corresponding query execution happening on the
Cassandra servers (coordinator, replicas, etc.)?
On Tue, Oct 12, 2021 at 10:00 AM Jeff Jirsa wrote:
The read and write timeout values do this today.
https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L920-L943
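(The settings behind that link are the server-side per-operation timeouts in
cassandra.yaml; roughly, with their usual defaults -- names and values may
vary slightly by version:)

    read_request_timeout_in_ms: 5000
    range_request_timeout_in_ms: 10000
    write_request_timeout_in_ms: 2000
    counter_write_request_timeout_in_ms: 5000
    cas_contention_timeout_in_ms: 1000
    truncate_request_timeout_in_ms: 60000
    request_timeout_in_ms: 10000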
On Tue, Oct 12, 2021 at 9:53 AM S G wrote:
Hello,
Is there a way to stop long-running queries in Cassandra (versions 3.11.x
or 4.x)?
The use-case is to have some kind of circuit breaker based on query time
that has exceeded the client's SLAs.
Example: if the server response is useless to the client after 10 ms, then we
could have a *query_k
The most likely explanation is that repair failed and you didn't notice.
Or that you didn't actually repair every host / every range.
Which version are you using?
How did you run repair?
On Tue, Oct 12, 2021 at 4:33 AM Isaeed Mohanna wrote:
Hi,
You could try to run a full repair over a short subrange containing data
missing from one replica. It should take just a couple of minutes and will
prove whether your repair failed to finish.
Dmitrii Saprykin
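(A subrange full repair takes explicit start/end tokens; something along
these lines, where the keyspace, table and token values are placeholders
you'd pick from a range known to contain the missing rows:)

    nodetool repair -full -st <start_token> -et <end_token> <keyspace> <table>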
On Tue, Oct 12, 2021 at 7:54 AM Bowen Song wrote:
I see. In that case, I suspect the repair wasn't fully successful. Try
repairing the newly joined node again, and make sure it actually finishes
successfully.
On 12/10/2021 12:23, Isaeed Mohanna wrote:
Hi,
Yes, I am sacrificing consistency to gain higher availability and faster speed,
but my problem is not with newly inserted data that is missing for a very
short period of time; my problem is that the data that was there before the RF
change still does not exist on all replicas, even after repair.
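(For context, the usual sequence after raising the replication factor is to
alter the keyspace and then run a full -- not incremental -- repair on every
node, so that pre-existing data gets streamed to the new replicas. Keyspace
name, datacenter and RF below are placeholders:)

    ALTER KEYSPACE my_keyspace
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};

    nodetool repair -full my_keyspace    # run on each node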