Hi Joe, it looks like "PT2M" may refer to a timeout value that could be set by your Spark job's initialization of the client. I don't see a string matching this in the Cassandra codebase itself, but I do see that it is parseable as an ISO-8601 Duration:

```
jshell> java.time.Duration.parse("PT2M").getSeconds()
$7 ==> 120
```

The server-side log you see is likely an indicator of the same timeout from the server's perspective.
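If the client underneath your Spark job is the DataStax Java driver 4.x, a timeout like this is usually expressed as `basic.request.timeout` in the driver's HOCON configuration, and the driver reports it as a `java.time.Duration` (hence the "PT2M" rendering). A sketch, assuming driver 4.x; the two-minute value here is an assumption chosen to match PT2M, not something confirmed from your setup:

```
# application.conf (DataStax Java driver 4.x) -- sketch; value is an assumption
datastax-java-driver {
  # HOCON "2 minutes" is surfaced by the driver as Duration PT2M
  basic.request.timeout = 2 minutes
}
```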
You might consider checking logs from the replicas for dropped reads, queries aborted for scanning more tombstones than the configured maximum, or other conditions indicating overload or an inability to serve a response.

If you're running a Spark job, I'd recommend using the DataStax Spark Cassandra Connector, which distributes your query across executors that each address a slice of the token range. Those slices land on individual replica sets, avoiding the scatter-gather behavior that can occur when using the Java driver alone.

Cheers,
– Scott

On Feb 3, 2022, at 11:42
AM, Joe Obernberger <joseph.obernber...@gmail.com> wrote:

Hi all - using a Cassandra 4.0.1 and a Spark job running against a large table (~8 billion rows), I'm getting this error on the client side:

Query timed out after PT2M

On the server side I see a lot of messages like:

DEBUG [Native-Transport-Requests-39] 2022-02-03 14:39:56,647 ReadCallback.java:119 - Timed out; received 0 of 1 responses

The same code works on another table in the same Cassandra cluster that is about 300 million rows and completes in about 2 minutes. The cluster is 13 nodes.

I can't find what PT2M means. Perhaps the table needs a repair? Other ideas?

Thank you!

-Joe
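For reference, the connector approach Scott recommends is largely configuration-driven: the token-range splitting happens automatically once the connector is on the classpath, and scan behavior is tuned through Spark properties. A hedged sketch of relevant `spark-submit` settings; the host name and values below are illustrative assumptions, not taken from this thread:

```
# spark-defaults.conf or --conf flags -- sketch; values are assumptions
spark.cassandra.connection.host         cassandra-host-1
# Smaller splits mean more, shorter token-range scans per executor
spark.cassandra.input.split.sizeInMB    64
# Rows fetched per round trip while scanning a split
spark.cassandra.input.fetch.sizeInRows  1000
```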
