Make sure to run ANALYZE on the entire database; using vacuumdb with
--analyze-only may be faster.
Also, check for invalid indexes.
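A minimal sketch of both checks (connection options and the database name
`mydb` are placeholders; adjust for your environment):

```
# Re-collect planner statistics for all databases, in parallel:
vacuumdb --all --analyze-only --jobs=4

# Find invalid indexes, e.g. left behind by a failed CREATE INDEX CONCURRENTLY:
psql -d mydb -c "SELECT indexrelid::regclass AS index_name,
                        indrelid::regclass  AS table_name
                 FROM pg_index
                 WHERE NOT indisvalid;"
```

An invalid index is never used by the planner but still has to be maintained,
so dropping or rebuilding it is usually the fix.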
Efrain J. Berdecia 

    On Wednesday, November 20, 2024 at 08:02:36 AM EST, Daniel Gustafsson 
<dan...@yesql.se> wrote:  
 
 > On 20 Nov 2024, at 11:50, Sreejith P <sreej...@lifetrenz.com> wrote:

> We are using PostgreSQL 10 in our production database.  We have around 890 
> requests/s at peak time.

PostgreSQL 10 is well out of support and does not receive bugfixes or security
fixes; you should plan a migration to a supported version sooner rather than
later.

> 2 days back we applied some patches in the primary server and restarted. We 
> didn't do anything on the secondary server.

Patches to the operating system, postgres, another application?

> The next day, after 18 hours, all our queries from the secondary servers 
> started taking too much time.  Queries that were completing in 2 seconds 
> started taking 80 seconds. Almost all queries behaved the same way.
> 
> After half an hour of outage we restarted all db servers and the system went 
> back to normal.
> 
> Still we are not able to understand the root cause. We couldn't find any 
> error log or fatal errors.  During the incident, the disk on one of the read 
> servers was full. We couldn't see any replication lag or query cancellation 
> due to replication.

You say that all queries started doing sequential scans, is that an assumption
from queries being slow, or did you capture plans for the slow queries which
can be compared against "normal" production plans?
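One way to capture such plans (a sketch; the SELECT below is a made-up example
query, not one from your workload) is EXPLAIN (ANALYZE, BUFFERS) on a known
slow query, or enabling the auto_explain module so slow statements log their
plans automatically:

```sql
-- Enable auto_explain for the current session (assumes superuser);
-- log the actual plan of any statement running longer than 2 seconds:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '2s';
SET auto_explain.log_analyze = on;

-- Or capture a single plan directly (note: ANALYZE runs the query):
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;
```

Comparing a plan captured during an incident against a plan from normal
operation is what distinguishes "the planner chose a sequential scan" from
"the same plan simply ran slower".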

--
Daniel Gustafsson



  
