On Fri, 26 Jul 2024 at 19:55, Francisco Olarte wrote:
> " -> Index Scan using
> ""cl_student_semester_subject_IX3"" on cl_student_semester_subject p
> (cost=0.55..8.57 rows=1 width=60) (actual time=0.033..55.702
> rows=41764 loops=1)"
> "Index Cond: (((companycode
Hi Siva,
pg_dump taken from client_db seems not to have transferred full data to
client_test.
When I examine the query plans, the row counts scanned are extremely
different. For example, there is 1 row in the cl_level table on client_test,
while client_db shows that around 300,000 records were scanned.
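A quick way to check whether the restored database actually has the data, and whether its statistics are fresh, is to compare the planner's estimate with a real count (a sketch, using the cl_level table mentioned above):

```sql
-- Run on both client_db and client_test.
-- A large gap between the two numbers means stale statistics
-- (or, as suspected here, missing data after the restore).
SELECT relname, reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'cl_level';

SELECT count(*) AS actual_rows FROM cl_level;

-- If the data is present but the estimate is wrong, refresh the stats:
ANALYZE cl_level;
```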
Hello:
On Fri, 26 Jul 2024 at 07:31, sivapostg...@yahoo.com
wrote:
...
> Took backup (pg_dump) of first database (client_db) and restored the database
> as second database (client_test).
...
> The query when run against DB1 takes around 7 min 32 seconds.
> The same query when run against DB2 tak…
Hi Again,
I have not gone through your explain plans for both; would it be possible
for you to take a diff of the two plans? It will give some insight into how
things are going and where the problem is.
Alternatively, I would suggest running REINDEX and then the ANALYZE command
to regenerate stats.
Regards
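A sketch of the suggested maintenance on the restored database (the database name is taken from the thread):

```sql
-- Rebuild all indexes in the restored database.
-- REINDEX DATABASE must be run while connected to that database.
REINDEX DATABASE client_test;

-- Regenerate planner statistics for every table in the current database.
ANALYZE;
```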
I have only one instance of PG on that server, which means only one
postgresql.conf for both databases.
On Friday, 26 July, 2024 at 11:12:34 am IST, Muhammad Ikram
wrote:
Hi,
Could you perform diff on postgresql.conf file to see whether values are same
for work_mem, shared_buffers, maintenance_work_mem and other related
parameters?
Hi,
Could you perform diff on postgresql.conf file to see whether values are
same for work_mem, shared_buffers, maintenance_work_mem and other related
parameters?
Regards,
Ikram
On Fri, Jul 26, 2024 at 10:31 AM sivapostg...@yahoo.com <
sivapostg...@yahoo.com> wrote:
> Hello,
> Using PG 11.11,
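Since both databases live in one cluster here, postgresql.conf is necessarily shared; the settings worth comparing can also be checked from SQL rather than by diffing files (the parameter list below is illustrative):

```sql
-- Current values of the parameters mentioned above, with their source
-- (configuration file, default, per-database override, etc.).
SELECT name, setting, source
FROM pg_settings
WHERE name IN ('work_mem', 'shared_buffers', 'maintenance_work_mem',
               'effective_cache_size', 'random_page_cost');

-- Even with one postgresql.conf, per-database or per-role overrides
-- (ALTER DATABASE ... SET) can make the two databases behave differently:
SELECT * FROM pg_db_role_setting;
```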
Hello,
Using PG 11.11, one PG cluster, Windows 2019 Server Standard, two
databases with identical data.
1. First DB: client_db
2. Second DB: client_test
Took a backup (pg_dump) of the first database (client_db) and restored it
as the second database (client_test).
Query:
Select a.examname, a.r…
On Mon, 9 Dec 2019 at 21:05, saket bansal wrote:
> Thank you Michael. I re-wrote it and it does perform well. Modified query
> at:
>
>
> https://github.com/bansalsaket/PG_correlated_subquery_slowness/blob/master/Modified%20query%20-%20performs%20faster.txt
>
> Our app team is checking with their vendor whether this can be modified at
> the source code level.
Thank you Michael. I re-wrote it and it does perform well. Modified query
at:
https://github.com/bansalsaket/PG_correlated_subquery_slowness/blob/master/Modified%20query%20-%20performs%20faster.txt
Our app team is checking with their vendor whether this can be modified at
the source code level.
Thanks, Justin, for pointing this out.
More work for the optimizer for nothing; I will remove it.
On Mon, Dec 9, 2019 at 2:48 PM Justin wrote:
> Hi Saket
>
> The first filter condition seems to be duplicated; it appears this can be
> simplified from
>
> and ( pdtaltrelt0_.status_typ_dbky=102
>
Hi Saket
The first filter condition seems to be duplicated; it appears this can be
simplified from
and ( pdtaltrelt0_.status_typ_dbky=102
and ( pdtaltrelt0_.rule_status_typ_dbky is null )
or pdtaltrelt0_.status_typ_dbky in ( 19 )
or pdtaltrelt0_.status_typ_dbky in (20 )
I'd suggest rewriting your query to avoid ORs whenever possible. Is this
generated by an ORM, or subject to change with filters selected in the
application, or can you totally control it on the DB side?
It may be hugely more performant to simply rewrite this as (almost) the
same query twice, UNION ALL'd together.
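A sketch of that UNION ALL rewrite, using only the column names from the filter above (the table name behind the ORM alias pdtaltrelt0_ is not given in the thread, so it is a placeholder here):

```sql
-- Original OR-heavy filter. Note the operator precedence: AND binds
-- tighter than OR, so rows with status 19 or 20 are NOT restricted by
-- the rule_status_typ_dbky IS NULL condition.
--   status_typ_dbky = 102 AND rule_status_typ_dbky IS NULL
--   OR status_typ_dbky IN (19)
--   OR status_typ_dbky IN (20)

-- Rewritten so each branch can use an index on status_typ_dbky on its own:
SELECT t.*
FROM pdt_alt_rel t        -- hypothetical name for the table aliased pdtaltrelt0_
WHERE t.status_typ_dbky = 102
  AND t.rule_status_typ_dbky IS NULL
UNION ALL
SELECT t.*
FROM pdt_alt_rel t
WHERE t.status_typ_dbky IN (19, 20);
```

UNION ALL (rather than UNION) is deliberate: the branches are disjoint by construction, so the deduplication pass a plain UNION would add is wasted work.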
Hi Postgres Experts,
Please help me with a query tuning.
Postgres version: 11.5
This database has been migrated from Oracle 12c to Postgres. In Oracle the
query executes in 2-3 secs, but in Postgres it hangs forever. There are no
transactions at this time; I am stuck at the first run after migration.
My an…