Hi Team,
Hope you are doing well!
We are doing performance testing of our applications. During testing, our
Postgres DB crashed with the error "checkpointer was killed by signal 9".
When we checked the system logs, we found it was an OOM issue.
We have PostgreSQL on an independent node with 32 GB RAM, multiple
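A common culprit for OOM kills like this is per-backend memory: work_mem is allocated per sort or hash operation in each backend, so many concurrent sessions can together exceed the node's RAM. As a minimal sketch only, here are illustrative (assumed, not recommended) settings for a dedicated 32 GB node:

    -- Illustrative settings for a dedicated 32 GB PostgreSQL node (assumed values).
    ALTER SYSTEM SET shared_buffers = '8GB';          -- roughly 25% of RAM; requires a restart
    ALTER SYSTEM SET work_mem = '16MB';               -- allocated per sort/hash node, per backend
    ALTER SYSTEM SET maintenance_work_mem = '1GB';
    SELECT pg_reload_conf();                          -- reloads parameters that do not need a restart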
Folks, thanks everyone for the valuable input; I think I more or less
understand now what the options are for my particular problem.
On Wed, Feb 26, 2025, at 17:14, Peter Geoghegan wrote:
> On Wed, Feb 26, 2025 at 10:40 AM wrote:
> > Does this mean that it is not possible to come up with a plan
Hi,
Here are some observations.
On Mon, Mar 17, 2025 at 09:19, wrote:
> > PostgreSQL has a lot of overhead per row.
>
> Okay, thanks. I'm not actually too worried about this since in my
> scenario, each row is about 1.5 kB, so the % overhead is negligible.
>
> It is probably not the lookup, but the *modification* of the index that is slow.
kimaidou writes:
> I have seen that this DECLARE has bad perf compared to a simple SQL query :
> Simple SQL query: https://explain.dalibo.com/plan/042bc4dc2449adfe (96 ms)
> DECLARE CURSOR for the same query: https://explain.dalibo.com/plan/bh83fc0db500a79g# (171,031 ms !!)
Rai
Hi list!
I have a simple but big spatial table with approximately 93,000,000 rows.
I use QGIS, the open-source GIS software, to display this data. To fetch the
polygons to draw on the QGIS map, QGIS launches a first DECLARE CURSOR query,
then fetches the data 2000 rows at a time.
I have seen that this DECLARE has bad perf compared to a simple SQL query.
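For context, a minimal sketch of the cursor pattern described above; the table, columns, and bounding-box filter are hypothetical (PostGIS assumed for the spatial predicate). Note that the planner optimizes a cursor for fast startup (see cursor_tuple_fraction), which can yield a very different plan than the plain query:

    BEGIN;

    -- Hypothetical spatial table and filter; QGIS-style access fetches in batches.
    DECLARE qgis_cur CURSOR FOR
        SELECT id, geom
        FROM big_spatial_table
        WHERE geom && ST_MakeEnvelope(0, 0, 10, 10, 4326);

    FETCH 2000 FROM qgis_cur;   -- repeated until no rows remain
    FETCH 2000 FROM qgis_cur;

    CLOSE qgis_cur;
    COMMIT;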
On Mon, Mar 17, 2025 at 4:19 AM wrote:
> Can you help me understand why performing 3 million lookups on a b-tree
> index with all pages cached in memory takes so long?
It's not the lookup, it's writing the 3 million rows (and in this
particular upsert case, deleting 3 million, then inserting 3 million).
On 2025-Mar-13, bill.po...@ymail.com wrote:
> I need to perform a merge (INSERT ... ON CONFLICT ... DO UPDATE) on
> the data, so sadly I cannot use COPY.
>
> I have discovered that for some reason, performing the original insert
> without the ON CONFLICT statement is twice as fast as performing the
> insert with ON CONFLICT.
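For reference, a minimal sketch of the kind of merge being discussed; the table, columns, and conflict target below are hypothetical:

    -- Hypothetical target table with a primary key to conflict on.
    CREATE TABLE IF NOT EXISTS measurements (
        id         bigint PRIMARY KEY,
        payload    text NOT NULL,
        updated_at timestamptz NOT NULL DEFAULT now()
    );

    -- Upsert: insert new rows, or update the existing row on a key conflict.
    INSERT INTO measurements (id, payload)
    VALUES (1, 'value-1'), (2, 'value-2')
    ON CONFLICT (id) DO UPDATE
    SET payload    = EXCLUDED.payload,
        updated_at = now();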
Thanks Renan! Reducing the fill factor has improved my update performance and I
am now seeing the same time for updates as inserts.
I look forward to any advancements PostgreSQL may make in the future to improve
the performance of bulk DML operations. It would be amazing if they could be
parallelized.
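For reference, a minimal sketch of the fillfactor change mentioned above; the table name and percentage are assumptions. A lower fillfactor leaves free space on each heap page so updated row versions can stay on the same page (HOT updates):

    -- Leave 30% of each heap page free for new row versions (hypothetical table).
    ALTER TABLE measurements SET (fillfactor = 70);

    -- Existing pages are only repacked when the table is rewritten.
    VACUUM FULL measurements;   -- takes an exclusive lock; run it off-peak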
> PostgreSQL has a lot of overhead per row.
Okay, thanks. I'm not actually too worried about this since in my scenario,
each row is about 1.5 kB, so the % overhead is negligible.
> It is probably not the lookup, but the *modification* of the index that is
> slow.
Yes that makes sense for the o
Thanks Laurenz for taking the time to look at this.
> That is unavoidable, because PostgreSQL adds a new version for each row to
> the table. To avoid that kind of bloat, you'd have to update in smaller
> batches and run VACUUM between those to free the "dead" row versions.
Yes, I would expect
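A minimal sketch of the batching approach Laurenz describes; the table, predicate, and batch boundaries are hypothetical. Each batch commits and is vacuumed before the next one runs, so dead row versions are reclaimed instead of accumulating:

    -- Batch 1 (hypothetical key range).
    UPDATE measurements
    SET payload = payload || '-v2'
    WHERE id BETWEEN 1 AND 100000;

    VACUUM measurements;   -- reclaim dead row versions from the batch above

    -- Batch 2, and so on for the remaining key ranges.
    UPDATE measurements
    SET payload = payload || '-v2'
    WHERE id BETWEEN 100001 AND 200000;

    VACUUM measurements;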