I think your concern is that 20 min + 30 min does not equal 3 hours.
It might be natural to think the contention would, at max, be 50 min x 2 (1
hr 40 min).
So what's going on?
It seems disk I/O is a primary suspect, since you're asking about an iostat
replacement inside of Postgres.
If it is due to d
Check that the index creation has NULLS FIRST (or LAST) on both indexes that
are used. Use EXPLAIN to see which indexes are used.
See docs for create index:
https://www.postgresql.org/docs/current/sql-createindex.html
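As a minimal sketch (the table and column names here are made up, not from
your schema), the idea is to make the NULLS ordering explicit on both indexes
and then confirm with EXPLAIN that the planner actually picks them up:

-- hypothetical names, just to show the syntax
CREATE INDEX idx_orders_created_at ON orders (created_at DESC NULLS LAST);
CREATE INDEX idx_events_created_at ON events (created_at DESC NULLS LAST);

-- verify which indexes the planner chooses
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.id, e.id
FROM orders o
JOIN events e ON e.created_at = o.created_at
ORDER BY o.created_at DESC NULLS LAST;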
On Thu, May 11, 2023, 7:30 AM Marc Millas wrote:
> Hi,
>
> I keep on investigating o
Do you mean like this?
postgres=# \set v_embed %:v_ssn%
postgres=# \echo :v_embed
%345%
postgres=# SELECT * FROM employee WHERE ssn LIKE :'v_embed';
    ssn    |   name
-----------+----------
123456789 | John Doe
(1 row)
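The :'v_embed' form tells psql to interpolate the variable as a single-quoted
SQL literal, so the %...% pattern reaches the server already quoted.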
On Fri, Jan 13, 2023 at 8:12 AM Ron wrote:
>
> Pg 12
>
> I need to pas
The logs were helpful. You may want to look at the statements around the
errors, as there may be more detail there, such as the SQL statement
associated with the error.
Deadlocks are an indicator that the client code needs to be examined for
improvement. See
https://www.cybertec-postgresql.com/en/postgres
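As a minimal illustration (table and values are hypothetical), the classic
case is two sessions locking the same rows in opposite order; the usual fix is
to make every code path take its locks in a consistent order:

-- session 1
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;  -- blocks if session 2 holds id = 2
COMMIT;

-- session 2, running concurrently
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 2;
UPDATE accounts SET balance = balance + 100 WHERE id = 1;  -- deadlock: each session waits on the other
COMMIT;

-- fix: always update rows in the same order (e.g. ascending id)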
If you get a chance, showing the `top` output might be useful as well. For
instance, if you are low on memory, it can slow down the allocation of
buffers. Another thing to check is `iostat -x -y`, looking at the disk
utilization %. This is an indicator, but not definitive, of how much disk
access is going on.
A select count(*) from the table is probably not using the index that your
insert/select would, so I would not use it as a test for performance.
If customer_backup has an index, the insert-select will be
performance-limited by updating that index.
If you can do a *create table customer_backup* as
*select
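Roughly something like this (the SELECT list and any filter are placeholders,
since I don't know your actual query): a brand-new table has no index to
maintain during the load, and you add the index afterwards.

CREATE TABLE customer_backup AS
SELECT *
FROM customer;               -- whatever source query you actually use

-- build any indexes after the data is loaded
-- CREATE INDEX ... ON customer_backup (...);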
What if a person is a member of more than one Org? Consider a person_org
table.
I see mention of a site in the person table. It may also be the case that
you need a site table.
Often, you want a table for the Person and a separate Contact (or Address)
table. This allows for having more than
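A rough sketch of that shape (names and columns are illustrative only):

-- a junction table lets one person belong to several orgs
CREATE TABLE person (
    person_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name      text NOT NULL
);

CREATE TABLE org (
    org_id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    org_name text NOT NULL
);

CREATE TABLE person_org (
    person_id bigint NOT NULL REFERENCES person,
    org_id    bigint NOT NULL REFERENCES org,
    PRIMARY KEY (person_id, org_id)
);

-- site and contact/address details would get their own tables in the same way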
Anyone know if this can be remedied?
ERROR: out of overflow pages in hash index
This occurred while waiting for a hash index to be created on a 1,580
GB table (Postgres 11.7). We have > 30 TB of space free.
Here's the line where I think it gets triggered.
https://github.com/postgres/post