I don't recall the details, but I know v12 included significant
enhancements that defer locking relations until later in the process, so
that when a query targets relatively few of the partitions it can be a
major performance boost.
> pg_locks shows exactly two types of locks: "heavy" locks (which include
> not only relation locks but also object, tuple, extension, transaction and
> advisory locks), and predicate locks (SIReadLock), which are limited by a
> separate configuration parameter (and are not relevant in this case)
> Is it expected that a lock on a partitioned table will take out 2 locks
> per child regardless of the number of children which are excluded at plan
> time?
>
> Depends on the details of your query, and on which PG version you're
> using, but it's by no means surprising for each child table to
It's typically recommended that you use the pg_dump of the same version as
the pg_restore. In your case, the v12 pg_dump *will* successfully read from
a 10.1 database.
On 12/19/19 9:24 AM, Sar wrote:
for pg_dump it is: PostgreSQL 10.1, compiled by Visual C++ build 1800, 64-bit
for pg_restore it is the same machine, as well as the newest version of
PostgreSQL 12
> (Or you could use serializable mode, but that feels like using a hammer to
> swat a fly.)
Do you mean the serializable transaction isolation level? Because that doesn't
work either. Here (finally) is a tiny repro case. You'll need 2 psql sessions
(S1, S2):
S1: CREATE TABLE t (id integer);
S1
for pg_dump it is: PostgreSQL 10.1, compiled by Visual C++ build 1800, 64-bit
for pg_restore it is the same machine, as well as the newest version of
PostgreSQL 12, so I assume the error is in pg_dump.
selects work no problem; I even dumped one table with COPY and it doesn't
raise any issue. The other
On 19/12/2019 13:58, Maksim Milyutin wrote:
On 19.12.2019 14:04, Andrey Borodin wrote:
Hi!
Hi!
FYI, this topic was up recently in -hackers
https://www.postgresql.org/message-id/caeet0zhg5off7iecby6tzadh1moslmfz1hlm311p9vot7z+...@mail.gmail.com
I cannot figure out a proper way to implement safe HA upsert.
> "Peter" == Peter Eisentraut writes:
Peter> max_locks_per_transaction only affects relation locks (also
Peter> known as heavyweight locks), but pg_locks also shows other
Peter> kinds of locks.
pg_locks shows exactly two types of locks: "heavy" locks (which include
not only relation locks
On 19.12.2019 14:04, Andrey Borodin wrote:
Hi!
Hi!
FYI, this topic was up recently in -hackers
https://www.postgresql.org/message-id/caeet0zhg5off7iecby6tzadh1moslmfz1hlm311p9vot7z+...@mail.gmail.com
I cannot figure out a proper way to implement safe HA upsert. I will be very
grateful if
James Sewell writes:
> Is it expected that a lock on a partitioned table will take out 2 locks per
> child regardless of the number of children which are excluded at plan time?
Depends on the details of your query, and on which PG version you're
using, but it's by no means surprising for each child table to
On 19/12/2019 12:25, Andrey Borodin wrote:
Hi Fabio!
Thanks for looking into this.
On 19 Dec 2019, at 17:14, Fabio Ugo Venchiarutti
wrote:
You're hitting the CAP theorem ( https://en.wikipedia.org/wiki/CAP_theorem )
You cannot do it with fewer than 3 nodes, as the moment you set
Hi Fabio!
Thanks for looking into this.
> On 19 Dec 2019, at 17:14, Fabio Ugo Venchiarutti
> wrote:
>
>
> You're hitting the CAP theorem ( https://en.wikipedia.org/wiki/CAP_theorem )
>
>
> You cannot do it with fewer than 3 nodes, as the moment you set your standby
> to synchronous to
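The setup under discussion can be sketched in configuration terms; this is a
minimal illustration, with standby names s1 and s2 made up for the example:

```sql
-- Require at least one of two standbys to confirm each commit.
-- With only a single synchronous standby, losing it blocks all writes,
-- which is why fewer than 3 nodes cannot give both safety and availability.
ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (s1, s2)';
SELECT pg_reload_conf();
```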
On 19/12/2019 11:04, Andrey Borodin wrote:
Hi!
I cannot figure out a proper way to implement safe HA upsert. I will be very
grateful if someone would help me.
Imagine we have a primary server after failover. It is network-partitioned. We
are doing INSERT ON CONFLICT DO NOTHING; that eventually timed out.
On 2019-12-19 10:33, James Sewell wrote:
I have a system which is giving me the log hint to increase
max_locks_per_transaction. This is somewhat expected due to the workload
- but what I can't understand is the numbers:
Based on the docs I calculate my theoretical max locks as:
max_locks_per_transaction * (max_connections + max_prepared_transactions)
Hi!
I cannot figure out a proper way to implement safe HA upsert. I will be very
grateful if someone would help me.
Imagine we have a primary server after failover. It is network-partitioned. We
are doing INSERT ON CONFLICT DO NOTHING; that eventually timed out.
az1-grx88oegoy6mrv2i/db1 M > WITH n
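The statement in question follows the usual upsert shape; here is a minimal
sketch with a made-up table (the original CTE is truncated above, so this is
only an illustration of the pattern, not the actual query):

```sql
-- Hypothetical target table for the upsert.
CREATE TABLE IF NOT EXISTS jobs (id bigint PRIMARY KEY, payload text);

-- On a network-partitioned primary this statement can time out even after
-- the row was durably written locally, leaving the caller unsure whether
-- the insert took effect -- the ambiguity this thread is about.
INSERT INTO jobs (id, payload)
VALUES (1, 'hello')
ON CONFLICT (id) DO NOTHING;
```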
Is it expected that a lock on a partitioned table will take out 2 locks per
child regardless of the number of children which are excluded at plan time?
For example I can select count(*) from a table with 3500 partitions in
transaction, and see 7000 AccessShareLocks show up till I finish the
transaction.
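The lock inflation described above is easy to observe from a second session;
this is a minimal sketch (the table name is made up for illustration):

```sql
-- Session 1: hold a transaction open across a scan of a partitioned table.
BEGIN;
SELECT count(*) FROM measurements;  -- hypothetical partitioned table

-- Session 2: count the relation locks now held; with many partitions this
-- roughly tracks the number of children touched, not the number pruned.
SELECT count(*)
FROM pg_locks
WHERE locktype = 'relation'
  AND mode = 'AccessShareLock';

-- Session 1: locks are released only at transaction end.
COMMIT;
```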
On Thu, Dec 19, 2019 at 9:20 AM Simon Riggs wrote:
> On Wed, 18 Dec 2019 at 23:13, Matthew Phillips
> wrote:
>
>
>> With the current READ UNCOMMITTED discussion happening on pgsql-hackers
>> [1], it did raise a question/use-case I recently encountered and could not
>> find a satisfactory solution for.
Hello all,
I have a system which is giving me the log hint to increase
max_locks_per_transaction. This is somewhat expected due to the workload -
but what I can't understand is the numbers:
Based on the docs I calculate my theoretical max locks as:
max_locks_per_transaction * (max_connections + max_prepared_transactions)
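That calculation can be run directly against the server's own settings; a
sketch of the arithmetic from the formula above:

```sql
-- Upper bound on objects the shared lock table can track at once:
-- max_locks_per_transaction * (max_connections + max_prepared_transactions)
SELECT current_setting('max_locks_per_transaction')::int
     * (current_setting('max_connections')::int
      + current_setting('max_prepared_transactions')::int) AS lock_table_slots;
```

With the defaults (64, 100, 0) that works out to 6,400 trackable locks, which
a single in-transaction scan taking 7000 AccessShareLocks would exceed on its
own.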
On Wed, 18 Dec 2019 at 23:13, Matthew Phillips
wrote:
> With the current READ UNCOMMITTED discussion happening on pgsql-hackers
> [1], it did raise a question/use-case I recently encountered and could not
> find a satisfactory solution for. If someone is attempting to poll for new
> records on a