On Tue, Nov 12, 2024 at 1:35 AM Peter Geoghegan wrote:
> On Mon, Nov 11, 2024 at 12:41 AM yudhi s wrote:
> > So it means it will ensure no duplication happens for ID values, but
> > still we are seeing "duplicate key" error. So what is the possible reason
> > here or are we encountering any buggy behaviour here?
On Tue, Nov 12, 2024 at 3:38 PM yudhi s wrote:
> Can you share your thoughts on how exactly this merge query can possibly
> cause the duplicate key error?
MERGE doesn't take any special precautions to avoid such unique
violations. ON CONFLICT does. It really is that simple.
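To make that contrast concrete, here is a minimal sketch. The table and
column names are invented for illustration and are not taken from the
original query:

-- Toy table for illustration only.
CREATE TABLE IF NOT EXISTS tab (id int PRIMARY KEY, val text);

-- MERGE (v15+): if a concurrent session inserts the same id after MERGE
-- takes its snapshot, the WHEN NOT MATCHED arm can still fail with a
-- unique violation.
MERGE INTO tab t
USING (VALUES (1, 'x')) AS s(id, val)
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);

-- INSERT ... ON CONFLICT (9.5+): a concurrent insert of the same id is
-- turned into the DO UPDATE arm instead of raising an error.
INSERT INTO tab (id, val)
VALUES (1, 'x')
ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val;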
On Mon, Nov 11, 2024 at 1:57 PM Achilleas Mantzios - cloud <
a.mantz...@cloud.gatewaynet.com> wrote:
>
> > WITH source_data (col1, col2, col3...col29) AS (VALUES ($1,
> > $2::date, $3::timestamptz, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13,
> > $14, $15, $16, $17, $18, $19, $20, $21, $22, $23,
...on one of the
columns, i.e. ID, which is the leading column of the PK. So it means it
will ensure no duplication happens for ID values, but still we are
seeing "duplicate key" error. So what is the possible reason here or
are we encountering any buggy behaviour here?
Hi!
> I just meant a regular update (which might impact 0 rows) and then
> insert (not exists) like you are doing already.
This causes a duplicate key exception if another process adds the same row
to the table at the same time.
> --transaction still ways. Should manual locking be used or is there a
> better method?
I just meant a regular update (which might impact 0 rows) and then insert
(not exists) like you are doing already.
> --transaction still ways. Should manual locking be used or is there a
> better method?
I don't follow what you mean.
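For reference, a minimal sketch of that update-then-insert pattern. The
table and key value are taken from the script quoted at the bottom of this
thread; the CREATE TABLE and the logintime column are placeholders, since
the real table definition is never shown here:

-- Hypothetical stand-in for the real session table.
CREATE TABLE IF NOT EXISTS session (
    workplace text PRIMARY KEY,
    logintime timestamptz
);

-- Step 1: plain update; touches 0 rows when the key is not present yet.
UPDATE session
   SET logintime = now()
 WHERE workplace = 'WIN-N9BSKUNKBC8';

-- Step 2: insert only while the key is still missing.
INSERT INTO session (workplace, logintime)
SELECT 'WIN-N9BSKUNKBC8', now()
 WHERE NOT EXISTS (
       SELECT 1 FROM session WHERE workplace = 'WIN-N9BSKUNKBC8');

-- Under READ COMMITTED two sessions can both pass the NOT EXISTS check
-- before either commits, so one of them can still get the duplicate key
-- error, which is the failure described above.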
Hi!
> Why not just do a plain update, relying on row level locking to serialize
> requests properly, and then just do an insert where not exists? Is there
> value in doing the delete? I don't see it.
This is an option. How to do update+insert in 9+ in SQL? Or should a
plpgsql procedure be created for this?
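One way to do it on a pre-9.5 server is the retry loop shown in the
PostgreSQL documentation's UPDATE/INSERT exception example. A rough
adaptation follows; the function name and the logintime column are made
up, and it assumes the hypothetical session table sketched above:

CREATE OR REPLACE FUNCTION upsert_session(p_workplace text,
                                          p_logintime timestamptz)
RETURNS void LANGUAGE plpgsql AS
$$
BEGIN
    LOOP
        -- Try the update first.
        UPDATE session SET logintime = p_logintime
         WHERE workplace = p_workplace;
        IF FOUND THEN
            RETURN;
        END IF;
        -- No row yet: try to insert it.  If another session wins the race,
        -- catch the unique violation and go around the loop again.
        BEGIN
            INSERT INTO session (workplace, logintime)
            VALUES (p_workplace, p_logintime);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, loop back and retry the UPDATE
        END;
    END LOOP;
END;
$$;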
Why not just do a plain update, relying on row level locking to serialize
requests properly, and then just do an insert where not exists? Is there
value in doing the delete? I don't see it.
Note: the ON CONFLICT clause is supported from 9.5 onwards, and the 9.x
series is already past EOL. Upgrading to at least v10 is recommended.
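On 9.5 or later the whole dance collapses into one statement. A sketch
using the same hypothetical session/logintime definition as above:

INSERT INTO session (workplace, logintime)
VALUES ('WIN-N9BSKUNKBC8', now())
ON CONFLICT (workplace) DO UPDATE
    SET logintime = EXCLUDED.logintime;
-- Concurrent executions serialize on the conflicting key instead of
-- raising a duplicate key error.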
...user name. In this case one row should remain.
I can just ignore the duplicate key error on commit, but maybe there is some
better way not to cause the error.
Andrus.
On 3/3/21 10:14 PM, Andrus wrote:
Hi!
The error says something else is inserting/updating using that key value. So
obviously your script is not catching all the conflicts.
> At this point your best bet is to monitor the Postgres log and see
> what else is happening at the time of the error. I'm guessing you will
> find another process working
Explicitly setting a serializable transaction would probably eliminate the
duplicate key error.
There's no free lunch, though: you'd have to handle the blocking.
--
Angular momentum makes the world go 'round.
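A sketch of what that suggestion might look like for the script in this
thread, using the same hypothetical session columns as above. Note the
trade-off: serializable transactions swap the duplicate key error for
serialization failures that the client has to retry:

BEGIN ISOLATION LEVEL SERIALIZABLE;
DELETE FROM session WHERE workplace = 'WIN-N9BSKUNKBC8';
INSERT INTO session (workplace, logintime)
VALUES ('WIN-N9BSKUNKBC8', now());
COMMIT;
-- A concurrent transaction doing the same thing may block and then fail
-- with SQLSTATE 40001 (serialization_failure); the application should
-- catch that and simply re-run the whole transaction.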
On 3/3/21 11:59 AM, Andrus wrote:
Hi!
> And if that name already exists there would be a duplicate key error.
The name cannot exist: the first delete deletes its value. Tables are not
changed by an external process during the transaction. As double insurance,
the insert performs an additional existence check and adds the row only if
the key does not exist.
Hi!
> It sounds like this should be re-written as an 'insert on conflict do
> update' statement.
It should work in Postgres 9 also. ON CONFLICT is not available in
Postgres 9.
This code does not insert duplicate key values, so why does it throw the
error? You can create a testcase to run this code from multiple sessions.
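A rough testcase along those lines, run by hand in two psql sessions,
again using the hypothetical session/logintime definition sketched
earlier and starting from an empty table:

-- Session 1:
BEGIN;
DELETE FROM session WHERE workplace = 'WIN-N9BSKUNKBC8';   -- deletes 0 rows
INSERT INTO session (workplace, logintime)
VALUES ('WIN-N9BSKUNKBC8', now());
-- keep this transaction open for a moment

-- Session 2, started before session 1 commits:
BEGIN;
DELETE FROM session WHERE workplace = 'WIN-N9BSKUNKBC8';   -- also 0 rows
INSERT INTO session (workplace, logintime)
VALUES ('WIN-N9BSKUNKBC8', now());
-- waits on session 1's uncommitted row; once session 1 commits, it fails:
-- ERROR:  duplicate key value violates unique constraint "session_pkey"

-- Session 1:
COMMIT;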
Hi!
It sounds like this should be re-written as an 'insert on conflict do update'
statement.
On 3/3/21 8:08 AM, Andrus wrote:
Hi!
> There is no other process inserting into this table?
There may be other processes on this server trying to insert the same
primary key value (the server name).
And if that name already exists there would be a duplicate key error.
Last inserted row data should remain.
On 3/3/21 7:23 AM, Andrus wrote:
Hi!
Sometimes the duplicate key error
duplicate key value violates unique constraint "session_pkey"
Key (workplace)=(WIN-N9BSKUNKBC8 ) already exists.
occurs in this script:
delete from session where workplace='WIN-N9BSKUNKBC8' ;
INSERT INTO session (workplace