Tomas Vondra wrote 2021-06-12 00:01:
> On 6/9/21 1:08 PM, Tomas Vondra wrote:
>> On 6/9/21 12:50 PM, Bharath Rupireddy wrote:
>>> On Wed, Jun 9, 2021 at 4:00 PM Tomas Vondra
>>> <tomas.von...@enterprisedb.com> wrote:
>>>> Hi,
>>>>
>>>> Here's a v2 fixing a silly bug with reusing the same variable in two
>>>> nested loops (worked for simple postgres_fdw cases, but "make check"
>>>> failed).
>>>
>>> I applied these patches and ran make check in the postgres_fdw contrib
>>> module, and I saw a server crash. Is it the same failure you were
>>> referring to above?
>>
>> Nope, that was causing an infinite loop. This is just a silly mistake on
>> my side - I forgot to replace the i/j variable inside the loop. Here's v3.
>>
>> regards
>
> FWIW I've pushed this, after improving the comments a little bit.
>
> regards
Hi.
It seems this commit
commit b676ac443b6a83558d4701b2dd9491c0b37e17c4
Author: Tomas Vondra <tomas.von...@postgresql.org>
Date: Fri Jun 11 20:19:48 2021 +0200
Optimize creation of slots for FDW bulk inserts
has broken batch inserts for partitions with unique indexes.
Earlier this case worked as expected, inserting 1000 tuples. Now it fails
with:
ERROR: duplicate key value violates unique constraint "p0_pkey"
DETAIL: Key (x)=(1) already exists.
CONTEXT: remote SQL command: INSERT INTO public.batch_table_p0(x,
field1, field2) VALUES ($1, $2, $3), ($4, $5, $6), ($7, $8, $9), ($10,
$11, $12), ($13, $14, $15), ($16, $17, $18), ($19, $20, $21), ($22, $23,
$24), ($25, $26, $27), ($28, $29, $30), ($31, $32, $33), ($34, $35,
$36), ($37, $38, $39), ($40, $41, $42), ($43, $44, $45), ($46, $47,
$48), ($49, $50, $51), ($52, $53, $54), ($55, $56, $57), ($58, $59,
$60), ($61, $62, $63), ($64, $65, $66), ($67, $68, $69), ($70, $71,
$72), ($73, $74, $75), ($76, $77, $78), ($79, $80, $81), ($82, $83,
$84), ($85, $86, $87), ($88, $89, $90), ($91, $92, $93), ($94, $95,
$96), ($97, $98, $99), ($100, $101, $102), ($103, $104, $105), ($106,
$107, $108), ($109, $110, $111), ($112, $113, $114), ($115, $116, $117),
($118, $119, $120), ($121, $122, $123), ($124, $125, $126), ($127, $128,
$129), ($130, $131, $132), ($133, $134, $135), ($136, $137, $138),
($139, $140, $141), ($142, $143, $144), ($145, $146, $147), ($148, $149,
$150), ($151, $152, $153), ($154, $155, $156), ($157, $158, $159),
($160, $161, $162), ($163, $164, $165), ($166, $167, $168), ($169, $170,
$171), ($172, $173, $174), ($175, $176, $177), ($178, $179, $180),
($181, $182, $183), ($184, $185, $186), ($187, $188, $189), ($190, $191,
$192), ($193, $194, $195), ($196, $197, $198), ($199, $200, $201),
($202, $203, $204), ($205, $206, $207), ($208, $209, $210), ($211, $212,
$213), ($214, $215, $216), ($217, $218, $219), ($220, $221, $222),
($223, $224, $225), ($226, $227, $228), ($229, $230, $231), ($232, $233,
$234), ($235, $236, $237), ($238, $239, $240), ($241, $242, $243),
($244, $245, $246), ($247, $248, $249), ($250, $251, $252), ($253, $254,
$255), ($256, $257, $258), ($259, $260, $261), ($262, $263, $264),
($265, $266, $267), ($268, $269, $270), ($271, $272, $273), ($274, $275,
$276), ($277, $278, $279), ($280, $281, $282), ($283, $284, $285),
($286, $287, $288), ($289, $290, $291), ($292, $293, $294), ($295, $296,
$297), ($298, $299, $300)
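
For reference, here is a condensed sketch of the failing scenario, distilled
from the attached regression test addition. It assumes the 'loopback' foreign
server used throughout the postgres_fdw tests; the attached patch uses four
hash partitions (mixing foreign and local ones), but two foreign partitions
are enough to show the shape of the setup:

ALTER SERVER loopback OPTIONS (ADD batch_size '100');

-- "remote" tables, each with a unique index
CREATE TABLE batch_table_p0 (x int, field1 text, field2 text,
    CONSTRAINT p0_pkey PRIMARY KEY (x));
CREATE TABLE batch_table_p1 (x int, field1 text, field2 text,
    CONSTRAINT p1_pkey PRIMARY KEY (x));

-- local partitioned table whose partitions are foreign tables
-- pointing at the tables above
CREATE TABLE batch_table (x int, field1 text, field2 text) PARTITION BY HASH (x);
CREATE FOREIGN TABLE batch_table_p0f PARTITION OF batch_table
    FOR VALUES WITH (MODULUS 2, REMAINDER 0)
    SERVER loopback OPTIONS (table_name 'batch_table_p0');
CREATE FOREIGN TABLE batch_table_p1f PARTITION OF batch_table
    FOR VALUES WITH (MODULUS 2, REMAINDER 1)
    SERVER loopback OPTIONS (table_name 'batch_table_p1');

-- every key is distinct, yet the batched insert now fails with the
-- duplicate key error shown above
INSERT INTO batch_table SELECT i, 'test' || i, 'test' || i
    FROM generate_series(1, 1000) i;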
--
Best regards,
Alexander Pyhalov,
Postgres Professional
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 8cb2148f1f6..4c280f1e777 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -3083,7 +3083,46 @@ UPDATE batch_cp_upd_test t SET a = 1 FROM (VALUES (1), (2)) s(a) WHERE t.a = s.a
SELECT tableoid::regclass, * FROM batch_cp_upd_test;
-- Clean up
-DROP TABLE batch_table, batch_cp_upd_test CASCADE;
+DROP TABLE batch_table, batch_cp_upd_test, batch_table_p0, batch_table_p1 CASCADE;
+
+-- Use partitioning
+ALTER SERVER loopback OPTIONS (ADD batch_size '100');
+
+CREATE TABLE batch_table ( x int, field1 text, field2 text) PARTITION BY HASH (x);
+
+CREATE TABLE batch_table_p0 (LIKE batch_table);
+ALTER TABLE batch_table_p0 ADD CONSTRAINT p0_pkey PRIMARY KEY (x);
+CREATE FOREIGN TABLE batch_table_p0f
+ PARTITION OF batch_table
+ FOR VALUES WITH (MODULUS 4, REMAINDER 0)
+ SERVER loopback
+ OPTIONS (table_name 'batch_table_p0');
+
+CREATE TABLE batch_table_p1 (LIKE batch_table);
+ALTER TABLE batch_table_p1 ADD CONSTRAINT p1_pkey PRIMARY KEY (x);
+CREATE FOREIGN TABLE batch_table_p1f
+ PARTITION OF batch_table
+ FOR VALUES WITH (MODULUS 4, REMAINDER 1)
+ SERVER loopback
+ OPTIONS (table_name 'batch_table_p1');
+
+CREATE TABLE batch_table_p2
+ PARTITION OF batch_table
+ FOR VALUES WITH (MODULUS 4, REMAINDER 2);
+ALTER TABLE batch_table_p2 ADD CONSTRAINT p2_pkey PRIMARY KEY (x);
+
+CREATE TABLE batch_table_p3 (LIKE batch_table);
+ALTER TABLE batch_table_p3 ADD CONSTRAINT p3_pkey PRIMARY KEY (x);
+CREATE FOREIGN TABLE batch_table_p3f
+ PARTITION OF batch_table
+ FOR VALUES WITH (MODULUS 4, REMAINDER 3)
+ SERVER loopback
+ OPTIONS (table_name 'batch_table_p3');
+
+INSERT INTO batch_table SELECT i, 'test'||i, 'test'|| i FROM generate_series(1, 1000) i;
+SELECT COUNT(*) FROM batch_table;
+
+ALTER SERVER loopback OPTIONS (DROP batch_size);
-- ===================================================================
-- test asynchronous execution
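
For completeness, with the pre-b676ac443b6a behaviour the count check at the
end of the new test would be expected to produce (expected-output sketch, not
part of the attached patch):

SELECT COUNT(*) FROM batch_table;
 count
-------
  1000
(1 row)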