On 06/02/2016 08:37 PM, Patrick Baker wrote:
Hi guys,
The function works... All the data is updated as expected. However,
when I call the function for the second time, it touches the rows
that had already been touched by the previous call....
It triplicates the records (LIMIT 3).

Question:
How can I make the function pick up the next 3 rows instead of the same
rows it has already processed?
Function updated:
CREATE or REPLACE FUNCTION function_data_1() RETURNS SETOF bigint AS $$
declare
row record;
BEGIN
-- copying the data to the backup table (not the blobs)
-- Limiting in 5000 rows each call
FOR row IN EXECUTE '
    SELECT t1.file_id
    FROM table1 t1
    JOIN table3 t3 ON t3.file_id = t1.file_id
    ORDER BY 1 LIMIT 3 '
LOOP
    -- Creating the backup table with the essential data
    INSERT INTO table2 (note_id, size, file_id, full_path)
    (SELECT t1.note_id, t1.size, t1.file_id, t1.full_path
     FROM table1 t1
     JOIN table3 t3 ON t3.file_id = t1.file_id
     WHERE t1.file_id = row.file_id);
    -- copying the blobs to the table above table2
    UPDATE junk.table2 t2
    SET data = (SELECT o1.data
                FROM original_table1_b o1
                JOIN table3 t3 ON t3.file_id = o1.file_id
                WHERE t3.migrated = 0
                  AND t2.file_id = o1.file_id
                  AND o1.file_id = row.file_id)
    WHERE t2.file_id = row.file_id;
    -- updating the migrated column from 0 to 1
    UPDATE table3 t2 SET migrated = 1 WHERE t2.file_id = row.file_id AND migrated = 0;
    -- set the blobs as null
    UPDATE original_table1_b o1 SET data = NULL WHERE o1.file_id = row.file_id;
END LOOP;
END
$$ language 'plpgsql';
"
CREATE or REPLACE FUNCTION function_data_1()
RETURNS SETOF bigint AS $$
declare
row record;
BEGIN
-- copying the data to the backup table (not the blobs)
-- Limiting in 5000 rows each call
FOR row IN EXECUTE '
SELECT
t1.file_id
FROM
table1 t1
JOIN
table3 t3 ON t3.file_id = t1.file_id
ORDER BY 1 LIMIT 3 '
LOOP
-- Creating the backup table with the essential data
INSERT INTO table2 (note_id, size, file_id, full_path)
(
SELECT
t1.note_id,
t1.size,
t1.file_id,
t1.full_path
FROM
table1 t1
JOIN
table3 t3 ON t3.file_id = t1.file_id
WHERE
t1.file_id = row.file_id
);
......."
Aren't you repeating yourself there? Why not do something like:
CREATE or REPLACE FUNCTION function_data_1()
RETURNS SETOF bigint AS $$
declare
row record;
BEGIN
-- copying the data to the backup table (not the blobs)
-- Limiting in 5000 rows each call
FOR row IN EXECUTE '
SELECT
        t1.note_id,
t1.size,
t1.file_id,
t1.full_path
FROM
table1 t1
JOIN
table3 t3 ON t3.file_id = t1.file_id
ORDER BY 1 LIMIT 3 '
LOOP
-- Creating the backup table with the essential data
    INSERT INTO table2 (note_id, size, file_id, full_path)
    VALUES (row.note_id, row.size, row.file_id, row.full_path);
.....
I'm still not seeing what the JOIN to table3 t3 gets you, though.
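Unless the idea is to use table3.migrated to mark progress. If so, what seems to be missing is that same condition in the loop query: the function sets migrated = 1 for every row it handles, but the outer SELECT never checks the flag, so each call starts over with the same first three file_ids. A rough sketch (assuming migrated = 0 means "not yet processed", as in your blob UPDATE):

FOR row IN EXECUTE '
    SELECT t1.file_id
    FROM table1 t1
    JOIN table3 t3 ON t3.file_id = t1.file_id
    WHERE t3.migrated = 0   -- only rows no previous call has flagged
    ORDER BY 1 LIMIT 3 '
LOOP
    -- loop body unchanged
    ...
END LOOP;

That way the JOIN actually earns its keep, and the next call picks up the next three unmigrated rows.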
Anyway, the function works.
--
Adrian Klaver
adrian.kla...@aklaver.com