[SQL] Problem with committing the update

2012-09-14 Thread BeeBee
Hi all,

I have a problem updating a record from a stored procedure (LANGUAGE
plpgsql). I have attached the query:
http://postgresql.1045698.n5.nabble.com/file/n5723831/query.txt

SELECT id INTO STRICT temp_id
  FROM T_table_B
 WHERE T_table_B.column_id = $1
   AND T_table_B.status = 'False';
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        UPDATE T_table_A SET a_flag = True WHERE T_table_A.id = $1;
    WHEN TOO_MANY_ROWS THEN
        UPDATE T_table_A SET a_flag = False WHERE T_table_A.id = $1;
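
The surrounding function body is roughly like this (simplified; the function
name and signature here are only illustrative, the exact code is in the
attached query.txt):

CREATE OR REPLACE FUNCTION update_a_flag(integer) RETURNS void AS $$
DECLARE
    temp_id integer;
BEGIN
    -- STRICT raises NO_DATA_FOUND / TOO_MANY_ROWS instead of returning NULL
    SELECT id INTO STRICT temp_id
      FROM T_table_B
     WHERE T_table_B.column_id = $1
       AND T_table_B.status = 'False';
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        UPDATE T_table_A SET a_flag = True WHERE T_table_A.id = $1;
    WHEN TOO_MANY_ROWS THEN
        UPDATE T_table_A SET a_flag = False WHERE T_table_A.id = $1;
END;
$$ LANGUAGE plpgsql;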

With the above query, the first time the update statement executes it doesn't
actually update the database; only on the second execution does the update
take effect. It seems like I need to commit the update statement? Is there any
way to solve this?

Thanks in advance 





-- 
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] ERROR: missing FROM-clause entry for table "new"

2012-09-14 Thread James Sharrett
> I'm trying to define a trigger function that looks for changes in table A
> (the table the trigger is on) and writes a delta record into table
> B.  So if a record has a value of 100 in table A, and it is updated to 50, the
> function should write -50 in table B. I can get the trigger to work with
> static SQL statements, but for the actual code I need to use dynamic SQL
> because I need to alter the insert statement to B depending on which column in
> table A is altered.  I can get the correct SQL generated, but when I execute
> the string inside the trigger function I get an error because it doesn't seem
> to be able to see the NEW table when it's run with EXECUTE.
> 
> So, this works in the trigger function:
> 
> Insert into A (col1, col2, … colN)
> Select new.col1, new.col2, … new.colN
> 
> This doesn't:
> 
> sql := 'Insert into A (col1, col2, … colN) ';
> sql := sql || 'Select new.col1, new.col2, … new.colN';
> Execute sql;
> 
> ERROR:  missing FROM-clause entry for table "new"
> 
> 
> 
> There is nothing wrong with the resulting SQL string, because if I output
> the string and put it in as static SQL in my trigger it works.
> 
> 
> 
> How do I build the string within the trigger and execute it with a reference
> to NEW?
> 
> 
> Thanks in advance for the help,
> James
> 

Please read all of:

http://www.postgresql.org/docs/9.2/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN

But especially 39.5.4

You want to make use of format() and/or USING to pass the values into a
parameterized dynamic statement.

Note I linked to 9.2, but any recent version should have the same behavior,
though possibly under different section numbers.

In short, the whole "NEW.name" expression is a variable, and you need to build
the statement the same way you would with any user-defined variable.
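
Something along these lines (untested; target_table, col_a, and col_b below
are just placeholders for whatever identifiers you compute):

-- identifiers go through format() with %I; values are passed with USING
EXECUTE format('INSERT INTO %I (%I, %I) VALUES ($1, $2)',
               target_table, col_a, col_b)
    USING NEW.col1, NEW.col2;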

David J.


-

Thanks for the reference, David.  I'm now able to get the SQL statement to
run as dynamic SQL with the following syntax:

sql := 'Insert into A (col1, col2, … colN) ';
sql := sql || 'values ($1, $2, … $N)';
Execute sql USING new.col1, new.col2, … new.colN;

But that still leaves me with the problem that new.col1 through new.colN aren't
known till runtime.  My list of columns could vary from 5 to 50 depending on the
specific update scenario.  Inside the sql string I can dynamically build $1 -
$N using a counter in my loop that gets the appropriate column list, but how
do I dynamically build the USING list?  I tried putting in a text variable
that contained a delimited list of columns, like this:

list := 'new.col1, new.col2, … new.colN';
sql := 'Insert into A (col1, col2, … colN) ';
sql := sql || 'values ($1, $2, … $N)';
Execute sql USING list;

But that gives the error:
  ERROR:  there is no parameter $2
LINE 1: ...endcategory_id,time_id,metric,amount)  values ($1,$2,$3,$4,$...
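
One pattern that is often used for this kind of variable-length column list (a
sketch only; table_b and the col_list/val_list variables below are
illustrative, built however your loop builds them): pass the whole NEW row as a
single parameter and reference its fields inside the dynamic statement, so the
USING list never changes length.

-- col_list might end up as  'col1, col2, col3'
-- val_list might end up as  '($1).col1, ($1).col2, ($1).col3'
sql := format('INSERT INTO table_b (%s) VALUES (%s)', col_list, val_list);
EXECUTE sql USING NEW;  -- NEW is passed once; each field is read as ($1).colname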






Re: [SQL] pg_restore problem

2012-09-14 Thread Adrian Klaver

On 09/14/2012 01:58 AM, Kjell Øygard wrote:

1 - Ok, I was not aware of that
2 - I used version 9.1.4 of pg_dump
3 - The command was in a script, see below

pdir=/usr/local/postgresql-9.1.4/
bdir=/backup/`hostname -s`/dump/
export PATH=${pdir}/bin:$PATH

# make sure tmp files are not readable by others
umask 0077

for db in `psql -l -t -h localhost | awk '{print $1}' | grep -v template | grep -v postgres`
do
   pg_dump -h localhost -F c -Z -b $db > ${bdir}/${db}.tmp && mv ${bdir}/${db}.tmp ${bdir}/${db}.dump
done


I do not see anything obviously wrong.
Two suggestions:
1) Use the 9.2 version of pg_dump (example command below). Newer versions know
about changes in data handling and are also backward compatible (to 7.0).

2) As of 8.3 (I believe) the -b switch is redundant for whole-database dumps.
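
For 1), the dump line in the script would look something like this (the 9.2
install path below is only an example; adjust it to wherever the 9.2 client
binaries live):

# illustrative only -- assumes a 9.2 client install in /usr/local/postgresql-9.2
/usr/local/postgresql-9.2/bin/pg_dump -h localhost -F c $db > ${bdir}/${db}.tmp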

When you do the above dump, are there large objects in the 9.2 database
in spite of the errors?





rgds Kjell Inge Ø





--
Adrian Klaver
[email protected]

