Re: [SQL] Query runs very slowly in Postgres, but very fast in other DBMS
>> SELECT dokumnr FROM rid WHERE dokumnr NOT IN
>> (SELECT dokumnr FROM dok);
> ...
>> Is it possible to speed up this query in Postgres ? How to force Postgres
>> to use indexes for this query ?
>
> Use IN and NOT IN only for small sets. Use JOIN (instead of IN) and LEFT
> JOIN (instead of NOT IN) for larger sets. e.g.:
>
> SELECT rid.dokumnr
> FROM rid
> LEFT JOIN dok ON (dok.dokumnr = rid.dokumnr)
> WHERE dok.dokumnr IS NULL;

Thank you. How can I use this technique to speed up the UPDATE statement

UPDATE rid SET dokumnr=NULL
WHERE dokumnr NOT IN (SELECT dokumnr FROM dok);

and the DELETE statement

DELETE FROM rid
WHERE dokumnr NOT IN (SELECT dokumnr FROM dok);

Andrus
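For both statements, the same anti-join idea is usually written with NOT EXISTS, since an UPDATE or DELETE target cannot take a LEFT JOIN directly. A sketch against the same tables:

-- Nullify dokumnr in rid rows that have no matching dok row
UPDATE rid
SET dokumnr = NULL
WHERE NOT EXISTS (SELECT 1 FROM dok WHERE dok.dokumnr = rid.dokumnr);

-- Delete rid rows that have no matching dok row
DELETE FROM rid
WHERE NOT EXISTS (SELECT 1 FROM dok WHERE dok.dokumnr = rid.dokumnr);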
[SQL] select a list of schema names
I want to implement a multi-company database where each schema represents a different company. I created a number of schemas in a database. How can I select a list of the schema names that the current user is authorized to access ? I want to get the list of companies so that the user can pick one.
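One possible query for that list (a sketch; it checks USAGE with has_schema_privilege for the current user and filters out the system schemas):

SELECT nspname
FROM pg_namespace
WHERE has_schema_privilege(nspname, 'USAGE')
  AND nspname !~ '^pg_'
  AND nspname <> 'information_schema'
ORDER BY nspname;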
[SQL] How to make update statement to work
I want to nullify fields whose values do not exist in the reference table.
I tried
UPDATE demo.toode
SET "liik"=NULL,"grupp"=NULL
WHERE ("grupp","liik") NOT IN
(SELECT ("grupp", "liik") FROM "artliik")
but this causes an error:
ERROR: operator does not exist: character = record
HINT: No operator matches the given name and argument type(s). You may need
to add explicit type casts.
How can I write this UPDATE statement properly?
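For reference, the row comparison lines up if the extra parentheses are dropped from the subselect, so that it returns two columns instead of a single record. A sketch (if "artliik" can contain NULLs in those columns, a NOT EXISTS form is safer):

UPDATE demo.toode
SET "liik" = NULL, "grupp" = NULL
WHERE ("grupp", "liik") NOT IN
      (SELECT "grupp", "liik" FROM "artliik");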
[SQL] Merging item codes using referential integrity
I have an item table and a lot of child tables where the items are used.
I want to merge two item codes into single item in all tables.
It is not nice to write a separate UPDATE statement for each table.
So I want to utilize REFERENCES clause for merging.
I tried the following code but got a duplicate key error in the UPDATE
statement.
Any idea how to implement this?
CREATE TABLE parent ( code CHAR(10) PRIMARY KEY );
CREATE TABLE orders ( anyfield CHAR(10) REFERENCES parent ON UPDATE CASCADE );
CREATE TABLE invoices ( anyotherfield CHAR(10) REFERENCES parent ON UPDATE CASCADE );
-- ... many more child tables with different table and field names,
-- but always with the same REFERENCES clause.
INSERT INTO parent VALUES ('1');
INSERT INTO parent VALUES ('2');
INSERT INTO orders VALUES ('1');
INSERT INTO invoices VALUES ('1');
INSERT INTO orders VALUES ('2');
INSERT INTO invoices VALUES ('2');
BEGIN;
-- Direct Postgres to update all child tables. This causes an error.
UPDATE parent SET code='1' WHERE code='2';
-- Remove duplicate row
CREATE TABLE parent AS
SELECT * FROM parent
GROUP BY CODE ;
COMMIT;
Andrus.
Re: [SQL] Merging item codes using referential integrity
> Andrus Moor wrote:
>> I have item table and a lot of child tables where the items are used.
>> I want to merge two item codes into single item in all tables.
>> It is not nice to write a lot of separate UPDATE statements for each
>> table.
>> So I want to utilize REFERENCES clause for merging.
>>
>> I tried the following code but got duplicate key error in UPDATE
>> statement.
>>
>> Any idea how to implement this?
>>
>> CREATE TABLE parent ( code CHAR(10) PRIMARY KEY );
>
>> BEGIN;
>> -- Direct Postgres to update all child tables. This causes error.
>> UPDATE parent SET code='1' WHERE code='2';
>> -- Remove duplicate row
>
> That's the problem - you can't have a duplicate row at *any* time with a
> primary key. The UNIQUE constraint is instant and can't be deferred (at
> least, not yet).
>
> However, in this case I would simply write a function:
>
> CREATE FUNCTION merge_all(char(10), char(10)) RETURNS void AS '
> UPDATE table_1 SET col_1=$2 WHERE col_1=$1;
> UPDATE table_2 SET col_2=$2 WHERE col_2=$1;
> ...etc...
> ' LANGUAGE SQL;
>
> Then: SELECT merge_all('OLD_VAL','NEW_VAL') for each value (you could even
> join to your "parent" table if all the values are in there). All the
> updates in the function take place in the same transaction, so if there
> are any problems then all changes will be rolled back.
Richard,
thank you.
Is it possible to determine the table_1, col_1 etc. values automatically?
I have some hundreds of referential integrity constraints, which are changing.
So I would have to write and maintain hundreds of additional lines of code which
duplicate the existing referential integrity information.
I'm researching the following method:
Input:
The master table name $master and two of its primary key values, $value1
and $value2
Output:
1. All $value2 field values in the child tables are updated to $value1
2. The $value2 record is deleted from the $master table
Algorithm:
SELECT
childtablename,
childfieldname
FROM pg_referentialinfo
WHERE pg_referentialinfo.mastertable=$master
INTO CURSOR childs;
BEGIN TRANSACTION;
SCAN FOR ALL childs RECORDS;
UPDATE (childs.childtablename) SET (childs.childfieldname)=$value1
WHERE EVAL(childs.childfieldname)=$value2;
ENDSCAN;
SELECT
primarykeyfieldname
FROM pg_tables
WHERE pg_tables.tablename=$master
INTO CURSOR mfield;
DELETE FROM $master WHERE EVAL(mfield.primarykeyfieldname)=$value2;
COMMIT;
How to implement SCAN FOR ALL childs RECORDS in PostgreSQL ?
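A rough PL/pgSQL sketch of this idea (an illustration only: it assumes single-column foreign keys, a single schema, a server version with dollar quoting and the information_schema views, and that the master table's primary key column is named code as in the example above):

CREATE OR REPLACE FUNCTION merge_codes(master text, old_val text, new_val text)
RETURNS void AS $$
DECLARE
    child RECORD;
BEGIN
    -- Find every foreign key column that references the master table
    FOR child IN
        SELECT tc.table_name, kcu.column_name
        FROM information_schema.table_constraints tc
        JOIN information_schema.key_column_usage kcu
          ON kcu.constraint_name = tc.constraint_name
        JOIN information_schema.constraint_column_usage ccu
          ON ccu.constraint_name = tc.constraint_name
        WHERE tc.constraint_type = 'FOREIGN KEY'
          AND ccu.table_name = master
    LOOP
        -- Repoint child rows from the old value to the new one
        EXECUTE 'UPDATE ' || quote_ident(child.table_name)
             || ' SET ' || quote_ident(child.column_name) || ' = ' || quote_literal(new_val)
             || ' WHERE ' || quote_ident(child.column_name) || ' = ' || quote_literal(old_val);
    END LOOP;
    -- Remove the now-unreferenced master row (primary key column assumed to be code)
    EXECUTE 'DELETE FROM ' || quote_ident(master)
         || ' WHERE code = ' || quote_literal(old_val);
END;
$$ LANGUAGE plpgsql;

-- Merge code '2' into code '1' as in the example above:
-- SELECT merge_codes('parent', '2', '1');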
Andrus.
[SQL] Query runs very slowly in Postgres, but very fast in other DBMS
Tables:

CREATE TABLE dok (
  dokumnr NUMERIC(12),
  CONSTRAINT dok_pkey PRIMARY KEY (dokumnr)
);

CREATE TABLE rid ( dokumnr NUMERIC(12) );

CREATE INDEX rid_dokumnr_idx ON rid (dokumnr);

Query:

SELECT dokumnr FROM rid WHERE dokumnr NOT IN
(SELECT dokumnr FROM dok);

runs VERY slowly in Postgres. It uses the following query plan:

Seq Scan on rid  (cost=0.00..28698461.07 rows=32201 width=14)
  Filter: (NOT (subplan))
  SubPlan
    ->  Seq Scan on dok  (cost=0.00..864.29 rows=10729 width=14)

In Microsoft Visual FoxPro this query runs fast. FoxPro uses indexes to speed up the query by comparing bitmaps.

Is it possible to speed up this query in Postgres ? How to force Postgres to use indexes for this query ?

Andrus
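The rewrite suggested in the reply quoted earlier expresses this as an explicit anti-join; a sketch against the same tables:

-- Keep rid rows that have no matching dok row
SELECT rid.dokumnr
FROM rid
LEFT JOIN dok ON dok.dokumnr = rid.dokumnr
WHERE dok.dokumnr IS NULL;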
