Fw: [SQL] Viewing a function
-Original Message- From: Stuart Foster <[EMAIL PROTECTED]> To: PG-SQL <[EMAIL PROTECTED]> Date: Wednesday, 30 August 2000 2:25 Subject: [SQL] Viewing a function Hello Stuart, Good question. I have been fiddling with a function editor using Zeos controls and I have had lots of little problems, so I have given up for a while. I successfully parsed function information. (The following is from memory and I don't have postgres with me right now.) The body of the function is in pg_proc.prosrc (I found out most of this by looking in the C code for psql; I work in Delphi). You can work out parameter counts from another field and work out the field types (I confess I cheated here: you can get field types out of the system tables but I never actually worked out how). I could reconstruct a nice-looking CREATE FUNCTION string using data in the database. When I tried to execute things from Windows, things went wrong, but it worked in psql. I ended up just using it as a utility to create separate CREATE FUNCTION files which I then loaded server side. Anyway... then the problems. I don't think I ever had a successful update from the client side. Once I committed a change I often could not successfully drop and create a function, and vacuum would fail. To recover I had to drop all functions and re-create them. Using PERFORM seemed to cause particular problems. (I think that anything that is called by PERFORM must be compiled in before anything that calls it; order of creation seems to be important. A full dump and create would fix these sorts of problems.) The cr/lf thing also gave problems, although I wrote what I thought was a client-side fix for this. You can have my source in Delphi (such as it is) if you are interested. I think there must be flags or dependencies in function management which I don't understand. PS I have not had success editing functions in pgaccess either (similar problems). A simple way to view functions is "pg_dump -c -s database" to dump just the schema and drop statements; cut and paste the function definitions. The difficulty of function editing has held back my moving tasks server-side which belong there. >How can I view a function after it's created ? >I've created a SQL function that I need to review and possibly change. What >is the best way to go about this. > > >TIA > Another stuart
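For reference, the function body lives in pg_proc.prosrc, so a minimal catalog query for viewing a function looks like the sketch below (my_func is a placeholder name; on modern PostgreSQL, psql's \df+ or pg_get_functiondef() do the reconstruction for you):

SELECT proname, prosrc
FROM pg_proc
WHERE proname = 'my_func';   -- my_func is a placeholder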
Re: [SQL] referential column constraints and inheritance
Stephan Szabo and Josh Berkus, Thanks for your input. The question at hand seemed to provide an appealing argument for providing inheritance support for referential constraints, but if its support is as rare as it seems to be then that could produce problems if I want the ability to manage the data with tools on other platforms. I guess I will have to give potential alternatives more consideration. Thanks again, Stuart
[SQL] referential column constraints and inheritance
Folks, I am not sure where to post this. This may be more of a wishlist item than a usage question. I am new to postgresql 7.1. Table inheritance in postgres seems wonderful. Is there the capability now, or are there any plans, to extend support for inherited tables to column constraints? One example is a travel db where several tables are defined as:

create table areas ( area_id char(10) primary key, name varchar(40), ... description varchar(120) );
create table countries ( ... ) inherits (areas);
create table states ( ... country_id char(10) references countries ..., ... ) inherits (areas);
create table cities ( ... country_id char(10) references countries ..., ... ) inherits (areas);

Then a destination table is set up with a field referencing areas. Currently postgres appears not to support reference constraint checking through inherited tables. Nor does it appear to support subselects in check constraints. It would be nice to allow reference checking through the inherited tables so destination.area_id would be valid for any country, state, or city area_id entered.

create table destination ( ... area_id char(10) references areas ..., ... );

Thanks, Stuart
[SQL] Inheritance and Referential Integrity
I'm using 7.3.2, which extends the inheritance of some checks and triggers to inherited tables, but still falls short of foreign key references looking into child tables. Are there plans to address these deficiencies, and when might we see this take place? Thanks Stuart
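In the meantime, the usual workaround is to approximate the check with a trigger. A rough sketch, using the areas/destination names from the earlier post (this is not a true foreign key: it validates inserts and updates on destination, but deletes from the hierarchy go unchecked):

CREATE OR REPLACE FUNCTION check_area_exists() RETURNS trigger AS '
BEGIN
    -- a SELECT on the parent table also searches all inherited tables
    PERFORM 1 FROM areas WHERE area_id = NEW.area_id;
    IF NOT FOUND THEN
        RAISE EXCEPTION ''area_id % not found in areas'', NEW.area_id;
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER destination_area_fk BEFORE INSERT OR UPDATE ON destination
    FOR EACH ROW EXECUTE PROCEDURE check_area_exists();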
Re: [SQL] Count dates distinct within an interval
Dmitry Tkach wrote:
Hi, everybody!
I was trying to formulate a sql query, that I now think is impossible :-(
I just thought, I'd run it by you before I give up - perhaps, you guys
would have some ideas...
Suppose, I have a table like this
create table test
(
stuff int,
stamp timestamp
);
Now, I want to count the occurrences of each value of stuff in the table,
but so that entries within 24 hours from each other count as one...
The closest I could think of is:
select stuff, count (distinct date_trunc ('day', stamp)) from test group
by stuff;
This doesn't do exactly what I need though - for example, if one entry
is one minute before midnight, and the other one is two minutes later,
they'd get counted as 2, and what I am looking for is the way to get
them collapsed into one as long as they are less than 24 hours apart...
Now, I am pretty sure, it is simply impossible to do what I want with
count (distinct...) because my 'equality' is not transitive - for
example, three entries, like
A = 2001-01-01 20:20:00
B = 2001-01-02 20:19:00
C = 2001-01-02 20:21:00
Should be counted as *two* (A === B, and B === C, but *not* A === C)...
Also, I could certainly write a simple function, that would get all the
entries in order, and scan through them, counting according to my rules...
But I was hoping to find some way to do this in plain sql though...
Any ideas?
Thanks!
Dima
You would probably be able to speed the following up using immutable
functions to aid the query, or just write a function to do it. However, I
think this does what you asked in a query. I've put a script at the end.
hth,
- Stuart
-- s is the stuff to group by
-- dt is the datetime thing
create table Q (
s int4,
dt timestamptz);
truncate Q;
INSERT INTO Q (s,dt) VALUES (1,'2003/01/01'::timestamptz);
INSERT INTO Q (s,dt) VALUES (1,'2003/01/02 04:00'::timestamptz);
INSERT INTO Q (s,dt) VALUES (1,'2003/01/01 08:00'::timestamptz);
INSERT INTO Q (s,dt) VALUES (1,'2003/01/02'::timestamptz);
INSERT INTO Q (s,dt) VALUES (1,'2003/01/05 23:00'::timestamptz);
INSERT INTO Q (s,dt) VALUES (1,'2003/01/06 22:00'::timestamptz);
INSERT INTO Q (s,dt) VALUES (1,'2003/01/07 05:00'::timestamptz);
INSERT INTO Q (s,dt) VALUES (3,'2003/01/01'::timestamptz);
INSERT INTO Q (s,dt) VALUES (4,'2003/01/01'::timestamptz);
INSERT INTO Q (s,dt) VALUES (4,'2003/01/02 05:00'::timestamptz);
INSERT INTO Q (s,dt) VALUES (4,'2003/01/03 04:00'::timestamptz);
SELECT COUNT(*),s FROM Q WHERE dt IN (SELECT min(R.dt) FROM Q AS R WHERE
Q.s=R.s)
OR dt IN (SELECT min(P.dt) FROM Q AS P WHERE P.s=Q.s AND
P.dt>=date_trunc('day',Q.dt)-
CASE WHEN
(SELECT max(dt)::time FROM Q AS T WHERE Q.s=T.s AND T.dt <
(SELECT * FROM Q AS U WHERE T.dt-'1 day'::interval < U.dt
AND U.s=Q.s))
>Q.dt::time THEN '1 day'::interval
ELSE
'0 day'::interval
END
+(SELECT max(dt)::time FROM Q AS T WHERE Q.s=T.s AND T.dt <
(SELECT * FROM Q AS U WHERE T.dt-'1 day'::interval < U.dt
AND U.s=Q.s)))
GROUP BY s;
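The procedural route mentioned above is also short in PL/pgSQL, and it implements the greedy 24-hour-window semantics of the A/B/C example directly: a new cluster starts whenever an entry falls 24 hours or more after the first entry of the current cluster. A sketch against the original test(stuff, stamp) table; the function name is invented:

CREATE OR REPLACE FUNCTION count_clustered(int) RETURNS int AS '
DECLARE
    p_stuff ALIAS FOR $1;
    r RECORD;
    anchor timestamp;
    n int := 0;
BEGIN
    FOR r IN SELECT stamp FROM test WHERE stuff = p_stuff ORDER BY stamp LOOP
        IF anchor IS NULL OR r.stamp >= anchor + ''24 hours''::interval THEN
            n := n + 1;          -- start a new cluster...
            anchor := r.stamp;   -- ...anchored at this entry
        END IF;
    END LOOP;
    RETURN n;
END;
' LANGUAGE 'plpgsql';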
[SQL] Auto-update a field when record is changed
Folks, I was wondering if there is a feasible way to automatically update a field in a table, say 'revision_date', in a record whenever any other field in the record is changed. My attempts to use a trigger caused repeating loops and bombed with errors. I would like to be able to update such a field automatically any time a record is updated. Any help would be appreciated. Thanks, Stuart
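The standard recipe is a BEFORE UPDATE trigger that assigns to NEW rather than issuing another UPDATE (the second UPDATE is what causes the loop). A sketch, assuming a table named mytable with a revision_date column:

CREATE OR REPLACE FUNCTION set_revision_date() RETURNS trigger AS '
BEGIN
    -- assigning to NEW changes the row being written; no second UPDATE,
    -- so no recursive trigger firing
    NEW.revision_date := CURRENT_TIMESTAMP;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER mytable_revision BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE set_revision_date();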
Re: [SQL] above the date functionssssssss
Penchal reddy,
You should be able to use extract or date_trunc functions like in the
documentation:
example:
SELECT EXTRACT(MONTH FROM DATE '2005-02-12') || '-' || EXTRACT(YEAR FROM DATE
'2005-02-12');
or
SELECT date_trunc('month', DATE '2005-02-12') || '-' || date_trunc('year', DATE
'2005-02-12');
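For the exact 02-2005 layout asked for, to_char is more direct:

SELECT to_char(DATE '2005-02-12', 'MM-YYYY');   -- gives 02-2005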
Stuart
-Original Message-
>From: "Penchalaiah P." <[EMAIL PROTECTED]>
>Sent: Jul 11, 2006 7:34 AM
>To: [email protected]
>Subject: [SQL] above the date function
>
>
>Hi good evening to every one.
>
>
>i want to get month and year in the current date..
>see this is date 2005-02-12... here date is 12 , month is 02 and year is
>2005..
>but i want format like 02-2005...
>
>can u tell me how to do this
>
>
>
>
>Thanks & Regards
>
>Penchal reddy | Software Engineer
Re: [SQL] [Trigger] Help needed with NEW.* and TG_TABLE_NAME
Torsten, Usually, the “insert ... (select ...)” has the select portion enclosed in parentheses. Don't know if the solution is that simple but I did not see it in your examples. It may be worth a try. Stuart - Original message - > Hello, > > i have a problem with a trigger written in pl/pgsql. > > It looks like this: > > CREATE OR REPLACE FUNCTION versionize() > RETURNS TRIGGER > AS $$ > BEGIN > > NEW.revision := addContentRevision (OLD.content_id, OLD.revision); > > /* not working line, just a stub: > EXECUTE 'INSERT INTO ' || TG_TABLE_NAME || ' SELECT $1 ' USING NEW; > */ > > RETURN NULL; > > END; > $$ LANGUAGE 'plpgsql' VOLATILE; > > The function should be used at different tables and is invoked after > UPDATEs. Everything what happens is the function call of > addContentRevision. After this call all data (with the updated revision > column) should be stored in the table as a new row. > > My problem: the aim-table is not static. It's just always the table > which invoked the trigger. The trigger itself could be called at many > tables. I've tried some other ways of expressing the INSERT but nothing > worked: > - 'INSERT INTO ' || TG_TABLE_NAME || ' SELECT NEW.*' > - INSERT INTO TG_TABLE_NAME SELECT NEW.* > - EXECUTE 'INSERT INTO ' || TG_TABLE_NAME USING NEW; > > Do you have any hints? > > Greetings from Germany, > Torsten
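For the archives: the stub line fails because $1 expands to a single composite value rather than a column list, and parenthesizing the parameter fixes that. A sketch of the working replacement for the stub (assumes 8.4+, where EXECUTE ... USING is available; quote_ident guards unusual table names):

EXECUTE 'INSERT INTO ' || quote_ident(TG_TABLE_NAME)
        || ' SELECT ($1).*' USING NEW;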
[SQL] slow view
I am having a problem with the performance of a
view that will be a critical part of a database system
I am working on, and would appreciate some advice.
Apologies for the length of this posting!
I have a parent table P and two child tables A and B:
CREATE TABLE p (
id INT NOT NULL PRIMARY KEY);
CREATE TABLE a (
id INT NOT NULL PRIMARY KEY,
p INT REFERENCES p(id));
CREATE INDEX ON a(p);
CREATE TABLE b (
id INT NOT NULL PRIMARY KEY,
p INT REFERENCES p(id));
CREATE INDEX ON b(p);
Each "p" row has between 1 and 5 (or so)
child rows in the "a" table, and between 0 and 4
rows in the "b" table.
Now for most p's the a's and the b's are independent,
and all combinations of a's and b's for that p are ok.
But for a small percentage of p's (<5%) there are some
combinations of a's and b's that are distinguished (I will call
them "invalid").
So I created a table to record these "invalid" combinations:
CREATE TABLE x (
a INT NOT NULL REFERENCES a(id),
b INT NOT NULL REFERENCES b(id),
PRIMARY KEY (a,b));
CREATE INDEX ON x(a);
CREATE INDEX ON x(b);
Here is some sample data with a single p:
# Create one parent item...
INSERT INTO p VALUES(1)
# Create 4 a-items for that parent...
INSERT INTO a VALUES(1,1)
INSERT INTO a VALUES(2,1)
INSERT INTO a VALUES(3,1)
INSERT INTO a VALUES(4,1)
# Create 3 b-items for that parent...
INSERT INTO b VALUES(11,1)
INSERT INTO b VALUES(12,1)
INSERT INTO b VALUES(13,1)
So for parent p=1, there are 12 combinations
of a and b items (each of the 4 a items can be
paired with any of the 3 b items).
Now, make some combinations of a items
and b items "invalid"...
# For a=2, make b=13 invalid, i.e only b=11 and b=12 are valid.
INSERT INTO x VALUES(2,13)
# For a=3, only b=11 is valid.
INSERT INTO x VALUES(3,12)
INSERT INTO x VALUES(3,13)
# For a=4, no b's are valid.
INSERT INTO x VALUES(4,11)
INSERT INTO x VALUES(4,12)
INSERT INTO x VALUES(4,13)
Now I need a view that will display, for each p, its
a's and for each a, only the valid b's, that is, the
combinations of a and b that are *not* in table x.
OK, no problem...
(#1)
SELECT p.id AS pid, a.id AS aid, b.id AS bid
FROM p
JOIN a ON a.p=p.id
LEFT JOIN b ON b.p=a.p
LEFT JOIN x ON x.a=a.id AND x.b=b.id
WHERE x.a IS NULL
AND p.id=1;
Here is the result on the data given above:
pid | aid | bid
-+-+-
1 | 1 | 11
1 | 1 | 12
1 | 1 | 13
1 | 2 | 11
1 | 2 | 13
1 | 3 | 11
1 | 3 | 12
Ok, but I want all a's in the output, even when they
have no valid b's. aid=4 is not in the output.
So I did what I thought was the obvious answer, a
left join between a, and the above query...
(#2)
SELECT p.id AS pid, a.id AS aid, sub.bid AS bid
FROM p
JOIN a ON a.p=p.id
LEFT JOIN (
SELECT a.id AS aid, b.id as bid
FROM a
LEFT JOIN b ON b.p=a.p
LEFT JOIN x ON x.a=a.id AND x.b=b.id
WHERE x.a IS NULL
) AS sub ON sub.aid=a.id
WHERE p.id=1;
Results:
pid | aid | bid
-+-+-
1 | 1 | 11
1 | 1 | 12
1 | 1 | 13
1 | 2 | 11
1 | 2 | 13
1 | 3 | 11
1 | 3 | 12
1 | 4 |
Exactly what I want.
The problem is that when there are ~100K parent entries
the above query (#2) takes ~10 seconds to run but the first
query (#1) runs in a few tens of milliseconds.
Is there any way I can get postgresql to better optimize
query #2, or rewrite it so that it is more "postgresql friendly"?
~~~
If it helps, here is a python script I used to generate enough
pseudo-data to show the time difference...
#!/usr/bin/python
import psycopg2, random

def main():
    cn = psycopg2.connect (database="test",
                           user="postgres", password="")
    c = cn.cursor()
    pkp = 1; pka = 1; pkb = 1
    while pkp < 100000:   # ~100K parent entries, per the timings above
        c.execute ("INSERT INTO p VALUES(%s)", (pkp,))
        na = random.randint(1,5)
        for a in range(na):
            c.execute ("INSERT INTO a VALUES(%s,%s)", (pka+a,pkp))
        nb = random.randint(0,4)
        for b in range(nb):
            c.execute ("INSERT INTO b VALUES(%s,%s)", (pkb+b,pkp))
        if na*nb > 1 and random.randint (0,99) < 10:
            zlst = [(a,b) for a in range(pka,pka+na)
                          for b in range(pkb,pkb+nb)]
            for z in random.sample (zlst, random.randint(1,na*nb-1)):
                c.execute ("INSERT INTO x VALUES(%s,%s)", z)
        pkp += 1; pka += na; pkb += nb
    cn.commit()

if __name__ == '__main__': main ()
Re: [SQL] data dependent sequences?
"Ragnar" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > On sun, 2007-07-15 at 09:28 -0600, Stuart McGraw wrote: > > Advice requested :-) I have a table like: > > > > CREATE TABLE items ( > > id INT, > > typ INT... > > PRIMAY KEY (seq,typ)); > > > > I would like 'id' to be like a SERIAL except that I > > want independent sequences for each value of 'typ'. > > what possible advantage could there be to that? > > if you need gapless series, then sequences (and serial) > are not adequate anyways. > > just use one sequence. I am not looking for gapless sequences. The reason I want to do this is the "typ" column is actually an indicator of the source of the rest of the infomation in the row. The "rules" for assigning the id number vary depending on the source -- in some cases they start at 1 and increment by one, in other cases they start at, say, 100 and increment by 10. There are a lot existing data using these rules and I cannot change that. I can of course have the application do the assignments, but in general eould prefer to push this down into the database if posible. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] data dependent sequences?
"chester c young" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > > > > CREATE TABLE items ( > > id INT, > > typ INT... > > PRIMAY KEY (seq,typ)); > > > > >id typ > > +- > > 1 'a' > > 2 'a' > > 3 'a' > > 1 'b' > > 4 'a' > > 2 'b' > > you will need to use pre insert trigger since you cannot use column > references in default expression. > > you could use this same trigger to either: > - create sequences as needed and apply the right one > - with locking, lookup for last id of typ > - with locking, keep another table of typ and nextval Thanks, that summerizes the options nicely. I noticed that sequences are tables with a single row that defines the sequence properties. I was hoping that there was some way of using a sequence with multiple rows to maintain multiplre sequences in a sngle table, which would make having a large number of sequences a little less cluttered, but sounds like I need to implement that from scratch. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] data dependent sequences?
"Andrew Sullivan" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > On Tue, Jul 17, 2007 at 07:34:26AM -0600, Stuart wrote: > > I am not looking for gapless sequences. The reason I > > want to do this is the "typ" column is actually an indicator > > of the source of the rest of the infomation in the row. > > Why do you need the sequence to be 1. . .n for each typ, then? If > they're just there to preserve order, one sequence will work just > fine. Otherwise, I think you have a normalisation problem. I probably shouldn't have indicated that 'typ' was part of the PK, or named the other column 'id'. There is actually a separate (surrogate) PK, and there is a unique index on the on (id,typ) only to prevent accidental dupicates. So 'id' is not really structually important -- it is a value that exists soley for the UI. In the app, the user can explicity request an explicit 'id' value. My desire to use a sequence to assign them is to handle the 99% common case where the user doesn't care about assigning a specific id, and just wants the "next" resonable value, consistent with the other values for that typ row. If there are 3 rows of typ=20 and 200 rows of typ=21, I don't want the next typ=20 row to get an id of 204, when the other rows have values of 1,2,3. This is simply a user expectation, based on existing data, that I can't change. I would just prefer to implement it in the database if possible rather than than the app. Hope I have clarified a little :-) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] data dependent sequences?
"Andrew Sullivan" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > On Tue, Jul 17, 2007 at 09:40:21AM -0600, Stuart wrote: > > is not really structually important -- it is a value that > > exists soley for the UI. > > Hmm. Maybe you should use generate_series() for the UI instead? It > would always give you the order you like, you could use the universal > sequence or whatever for your ORDER BY clause, and not store data > that you actually don't care about. Except that the value does matter outside of the database and thus needs to be remembered. It is not used for ordering at all. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[SQL] Viewing a function
How can I view a function after it's created? I've created a SQL function that I need to review and possibly change. What is the best way to go about this? TIA
RE: [SQL] How can I select all of the tables with field name 'area'?
You should be able to query the system table that holds the column names. Not sure which it is, though. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of G. Anthony Reina Sent: Monday, September 11, 2000 4:38 PM To: [EMAIL PROTECTED] Subject: [SQL] How can I select all of the tables with field name 'area'? I have a database with several tables. I'd like to pull out a list of names for the tables that contain the field (class) name 'area'. Can this be done? -Tony
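A sketch of that catalog query (pg_attribute holds the column names, pg_class the table names; attnum > 0 skips system columns, and relkind = 'r' restricts the result to ordinary tables — on 7.4+ the information_schema.columns view works too):

SELECT c.relname, a.attname
FROM pg_class c, pg_attribute a
WHERE a.attrelid = c.oid
  AND a.attname = 'area'
  AND a.attnum > 0
  AND c.relkind = 'r';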
[SQL] Where to get benchmark testing
I'm not sure this is the right list for this but maybe someone could point me in the right direction. We are getting some DB servers to test performance. I was wondering if anyone knows where we can get a test script of sorts that would allow us to check performance on each of the servers. TIA
RE: [SQL] sum of aggregates in one SELECT?
Could it be that the first query is using max, which will only return the max for b.lot and b.price... so maybe: select sum(max(b.lot))as quantity, max(b.price) as price from bid b, person p where b.auction_id = 84 and p.id = b.person_id ; Just a thought. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of Louis-David Mitterrand Sent: Tuesday, September 19, 2000 3:59 AM To: [EMAIL PROTECTED] Subject: [SQL] sum of aggregates in one SELECT? Hello, I have the following query/result: auction=# select max(b.lot) as quantity,max(b.price) as price,p.login from bid b, person p where b.auction_id = 84 and p.id = b.person_id group by p.login order by max(price); quantity | price | login --+---+--- 1 | 5000 | papy 12 | 5750 | cunctator 8 | 6000 | vindex (3 rows) Now I would like to sum() all results from the quantity column and return it with one SELECT statement. Is that possible? I am trying: auction=# select sum(b.lot) from bid b, person p where b.auction_id = 84 and p.id = b.person_id ; sum - 52 (1 row) But this is wrong because it sums all quantities. I don't know how to apply a valid WHERE clause in that case. Thanks in advance for any help, cheers, -- Louis-David Mitterrand - [EMAIL PROTECTED] - http://www.apartia.org
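Nested aggregates like sum(max(...)) aren't accepted directly, though; a working form wraps the grouped query in a subselect (a sketch using the tables from the quoted post):

SELECT sum(quantity) AS total_quantity
FROM (SELECT max(b.lot) AS quantity
      FROM bid b, person p
      WHERE b.auction_id = 84
        AND p.id = b.person_id
      GROUP BY p.login) AS per_login;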
RE: [SQL] How to represent a tree-structure in a relational database
> The way I'd do it if I had to do it again: > Give each record a unique id, generated by the application. > Denote levels with extra letters. > > So: > > AA - Automotive transport > AAAA - Cars > AAAB - Motorcycles > > The structure has the added bonus of making it very easy to > determine all the > sub-categories of a category, no matter how deep the tree is > below the category > you're looking at. With the first approach it is not possible > to do this in a > single SQL query. You could do this with a function, I guess. The problem with this method is if you need to insert a category, or move a category. You'll need to re-id a bunch of categories, and bubble those changes out to every table that refers to this table. Stuart Statman Director of Software Development Slam Media, Inc.
RE: [SQL] How to represent a tree-structure in a relational database
> What I am thinking now is that you would keep the index > in a separate index table linked with the primary > key in the articles table), which would have 6 or 7 fields > initially, and that you'd add columns with the alter table > command, if need be, to make the structure deeper. I would suggest, instead, to create a table that represents your hierarchy without adding columns. For example : create table Category ( CategoryID int4 not null primary key, ParentCategoryID int4 not null REFERENCES Category (CategoryID), CategoryName varchar(100) ); Add a CategoryID with an FK reference to this table, and your work is done. Then adding, inserting, removing, or moving layers in the hierarchy becomes quite simple. This also preserves hierarchical integrity, where subcategory a of subcategory b will also remain a subcategory of category c if subcategory b is a subcategory of subcategory c, where I'm not sure your model will preserve or guarantee that. (Does that sentence deserve a prize?) In general, if you know that you will need to periodically alter a table to add columns, you should come up with a different model that doesn't require adding columns. Stuart Statman Director of Software Development Slam Media, Inc.
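With this model, fetching an entire subtree in one statement is possible on modern PostgreSQL (8.4+) via a recursive CTE. A sketch against the Category table above, starting from a hypothetical root CategoryID = 1:

WITH RECURSIVE subtree AS (
    SELECT CategoryID, ParentCategoryID, CategoryName
    FROM Category
    WHERE CategoryID = 1   -- the category whose subtree we want
    UNION                  -- UNION (not ALL): a root row whose
                           -- ParentCategoryID points at itself would
                           -- otherwise recurse forever
    SELECT c.CategoryID, c.ParentCategoryID, c.CategoryName
    FROM Category c
    JOIN subtree s ON c.ParentCategoryID = s.CategoryID
)
SELECT * FROM subtree;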
RE: [SQL] How to represent a tree-structure in a relational database
[Josh Berkus] > I've done this before for one project. Here's what you do: > > CREATE TABLE sample_heirarchy ( > unique_id SERIAL PRIMARY KEY, > node_linkup INT4, > node_level INT2, > label VARCHAR(30) > datawhatever > ); > > Then you use the unique_id and node_linkup fields to create a hierarchy > of data nodes, with an indefinite number of levels, where the > node_linkup of each lower level equals the id of its parent record. For > example: > > id linkup level label data > 3 0 1 Node1 Node1 > 4 3 2 Node1.1 Node1.1 > 6 3 2 Node1.2 Node1.2 > 7 6 3 Node1.2.1 Node1.2.1 > 5 0 1 Node2 Node2 I don't think I'd be comfortable with having the node_level column in the table structure. First, because you can derive that value using a function, it's duplicate data. Second, if you decide to take an entire segment of your hierarchy and move it under another node (by changing the value of node_linkup/ParentCategoryID), you'll need to recalculate all of those node_level values. And all the node_level values underneath it. > You can then access the whole hierarchy through moderately complex, but > very fast-executing UNION queries. The one drawback is that you need to > know in advance the maximum number of levels (3 in this example), but > I'm sure someone on this list can find a way around that: I can think of another way to do this, though it would be a little complex and would involve temp tables. Select all of your top level nodes into a temp table. Create a new table with a new column for the new level. Select the children of the top level nodes into the temp table, followed by those top level nodes themselves, with a 0 in the new column and a flag indicating not to expand again. Create a new temp table just like the last but with another column for the new level, and repeat the above process from the first temp table to the second, only expanding the latest children, but copying all records over. Keep doing it until there are no more new children. Alternately, if you didn't need each level to have its own column, but didn't mind an x.x.x.x kind of notation, you could use one temp table, and just append '.0' to the end of every copied-over parent node. Basically, both methods are simulations of recursing the tree, but you get to do each level all at once using an insert ... select. If you wanted, you could even use a counter, to identify which level each node appeared in. Clearly, this could also be done with cursors and recursive functions. > 4. My PHP developer has reprogrammed the easily available PHP Tree > Control to use this table structure (I don't know if he's giving it > out, but he said it wasn't very difficult). We've done a similar thing for Java. It was ridiculously easy to create a TreeModel wrapped around this data. Almost too easy; it made me feel dirty. Stuart Statman Director of Software Development Slam Media, Inc.
Re: [SQL] Why would this slow the query down so much?
On Monday 15 October 2001 16:12 pm, Tom Lane wrote: > Stuart Grimshaw <[EMAIL PROTECTED]> writes: > > SELECT a.category, b.headline, b.added, c.friendlyname > > FROM caturljoin as a > > INNER JOIN stories as b ON (a.url = b.source) > > INNER JOIN urllist as c ON (a.url = d.urn) > > WHERE a.category = 93 ORDER BY b.added DESC LIMIT 1; > > (I assume "d.urn" is a typo for "c.urn"...) > > The query plan you show looks pretty reasonable if the planner's row > count estimates are in the right ballpark. How many caturljoin rows > have category = 93? How many stories rows will match each caturljoin > row? How many urllist rows ditto? There are 194 rows in caturljoin where category = 93, 29806 rows in stories will match those 194 rows and only 1 row in urllist will match. -- | Stuart Grimshaw <[EMAIL PROTECTED]> | Chief Operations Officer | Football Networks Ltd |- | t:07976 625221 | f:0870 7060260
[SQL] Why would this slow the query down so much?
I have 3 tables that I am trying to join together:

Table "caturljoin"
 Attribute | Type    |
-----------+---------+
 category  | integer |
 url       | integer |
Index: caturljoin_url caturljoin_cat

Table "stories"
 Attribute   | Type                   |
-------------+------------------------+
 urn         | integer                |
 headline    | character varying      |
 author      | character varying      |
 source      | integer                |
 story       | text                   |
 added       | date                   |
 description | character varying      |
 displayall  | smallint               |
 fullurl     | character varying(255) |
 publish     | smallint               |
 error       | integer                |
 sourceurl   | character varying(255) |
 sourcename  | character varying(100) |
 rank        | smallint               |
Indices: stories_added, stories_source, stories_unique_story, stories_urn_key

Table "urllist"
 Attribute    | Type                   |
--------------+------------------------+
 urn          | integer                |
 url          | character varying(255) |
 friendlyname | character varying(30)  |
 homepage     | character varying(255) |
 method       | smallint               |
 script       | character varying(20)  |
 params       | character varying(500) |
 collect      | smallint               |
 section      | smallint               |
 index_script | character varying      |
 regexp       | character varying(100) |
 baseurl      | character varying(75)  |
Index: urllist_urn

With the following SQL:

SELECT a.category, b.headline, b.added, c.friendlyname
FROM caturljoin as a
INNER JOIN stories as b ON (a.url = b.source)
INNER JOIN urllist as c ON (a.url = d.urn)
WHERE a.category = 93 ORDER BY b.added DESC LIMIT 1;

The results of explain for the above are:

psql:scratch.sql:5: NOTICE: QUERY PLAN:
Limit (cost=1587.30..1587.30 rows=1 width=44)
  -> Sort (cost=1587.30..1587.30 rows=1 width=44)
    -> Merge Join (cost=249.89..1587.29 rows=1 width=44)
      -> Sort (cost=249.89..249.89 rows=409 width=28)
        -> Nested Loop (cost=0.00..232.15 rows=409 width=28)
          -> Index Scan using caturljoin_cat on caturljoin a (cost=0.00..5.09 rows=7 width=8)
          -> Index Scan using stories_source on stories b (cost=0.00..34.41 rows=29 width=20)
      -> Index Scan using urllist_urn on urllist c (cost=0.00..1323.69 rows=505 width=16)
EXPLAIN

and as you might be able to guess the query takes an age to complete. If I remove the table urllist from the query, I get a much better response:

psql:scratch.sql:4: NOTICE: QUERY PLAN:
Limit (cost=0.00..207.74 rows=1 width=28)
  -> Nested Loop (cost=0.00..84945.18 rows=409 width=28)
    -> Index Scan Backward using stories_added on stories b (cost=0.00..2310.04 rows=16149 width=20)
    -> Index Scan using caturljoin_url on caturljoin a (cost=0.00..5.10 rows=1 width=8)
EXPLAIN

Currently the tables contain the following rows of data:

caturljoin: 653 rows
urllist: 505 rows
stories: 21554 rows

Can anyone tell me why the inclusion of urllist would slow it down so much, and what can I do to improve the speed of the query?
[SQL] Copying a rowtype variable.
Hello, Just doing a bit of PL/PGSQL so my first question is: 1) should PL/PGSQL questions be posted to the general mailing list, the sql mailing list or both? My second question is: 2) I am looking for an elegant way of copying a rowtype variable: eg. DECLARE current_row orf%ROWTYPE; previous_row orf%ROWTYPE; BEGIN LOOP -- use cursors or FOR SELECT to get values into current_row -- now try this: previous_row = current_row; END LOOP; END; Now, as I anticipated, this fails because a rowtype variable is a composite variable. One working alternative is to do: previous_row.pid = current_row.pid; previous_row.start = current_row.start; -- and so on and so forth for all current_row variables But this is inconvenient if the row has many fields and impossible if you want a flexible function which can accommodate rowtypes for which the fields are not known "a priori". Any clever workarounds? Cheers, Stuart. PS. I started using PGSQL five years ago... then I had to leave it alone for a while... I can't believe how much it's grown and developed. AMAZING, congrats to all developers. ---- -- Stuart C. G. Rison Department of Pathology and Infectious Diseases Royal Veterinary College London. United Kingdom. "That's bioinformatics... and we don't want to be doing bioinformatics, we want to do real science.", Sydney Brenner, UCL Pfizer Lecture, 8 May 2001 --
Re: [SQL] Copying a rowtype variable.
>> 2) I am looking for an elegant way of copying a rowtype variable: >> >> eg. >> >> DECLARE >> current_row orf%ROWTYPE; >> previous_row orf%ROWTYPE; >> BEGIN >> >> LOOP >> -- use cursors or FOR SELECT to get values into current_row >> -- now try this: >> >> previous_row = current_row; >> END LOOP; >> END; >> >> Now, as I anticipated, this fails because a rowtype variable is a >> composite >> variable. One working alternative is to do: >> > > I haven't tried this. One thing I notice above is that you're using > the equality operator "=" instead of the assignment operator ":=" . > Usually Postgres lets you slack on this, but it would be worth trying > to see whether that has an effect on the problem. > Fair point. But "previous_row := current_row" doesn't work either. > Another thing to try is, instead of a simple variable assignment > > SELECT current_row INTO previous_row; > > ... and see if that works. Well, I had high hopes for that one... but it didn't work either! > I'll tinker later today; there has to be a way to do it. I'd definitely appreciate further suggestions, but thanks all the same for your help. I have a feeling that you might have to write a PL function to perform the operation... but I haven't really thought about it! Stuart.
Re: [SQL] Copying a rowtype variable.
Josh has submitted an appropriate report to pgsql-bugs... so perhaps it will happen! Stuart. > -Original Message- > From: Jean-Luc Lachance [mailto:jllachan@;nsd.ca] > Sent: 07 November 2002 16:29 > To: Rison, Stuart > Cc: ''[EMAIL PROTECTED]' '; 'Josh Berkus ' > Subject: Re: [SQL] Copying a rowtype variable. > > I would personally like this feature (assigning a composite from another > similar composite) to be added to PLPGSQL. Another nice feature would be > to be able to insert a composite into a table without having to name all > attributes. > > Just my $.02 > > "Rison, Stuart" wrote: > > >> 2) I am looking for an elegant way of copying a rowtype variable: > >>
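For the record, whole-row assignment between %ROWTYPE variables does work in later PL/pgSQL releases, exactly as originally attempted. A minimal sketch, assuming the orf table from the original post ($$ quoting needs 8.0+):

CREATE OR REPLACE FUNCTION copy_row_demo() RETURNS void AS $$
DECLARE
    current_row  orf%ROWTYPE;
    previous_row orf%ROWTYPE;
BEGIN
    FOR current_row IN SELECT * FROM orf LOOP
        -- compare current_row with previous_row here...
        previous_row := current_row;   -- composite-to-composite assignment
    END LOOP;
END;
$$ LANGUAGE plpgsql;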
[SQL] Two TIMESTAMPs in one PL/pgSQL function
Hi, I'm trying to time a PL/pgSQL function using a rough and ready method, basically: print a TIMESTAMP at the beginning of the function, print a TIMESTAMP at the end of the function. So...: CREATE OR REPLACE FUNCTION timer() RETURNS INTEGER AS ' DECLARE timer1 TIMESTAMP; timer2 TIMESTAMP; num_operators INTEGER; BEGIN timer1 := ''now''; -- As suggested in 23.4 of programmer guide RAISE NOTICE ''Start: %'', timer1; /* Some function which takes time. Here, a select from a pg catalogue */ SELECT INTO num_operators COUNT(*) FROM pg_operator; timer2 := ''now''; RAISE NOTICE ''End: %'', timer2; RETURN(num_operators); END;' LANGUAGE 'plpgsql'; Gives me: testdb2=# select timer(); NOTICE: Start: 2002-11-26 13:40:14.116605+00 NOTICE: End: 2002-11-26 13:40:14.116605+00 timer --- 623 (1 row) I've tried all sorts of variations but I hit one of two problems; either the TIMESTAMP is fixed to the function compile time or, as above, the timer1 and timer2 TIMESTAMPs are always identical. Any help/pointers/suggestions appreciated... well of course a working solution would be the best ;) Cheers, Stuart.
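The two NOTICEs are identical because 'now' (and now()) is frozen at the start of the transaction, so both assignments see the same value. timeofday() returns the actual wall-clock time instead (on 8.2+ there is also clock_timestamp()); a sketch of the same function using it:

CREATE OR REPLACE FUNCTION timer_wallclock() RETURNS INTEGER AS '
DECLARE
    timer1 TIMESTAMP;
    timer2 TIMESTAMP;
    num_operators INTEGER;
BEGIN
    timer1 := timeofday()::timestamp;  -- wall clock, not transaction start
    RAISE NOTICE ''Start: %'', timer1;
    SELECT INTO num_operators COUNT(*) FROM pg_operator;
    timer2 := timeofday()::timestamp;
    RAISE NOTICE ''End: %'', timer2;
    RETURN num_operators;
END;' LANGUAGE 'plpgsql';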
[SQL] [SQL/BUGS] (possible) inconsistent behaviour of timestamp?
Here's an "interesting" timestamp related postgreSQL quirk:
testdb2=# select "timestamp"('now');
timestamp
2002-11-26 13:47:12.454157
(1 row)
testdb2=# select 'now'::timestamp;
timestamptz
--
2002-11-26 13:47:34.88358+00
(1 row)
testdb2=# select timestamp 'now';
timestamptz
---
2002-11-26 13:47:47.701731+00
(1 row)
The first SELECT returns a 'timestamp', but the next two return a
'timestamptz' (presumably with time zone); is this inconsistent behaviour?
Cheers,
Stuart.
Royal Veterinary College
London, UK
Re: [SQL] SQL query problem
Marek, Not sure but, try switching the lines db_data.mda_mod_con _CON, db_data.set_mda_fue _FUE with db_data.set_mda_fue _FUE, db_data.mda_mod_con _CON so the query is: SELECT _CON.con_id, _MOD.mod_ty, _VER.version, _YEA.year, _CON.dri_id, _CON.man_cod, _ENG.eng_pow FROM db_data.mda_mod _MOD, db_data.mda_mak_eng _ENG, db_data.set_mda_fue _FUE, db_data.mda_mod_con _CON LEFT JOIN db_data.mda_mod_ver _VER ON _VER.ver_id = _CON.ver_id LEFT JOIN db_data.mda_mod_yea _YEA ON _YEA.yea_id = _CON.yea_id WHERE _MOD.mod_id = '283' AND _CON.mod_id = _MOD.mod_id AND _CON.psd <= NOW() AND _CON.ped > NOW() AND _ENG.eng_id = _CON.eng_id AND _ENG.eng_fue = _FUE.fue_id --- Marek Lewczuk <[EMAIL PROTECTED]> wrote: > Hello, > I'm in the middle of the migration process from > MySQL to PostgreSQL and > I cannot understand why this query isn't working (in > MySQL it's working > fine). PG returns: ERROR: Relation "_con" does not > exist > > This is my query:
Re: [SQL] REPOST[GENERAL] Quoting for a Select Into - Please Help
Alex, Postgres's "select into" does not work like Oracle's. The "select into" for postgresql creates another table. Try this: select aliasvalue || trim(arrayval[i]) into newtablename from currenttablename where trim(searchfield) like '%' || trim(searchvalue) || '%'; Stuart --- A E <[EMAIL PROTECTED]> wrote: > Hi, > > Could someone help me with quoting this right? > > select into aliasvalue ''|| trim(arrayval[i]) ||'' > from ''|| trim(realname) ||'' where ''|| > trim(searchfield) ||'' like ''''%''|| > trim(searchvalue) ||''%''''; > > The parser does not seem to want to put the value of > the variables into the statement. > > Alex
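If the intent is that the table and column names live in variables (which the quoting suggests), plain PL/pgSQL statements won't substitute them; that takes dynamic SQL. A sketch using FOR ... EXECUTE, with quote_ident/quote_literal guarding the interpolated values; the function and its behaviour (counting matches) are invented for illustration, and the variable names follow the post:

CREATE OR REPLACE FUNCTION find_rows(text, text, text) RETURNS integer AS '
DECLARE
    realname    ALIAS FOR $1;
    searchfield ALIAS FOR $2;
    searchvalue ALIAS FOR $3;
    rec RECORD;
    n integer := 0;
BEGIN
    FOR rec IN EXECUTE ''SELECT * FROM '' || quote_ident(trim(realname))
            || '' WHERE '' || quote_ident(trim(searchfield))
            || '' LIKE '' || quote_literal(''%'' || trim(searchvalue) || ''%'')
    LOOP
        n := n + 1;   -- do something with rec here
    END LOOP;
    RETURN n;
END;
' LANGUAGE 'plpgsql';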
Re: [SQL] slow view
On 2006/10/11 Stuart wrote: > [..] Apologies for following up my own post, but after struggling with that query for over a day, I figured out the answer within thirty minutes of posting. (Sigh) My slow query was: > SELECT p.id AS pid, a.id AS aid, sub.bid AS bid > FROM p > JOIN a ON a.p=p.id > LEFT JOIN ( > SELECT a.id AS aid, b.id as bid > FROM a > LEFT JOIN b ON b.p=a.p > LEFT JOIN x ON x.a=a.id AND x.b=b.id > WHERE x.a IS NULL > ) AS sub ON sub.aid=a.id > WHERE p.id=1; Including the parent table p in the inner select was all it took. SELECT p.id AS pid, a.id AS aid, sub.bid AS bid FROM p JOIN a ON a.p=p.id LEFT JOIN ( SELECT p.id AS pid, a.id AS aid, b.id as bid FROM p JOIN a ON a.p=p.id LEFT JOIN b ON b.p=a.p LEFT JOIN x ON x.a=a.id AND x.b=b.id WHERE x.a IS NULL ) AS sub ON sub.aid=a.id AND sub.pid=p.id WHERE p.id=1;
[SQL] Constraint on multicolumn index
Hi, I am not sure this can be done but I'm trying to constrain a sorted set efficiently using a multicolumn index in postgres. The (simplified) scenario is this: CREATE TABLE T ( a INT, b INT, c INT ); CREATE INDEX t_idx ON T(a,b,c); Now I can sort using t_idx: select * from T order by a,b,c; -- all good, seq scan using t_idx I can constrain on a single variable fine: select * from T where (a=10 AND b=100 AND c>1000) order by a,b,c; -- does seq scan on t_idx and uses the index in the constraint as expected But if I want the next item following t=(a=10,b=100,c=1000): select * from T where (a=10 AND b=100 AND c>1000) OR (a=10 AND b>100) OR (a>10) order by a,b,c; then it just does an ordinary filter, and basically does a sequence scan with no intelligence which isn't great if you've got a table of 20 million items. Is there any way short of issuing 3 queries and joining them that I can do this? I had hoped to be able to compare (a,b,c)>(10,100,1000) but of course that evaluates to (a>10) and (b>100) and (c>1000). It feels like there should be a simple solution to this... please help :) Thanks Stuart
Re: [SQL] Constraint on multicolumn index
> > But if I want the next item following t=(a=10,b=100,c=1000): > > > select * from T > > where (a=10 AND b=100 AND c>1000) OR (a=10 AND b>100) OR (a>10) > > order by a,b,c; > > The correct way to handle this is to use a SQL-spec row comparison: > > where (a,b,c) > (10,100,1000) > > Unfortunately, that syntax does not work per-spec in any existing > Postgres release. It will work properly (and use the index) in > PG 8.2, for what that's worth. > > Not sure if there's any reasonable workaround in PG <= 8.1. > You might want to check this old thread: > http://archives.postgresql.org/pgsql-performance/2004-07/msg00188.php > (note that none of the first few responses got the point :-() Also > http://archives.postgresql.org/pgsql-hackers/2006-02/msg00209.php Thanks for the response. PG 8.2 looks like a good option when it is finalized (I see it is beta 3 so shouldn't be too long should it?), but I also need to have a closer look at the row constructor - this is a new one for me as I have been using MySQL up til now. The best solution I could come up with for my problem was to do a union - something like: (SELECT * from T WHERE (a=10 AND b=100 AND c>1000) ORDER BY a,b,c LIMIT 10) UNION (SELECT * from T WHERE (a=10 AND b>100) ORDER BY a,b,c LIMIT 10) UNION (SELECT * from T WHERE (a>10) ORDER BY a,b,c LIMIT 10) ORDER BY a,b,c LIMIT 10; which would use an index for each of the selects and then have to merge, sort and limit the results. This seemed to work although it gets clumsy if there are a whole lot of extra criteria. Thanks again for the help, Stuart
[SQL] equiv of ascii() function for unicode?
Does postgresql have a function that will give me the numeric unicode code point for a character in a unicode (aka utf8) database text string? That is, something like ascii() but that works for unicode characters?
[SQL] slowness when subselect uses DISTINCT
I have several times now run into what seems like similar performance problems with some of my postgresql queries. I have a view that runs reasonably quickly. I use this view in a subselect in another query and that query too runs reasonably quickly. The view returns some unwanted duplicate rows so I modify it using either DISTINCT or GROUP BY to eliminate them. View still runs reasonably quickly. I use the modified view as a subselect as above, but now the query runs 2-3 orders of magnitude more slowly than before. Before I go through the effort of putting together a specific and concise test case, has anyone seen this general pattern and have an explanation or advice? (PG-8.2.3)
Re: [SQL] slowness when subselect uses DISTINCT
Phillip Smith wrote: > May I suggest you post an EXPLAIN ANALYZE to the group for the query you're > having problems with...? I will do that but it has happened to me enough that it seems to be a general pattern, not something specific to one of my queries, so I thought some communal knowledge may exist. But apparently not.
[SQL] data dependent sequences?
Advice requested :-) I have a table like:
CREATE TABLE items (
id INT,
typ INT...
PRIMARY KEY (id,typ));
I would like 'id' to be like a SERIAL except that I
want independent sequences for each value of 'typ'.
So if 'items' is:
id typ
+-
1 'a'
2 'a'
3 'a'
1 'b'
then doing:
INSERT INTO items(typ) VALUES('a');
INSERT INTO items(typ) VALUES('b');
will result in:
id typ
+-
1 'a'
2 'a'
3 'a'
1 'b'
4 'a'
2 'b'
Because there can be dozens of values of 'typ'
and new ones added not infrequently, creating
a postgresql sequence for each seems awkward.
Are there other ways I could accomplish this?
[SQL] TIMESTAMP comparison problem
I have a problem in comparing a TIMESTAMP field with a timestamp
literal. I presume it is to do with the floating point representation of
the timestamp but I was wondering if there is an easy work around
without having to recompile postgres to use integer datetimes.
Basically if I issue a "SELECT * FROM T WHERE tstamp>'x';" I get
x back as the first row.
If I reduce the precision to 3 for the timestamps it appears to work
although it makes me nervous.
I am running postgresql 8.2.5 on NetBSD 3.
Should I just recompile to use integer datetimes? I would like to have
at least microsecond precision.
Thanks
Stuart
Table definition:
db=> \d+ Transactions;
Table "test.transactions"
     Column      |              Type              |                                Modifiers
-----------------+--------------------------------+-------------------------------------------------------------------------
 transaction_key | bigint                         | not null default nextval('transactions_transaction_key_seq'::regclass)
 time            | timestamp(6) without time zone | not null
Indexes:
"transactions_pkey" PRIMARY KEY, btree (transaction_key)
"transactions_time_index" btree ("time", transaction_key)
Has OIDs: no
Table contents:
db=> select transaction_key,time from Transactions;
transaction_key |time
-+
1 | 2008-01-22 09:33:34.681693
2 | 2008-01-22 09:33:34.98421
3 | 2008-01-22 09:33:36.270745
4 | 2008-01-22 09:33:38.573363
5 | 2008-01-22 09:33:38.496988
6 | 2008-01-22 09:33:39.995707
7 | 2008-01-22 09:33:40.111784
8 | 2008-01-22 09:33:41.415505
9 | 2008-01-22 09:33:42.328298
10 | 2008-01-22 09:33:42.025126
11 | 2008-01-22 09:33:44.802205
12 | 2008-01-22 09:33:45.257675
13 | 2008-01-22 09:33:46.746349
14 | 2008-01-22 09:33:46.513937
15 | 2008-01-22 09:33:46.735079
16 | 2008-01-22 09:33:47.528806
17 | 2008-01-22 09:33:49.20255
18 | 2008-01-22 09:33:51.724916
19 | 2008-01-22 09:33:52.550102
20 | 2008-01-22 09:33:54.698312
(20 rows)
Query with problem:
metadb=> select transaction_key,time from Transactions where time>'2008-01-22
09:33:46.746349';
transaction_key |time
-+
13 | 2008-01-22 09:33:46.746349 *** THIS SHOULDN'T BE HERE
16 | 2008-01-22 09:33:47.528806
17 | 2008-01-22 09:33:49.20255
18 | 2008-01-22 09:33:51.724916
19 | 2008-01-22 09:33:52.550102
20 | 2008-01-22 09:33:54.698312
(6 rows)
Re: [SQL] TIMESTAMP comparison problem
> > If I reduce the precision to 3 for the timestamps it appears to work
> > although it makes me nervous.
>
> With float timestamps, you're fooling yourself if you think those numbers
> past the decimal are reliable.
>
> > Should I just recompile to use integer datetimes? I would like to have
> > at least microsecond precision.
>
> Well, you can't get better than microsecond precision with timestamps in
> Postgres. And the only way you can rely on that level of precision is to
> compile with --enable-integer-datetimes.
>
> Michael Glaesemann

I thought that might be the case, thanks for the help, Stuart
[SQL] Slow GROUP BY query
I have a very simple table set: Transactions: transaction_key PRIMARY KEY client TEXT time TIMESTAMP LineItems transaction_key INT amount INT A query to print the contents of transactions with a sum of the line item amounts provides a very suboptimal result. The problem seems to be the GROUP BY clause as it doesn't use the primary index. Rewriting the query to only group on the transaction_key and returning the max of the other transaction fields results in a query of <1ms. (see queries below) Can anyone shed any light here, I would have expected the queries to take roughly the same time? Out of interest, since we are grouping by transaction_key which is unique, surely the other Transaction fields in the group by could be ignored by the planner? Thanks Stuart (running postgresql 8.2.5 on NetBSD 3) >> Slow query EXPLAIN SELECT t.transaction_key,t.cashier,t.time,SUM(l.amount) FROM Transactions t JOIN LineItems l USING (transaction_key) GROUP BY t.transaction_key,t.cashier,t.time ORDER BY t.transaction_key; QUERY PLAN --- Sort (cost=449.16..454.16 rows=2000 width=32) Sort Key: t.transaction_key -> HashAggregate (cost=314.50..339.50 rows=2000 width=32) -> Hash Join (cost=66.00..262.07 rows=5243 width=32) Hash Cond: (l.transaction_key = t.transaction_key) -> Seq Scan on lineitems l (cost=0.00..117.43 rows=5243 width=16) -> Hash (cost=41.00..41.00 rows=2000 width=24) -> Seq Scan on transactions t (cost=0.00..41.00 rows=2000 width=24) (8 rows) Fast query EXPLAIN SELECT t.transaction_key,MAX(t.cashier),MAX(t.time),SUM(l.amount) FROM Transactions t JOIN LineItems l USING (transaction_key) GROUP BY t.transaction_key ORDER BY t.transaction_key; QUERY PLAN - GroupAggregate (cost=0.00..459.11 rows=2000 width=32) -> Merge Join (cost=0.00..371.68 rows=5243 width=32) Merge Cond: (t.transaction_key = l.transaction_key) -> Index Scan using transactions_pkey on transactions t (cost=0.00..86.25 rows=2000 width=24) -> Index Scan using lineitems_transaction_index on lineitems l (cost=0.00..214.90 rows=5243 width=16) (5 rows)
[SQL] Am I wasting my time with partitions?
It seems to me that postgresql doesn't use indexes when being asked for
ordered result sets from a partitioned table. I have an application where
this is critical, but I was hoping to use partitions because of the ease of
rotating out old rows. Simply put, I have a table called LineItems which I
need to be able to page from and so I need to be able to ask for N rows
ordered on a certain index (with possible constraints). eg.

SELECT * FROM T ORDER BY col1,col2 LIMIT 10;

This works fine and is quick on a single table:

metadb=> \d lineitems
Table "test2.lineitems"
    Column    |              Type              | Modifiers
--------------+--------------------------------+-----------
 lineitem_key | bigint                         | not null
 time         | timestamp(6) without time zone | not null
 description  | text                           | not null
 barcode      | text                           | not null
 amount       | bigint                         | not null
Indexes:
    "lineitems_amount_index" btree (amount, lineitem_key)

metadb=> explain select * from lineitems order by amount,lineitem_key limit 10;
 QUERY PLAN
 Limit (cost=0.00..0.74 rows=10 width=49)
   -> Index Scan using lineitems_amount_index on lineitems (cost=0.00..39791.76 rows=535500 width=49)
(2 rows)

If I partition the table by creating a top level table L, and inherited
tables L1, L2 and issue the same request it does sequential scans on all the
tables and takes orders of magnitude longer (see below). In the example below
I would have hoped that it would have used an index scan on each of the
tables returning 10 rows each and then done a merge on them. Am I asking too
much? Should I just use a single table and take the hits on deletes and
vacuums? Regards Stuart

metadb=> \d L
Table "test2.l"
    Column    |              Type              | Modifiers
--------------+--------------------------------+-----------
 lineitem_key | bigint                         | not null
 time         | timestamp(6) without time zone | not null
 description  | text                           | not null
 barcode      | text                           | not null
 amount       | bigint                         | not null
Indexes:
    "l_amount_index" btree (amount, lineitem_key)

metadb=> \d L1
Table "test2.l1"
    Column    |              Type              | Modifiers
--------------+--------------------------------+-----------
 lineitem_key | bigint                         | not null
 time         | timestamp(6) without time zone | not null
 description  | text                           | not null
 barcode      | text                           | not null
 amount       | bigint                         | not null
Indexes:
    "l1_amount_index" btree (amount, lineitem_key)
Inherits: l

metadb=> \d L2
Table "test2.l2"
    Column    |              Type              | Modifiers
--------------+--------------------------------+-----------
 lineitem_key | bigint                         | not null
 time         | timestamp(6) without time zone | not null
 description  | text                           | not null
 barcode      | text                           | not null
 amount       | bigint                         | not null
Indexes:
    "l2_amount_index" btree (amount, lineitem_key)
Inherits: l

metadb=> explain select * from l order by amount,lineitem_key limit 10;
 QUERY PLAN
 Limit (cost=22207.70..22207.72 rows=10 width=88)
   -> Sort (cost=22207.70..23548.09 rows=536156 width=88)
     Sort Key: test2.l.amount, test2.l.lineitem_key
     -> Result (cost=0.00..10621.56 rows=536156 width=88)
       -> Append (cost=0.00..10621.56 rows=536156 width=88)
         -> Seq Scan on l (cost=0.00..16.90 rows=690 width=88)
Re: [SQL] Am I wasting my time with partitions?
It seems to me that postgresql doesn't use indexes when being asked for ordered result sets from a partitioned table. I have an application where this is critical, but I was hoping to use partitions because of the ease of rotating out old rows. metadb=> explain select * from l order by amount,lineitem_key limit 10; QUERY PLAN -> Seq Scan on l (cost=0.00..16.90 rows=690 width=88) -> Seq Scan on l1 l (cost=0.00..4951.00 rows=25 width=49) -> Seq Scan on l2 l (cost=0.00..5653.66 rows=285466 width=49) NB. Just addressing one of the inherited tables works fine. metadb=> explain select * from l1 order by amount,lineitem_key limit 10; Well, you don't have an index it can use to find the smallest (amount,lineitem) across all of lX. If PG was smart enough to figure out that it only needed to check l1, then you do. Unfortunately it isn't. You're right, it can't determine which of the partitions will have the smallest value, but what it could do is pull the smallest value from each and compare. In the absence of the LIMIT there wouldn't be much which could be done, but the LIMIT means it only actually needs to pull 10 rows from each partition. An alternative way of doing this would be: (SELECT * FROM L1 ORDER BY amount,lineitem_key LIMIT 10) UNION (SELECT * FROM L2 ORDER BY amount,lineitem_key LIMIT 10) ORDER BY amount,lineitem_key LIMIT 10; Unfortunately this means one can't just address the parent table, but it does essentially what I'd hoped postgres would do for me :) It would be quite a long query if there were 100 partitions! If you add the constraint you use to partition by, does that help you? I tried to strip the example down to its bare essentials but in this case I would be partitioning by lineitem_key and would obviously index and add a CONSTRAINT on that as well. I don't think it would help though, the query needs to merge from all tables. Thanks for the response, Stuart
[SQL] autovacuum not freeing up unused space on 8.3.0
It appears (and I am open to correction) that autovacuum is not operating correctly in 8.3.0. I have a vanilla installation where autovacuum is enabled, and is running with all the default settings. I have a table which is continually having rows added to it (~50/sec). For the sake of this example I am limiting it to 20000 rows, which means that I am continually having to remove rows (100 at a time) as I get to 20000. When I get to 20000 rows for the first time the table disk size (using pg_total_relation_size) is around 5MB. Since the autovacuum only kicks in after a while I would expect it to get a little bigger (maybe 6-7MB) and then level out as I am cycling through recovered rows. However the table disk size continues increasing basically linearly, and when I stopped it it was approaching 40MB and heading up. During that time I was running ANALYZE VERBOSE periodically and I could see the dead rows increase and then drop down as the autovacuum kicked in - the autovacuum worker process was running. It didn't seem to free any space though. In fact a VACUUM FULL at this point didn't help a whole lot either. I ran the same test but using manual VACUUMs every 60 seconds and the table size leveled out at 6.6MB, so it appears like a normal vacuum is working. I changed the normal VACUUM to have the same delay parameters (20ms) as the autovacuum and it still worked. So it appears to me like the autovacuum is not freeing up dead rows correctly. I turned on logging for autovacuum and ran the same test and saw the following messages:

LOG: automatic vacuum of table "metadb.test.transactions": index scans: 1
    pages: 0 removed, 254 remain
    tuples: 4082 removed, 19957 remain
    system usage: CPU 0.02s/0.02u sec elapsed 1.11 sec
LOG: automatic vacuum of table "metadb.test.transactions": index scans: 1
    pages: 0 removed, 271 remain
    tuples: 5045 removed, 19954 remain
    system usage: CPU 0.03s/0.03u sec elapsed 1.54 sec
ERROR: canceling autovacuum task
CONTEXT: automatic vacuum of table "metadb.test.transactions"

At this point I had deleted 32800 rows, as can be seen from the query below, although the logs only indicated that around 10000 rows had been freed up.

select min(transaction_key),max(transaction_key) from test.transactions;
  min  |  max
-------+-------
 32801 | 52750

Is there anything I have missed as far as setting this up is concerned, anything I could try? I would really rather use autovacuum than manage the vacuums of a whole lot of tables by hand... Thanks Stuart
PS. Running on NetBSD 3
---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [SQL] autovacuum not freeing up unused space on 8.3.0
It seems like pgsql-general would be the right list for this so I am going to post it there rather, sorry for the noise... Thanks Stuart
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
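For anyone chasing the same symptom, the statistics views show whether autovacuum is actually reaching the table; a sketch (view and column names as in the 8.3-era statistics collector, table name taken from the post above):

-- when did vacuum/autovacuum last touch the table, and how many dead rows remain?
SELECT relname, last_vacuum, last_autovacuum, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'transactions';

-- and the autovacuum configuration actually in effect:
SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';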
[SQL] changing multiple pk's in one update
Hello all, I have a table with a primary key column that contains sequential numbers. Sometimes I need to shift them all up or down by a fixed amount. For example, if I have four rows with primary keys 2, 3, 4, 5, I might want to shift them down by 1 by doing:

UPDATE mytable SET id=id-1

(where "id" is the pk column) so that the pk's are now 1, 2, 3, 4. When I try to shift them up by using +1 in the above update statement, I get (not surprisingly) a duplicate key error. I also realize that the -1 case above works only by luck. So my question: Is there some way, perhaps with ORDER BY, that I can achieve the change I want with a single update statement? (If I have an unused key range large enough, I suppose I could update all the keys to that range, and then back to my target range, but that requires two updates (there are a lot of foreign keys referencing these primary keys) and requires that I have an available range, so a single update statement would be preferable.) Thanks for any enlightenment.
-- Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] changing multiple pk's in one update
Jasen Betts wrote:
On 2009-04-08, Stuart McGraw wrote: Hello all, I have a table with a primary key column that contains sequential numbers. Sometimes I need to shift them all up or down by a fixed amount. For example, if I have four rows with primary keys 2, 3, 4, 5, I might want to shift them down by 1 by doing: UPDATE mytable SET id=id-1 (where "id" is the pk column) so that the pk's are now 1, 2, 3, 4. When I try to shift them up by using +1 in the above update statement, I get (not surprisingly) a duplicate key error. I also realize that the -1 case above works only by luck. So my question: Is there some way, perhaps with ORDER BY, that I can achieve the change I want with a single update statement? (If I have an unused key range large enough, I suppose I could update all the keys to that range, and then back to my target range, but that requires two updates (there are a lot of foreign keys referencing these primary keys) and requires that I have an available range, so a single update statement would be preferable.) Thanks for any enlightenment.

begin a transaction
suspend the constraint (use SET CONSTRAINTS ... DEFERRED)
drop the index
do the update(s)
recreate the index
commit the transaction.

I see no reason to keep the index (and its associated UNIQUE constraint) during the update; AFAICT all it does is slow the process down.

Thanks for the suggestion. Unfortunately I am doing this key renumbering in an interactive gui app, and as there are several million rows involved, rebuilding indexes takes too long. I have continued searching for other ways to do this, but most of my google results are floating in a sea of "update if insert fails, like mysql" results, so I still have a little hope it is possible. I thought I remembered seeing, a year or two ago, an update statement with an ordered subquery that avoided duplicate key errors, but I am probably misremembering.
-- Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
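One workaround in the spirit of the unused-range idea, but needing no free range and no index rebuild (a sketch, untested against the poster's schema; it assumes no negative ids exist, that the referencing FKs are ON UPDATE CASCADE, and it is still two statements rather than the single one asked for):

BEGIN;
-- step 1: move every key out of the occupied (positive) range
UPDATE mytable SET id = -id;
-- step 2: move them back, shifted up by one; -(-5)+1 = 6, etc.
UPDATE mytable SET id = -id + 1;
COMMIT;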
Re: [SQL] changing multiple pk's in one update
Scott Marlowe wrote:
2009/4/7 Stuart McGraw: Hello all, I have a table with a primary key column that contains sequential numbers. Sometimes I need to shift them all up or down by a fixed amount. For example, if I have four rows with primary keys 2, 3, 4, 5, I might want to shift them down by 1 by doing:

Generally speaking, when you need to do this more than once or twice in the lifetime of your data, there's something wrong with your data model.

Generally speaking, I agree. But in this case I am not sure how to do it any better. Simplified, the database models a dictionary. An entry in the dictionary can have a number of senses, where each sense is a sentence that gives the meaning of the sense. The order of the senses is significant. The sense table has three columns: an fk to the dictionary entry (id number) it is part of, a sense number (a small number from 1 to the number of senses in the entry), and the sentence text. The pk is of course the entry id and the sense number. There are other tables that have fk's to the senses.

I could dispense with the sense number and use the sentence text as the second part of the composite key, but the sentence text is far more volatile than the number, and can be very long, making it quite unwieldy to use as a key.

I guess I could add an "order"[1] column and use the sense number as a surrogate partial key to avoid the need for key renumbering, but all the api's (and the normal human way of thinking) are based on "sense number 1 of entry x", "sense number 2 of entry y", so one would need to maintain "order" as a gapless sequence (or add a new mapping layer to map from/to an arbitrary monotonic sequence to a 1,2,3,... sequence) -- the gain doesn't seem that big. Or maybe there is some other approach?

[1] yes, I know "order" is a reserved word.
-- Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] changing multiple pk's in one update
Glenn Maynard wrote:
(JMdict? yup. ;-) I was playing with importing that into a DB a while back, but the attributes in that XML are such a pain--and then my email died while I was trying to get those changed, and I never picked it up again.)

On Mon, Apr 13, 2009 at 1:20 PM, Stuart McGraw wrote: 1 to the number of sentences in the entry) and the sentence text. The pk is of course the entry id and the sense number. There are other tables that have fk's to the senses.

Your PK is a composite of (entry, order)? Won't your foreign keys elsewhere all break when you shift the order around?

As Richard Broersma pointed out, the fk's are all "cascade update".

I guess I could add an "order"[1] column and use the sense number as a surrogate partial key to avoid the need for key renumbering, but all the api's (and the normal human way of thinking) are based on "sense number 1 of entry x", "sense number 2 of entry y", so one would need to maintain "order" as a gapless sequence (or add a new mapping layer to map from/to an arbitrary monotonic sequence to a 1,2,3,... sequence) -- the gain doesn't seem that big.

Why not do this in the straightforward way: three separate fields: a regular, sequential PK; an FK to the entry; and an order number. Add an index on (entry_key, order_number). It's a little more expensive since you have a new column and index (the PK), but in a table with a lot of plain text that's probably insignificant. Now you can use the plain PK for your FK's.

I have done a few other simple database apps in the past and always did things that way. The current app also started that way, but I tried, experimentally, using composite keys, and was amazed how much simpler things became. For example, to copy an entry one duplicates the entry's rows (in all the related tables) and need only change the entry id number in the duplicated rows. When each child table has an independent serial id, this becomes more complicated and requires looping in the copy function (AFAICT). I've noted that a number of other things I was doing in the database api and in Postgresql functions similarly became simpler.

I'd anticipate other problems with changing your PKs. If you're referring to sense (100,3), and a concurrent user adds a sense, you may suddenly find yourself referring to a different sense due to them being reordered out from under you. You'd have similar problems if your sense is deleted entirely: instead of the row simply ceasing to exist (and resulting in predictable, checkable errors), you may end up silently referring to another sense.

I try to avoid a lot of problems like that by doing very little actual updating (in the database sense). Most changes are made to a new copy of an entry. When an update is actually done (such as to delete the original copy of an entry), it is done in a very short time interval with serializable isolation. Maybe I'm misunderstanding what you're doing, though.

You'd have to have no UNIQUE constraint on the (entry, order) composite index, though, or you'll have the same problem when you reorder them.

Since the data in the database is of primary importance, and the app(s) secondary, I really would like as much constraint checking in the database as possible.
-- Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
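For concreteness, the three-field layout suggested above might look like this (a sketch; every name is hypothetical, and the UNIQUE constraint is exactly the trade-off discussed, since renumbering under it hits the same duplicate-key problem):

CREATE TABLE sense (
    sense_id  serial PRIMARY KEY,   -- stable surrogate key other tables can FK to
    entry_id  int  NOT NULL REFERENCES entry,
    sense_num int  NOT NULL,        -- display order, renumberable
    gloss     text NOT NULL,
    UNIQUE (entry_id, sense_num)    -- enforces "one sense number per entry"
);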
[SQL] Re: how can we get total records in pg server?
Hello,
Here's a quick function to count the total number of rows in non-system tables:

DROP FUNCTION cnt_rows();
CREATE FUNCTION cnt_rows() RETURNS int4 AS '
DECLARE
    r RECORD;
    rb RECORD;
    tot int4;
    sqlstr text;
BEGIN
    tot := 0;
    FOR r IN select * from pg_tables where not tablename like ''pg_%'' LOOP
        sqlstr := ''SELECT count(*) AS cnt FROM '' || quote_ident(r.tablename) || '';'';
        FOR rb IN EXECUTE sqlstr LOOP
            RAISE NOTICE ''Number of rows in %=%'', r.tablename, rb.cnt;
            tot := tot + rb.cnt;
        END LOOP;
    END LOOP;
    RAISE NOTICE ''Total number of rows for all non-system tables=%'', tot;
    RETURN tot;
END;
' LANGUAGE 'plpgsql';

You'll also need to have the plpgsql language created for your database (eg: createlang plpgsql testdb). Then just execute the above script and run select cnt_rows(); in psql.
-Stuart

ORIGINAL MESSAGE:
hi all, consider the pg server with databases bhuvan uday guru. the need is to get TOTAL RECORDS of all the THREE DATABASES or atleast TOTAL RECORDS of ONE DATABASE. Seems to be simple. Thankx in advance and infact i was a newbie. Regards, Bhuvaneswar.
"There is nothing new under the sun, but there are lots of old things we don't know yet." -Ambrose Bierce

On Mon, 23 Jul 2001, omid omoomi wrote:
> you mean this ?
> select count(*) from tablefoo;
>
> >From: Bhuvan A <[EMAIL PROTECTED]>
> >To: [EMAIL PROTECTED]
> >Subject: [SQL] how can we get total records in pg server?
> >Date: Mon, 23 Jul 2001 20:03:42 +0530 (IST)
> >
> >
> >Hi all,
> >
> >how can we get the COUNT of total records in the db server?
> >
> >hope this could be simple for pg experts.
> >thankx in advance!
> >
> >Regards,
> >Bhuvaneswar.
> >========================================================
> > Eighty percent of married men cheat in America. The rest cheat in
> >Europe.
> > -- Jackie Mason
> >========================================================
> >
> >
> >---(end of broadcast)---
> >TIP 3: if posting/reading through Usenet, please send an appropriate
> >subscribe-nomail command to [EMAIL PROTECTED] so that your
> >message can get through to the mailing list cleanly
>
> _________________________________________________________________
> Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp
>
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
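If an exact count isn't required, the planner's stored estimates give a much cheaper approximation (a sketch; reltuples is only as fresh as the last VACUUM or ANALYZE on each table):

-- approximate total rows across ordinary, non-system tables
SELECT sum(reltuples) AS approx_rows
FROM pg_class
WHERE relkind = 'r'              -- ordinary tables only
  AND NOT relname LIKE 'pg_%';   -- same filter the function above uses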
[SQL] RE: Sequential select queries...??
If you want to know for each individual row whether both are equal or not you could do:

SELECT *, NOT ((name=description) IS NULL) AND (name=description) AS both
FROM t WHERE name='bleh' OR description='bleh';

If you wanted totals of the same you could do:

SELECT count(*) AS tot, NOT ((name=description) IS NULL) AND (name=description) AS both
FROM t WHERE name='bleh' OR description='bleh'
GROUP BY NOT ((name=description) IS NULL) AND (name=description);

I think this should be more efficient than UNIONs, but am not an expert on the query planner or executor.
- Stuart

> -Original Message-
> From: Mark Mikulec [SMTP:[EMAIL PROTECTED]]
> Sent: Wednesday, August 15, 2001 1:41 AM
> To: [EMAIL PROTECTED]
> Subject: Sequential select queries...??
>
> Hello,
>
> At first I thought what I was trying to do was simple and could be done
> easily - but alas, I've spent way too much time and could not figure out
> how to get the results in question.
>
> Let's say I have a table T comprised of id of type integer, name and
> description both of type text.
>
> What I'd like to do is the following:
>
> Select id from T where name = 'bleh';
>
> and
>
> Select id from T where description = 'bleh';
>
> and result both results in the same result set. That is, duplicate id's
> if they appear. So then I could do a GROUP BY and a COUNT to see how
> many appeared in only one, and how many appeared in both.
>
> Could someone help me? I've tried countless different sql queries, can't
> seem to get one to work. If I can just get those duplicate id's in the
> query.. then I'd be laughing and then I can complete my task.
>
> Thanks in advance,
>
> Mark
---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://www.postgresql.org/search.mpl
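Alternatively, the UNION route the poster describes does work if UNION ALL is used, so that duplicate ids survive for counting; a sketch against the same table:

SELECT id, count(*) AS hits
FROM (SELECT id FROM t WHERE name = 'bleh'
      UNION ALL
      SELECT id FROM t WHERE description = 'bleh') AS s
GROUP BY id;
-- hits = 2 means the id matched on both columns, 1 on only one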
Re: [SQL] Help On Postgresql
a) Either of these should return the year (2001). See section 4.7 of the
user guide
SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40');
SELECT date_part('year', TIMESTAMP '2001-02-16 20:38:40');
b) The serial datatype will do this. It is actually just an int4 with a
default value of nextval('<tablename>_<colname>_seq') on an automatically created sequence.
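In other words, declaring a column serial is roughly shorthand for the following (a sketch with invented names):

CREATE SEQUENCE mytable_id_seq;
CREATE TABLE mytable (
    id int4 NOT NULL DEFAULT nextval('mytable_id_seq')
);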
- Stuart
> -Original Message-
> From: Jaydip [SMTP:[EMAIL PROTECTED]]
> Sent: Thursday, August 23, 2001 8:51 AM
> To: [EMAIL PROTECTED]
> Subject: Help On Postgresql
>
> Dear Friends
>
> I am Jaydip Dewanji working as a Sr. Programmer for Ontrack Systems Ltd.
>
> I have some questions regarding Postgresql which are listed below:
>
> a) How do I retrieve the Year,Month,Day,Hr,Minute or Second value from a
> Date/DateTime variable.
>
> For Example : i> In Oracle to_char(dt,'YYYY') - for extracting the
> year value from a date variable dt
> ii> In SqlServer datepart(hh,dt) - for extracting
> the Hour value from a date variable dt
>
> b) Can I create an identity column in Postgresql the value of which column
> will start from 1 say and keep
> on incrementing by 1 OR how can I serve my purpose in any other way.
>
>
> Please do let me know if any one of you have any solution.
>
> Regards
>
> Jaydip
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] GRANT ALL ON TO GROUP failure
user is a reserved word, so you need to quote it (ie: "user" rather than user) whenever you reference it. Does group reader exist? If it does then it might be a case issue, as presumably you must have quoted user to create it, which also preserves case; so if you did the same for group reader and included any caps, reader will need to be quoted and in the same case.
- Stuart

> -Original Message-
> From: [EMAIL PROTECTED] [SMTP:[EMAIL PROTECTED]]
> Sent: Monday, September 03, 2001 5:35 PM
> To: [EMAIL PROTECTED]
> Subject: GRANT ALL ON TO GROUP failure
>
>
> Can anyone suggest, what is wrong with the following
> sql file? Somehow the semicolon causes an error?
>
> Jari
>
> class=# \i pg-def-group-grant.sql
> Granting ALL to ROOT
> CHANGE
> Granting ALL to USER
> psql:pg-def-group-grant.sql:48: ERROR: parser: parse error at or near
> "user"
> Granting SELECT to READER
> psql:pg-def-group-grant.sql:69: ERROR: non-existent group "reader"
>
> 01:-- Postgres create table rights
> 02:-- $Id: pg-def-group-grant.sql,v 1.1 2001/08/23 19:26:16 jaalto Exp $
> 03:--
> 04:--
> 05:-- GRANT allows the creator of an object to give specific permissions to
> 06:-- all users (PUBLIC) or to a certain user or group. Users other than the
> 07:-- creator don't have any access permission unless the creator GRANTs
> 08:-- permissions, after the object is created.
> 09:
> 10:\echo Granting ALL to ROOT
> 11:
> 12:GRANT ALL ON
> 13: bonus
> 14: , custid
> 15: , customer
> 16: , dept
> 17: , dual
> 18: , dummy
> 19: , emp
> 20: , item
> 21: , ordid
> 22: , ordx
> 23: , price
> 24: , prodid
> 25: , product
> 26: , sales
> 27: , salgrade
> 28: TO GROUP root;
> 29:
> 30:\echo Granting ALL to USER
> 31:
> 32:GRANT ALL ON
> 33: bonus
> 34: , custid
> 35: , customer
> 36: , dept
> 37: , dual
> 38: , dummy
> 39: , emp
> 40: , item
> 41: , ordid
> 42: , ordx
> 43: , price
> 44: , prodid
> 45: , product
> 46: , sales
> 47: , salgrade
> 48: TO GROUP user;
> 49:
> 50:
> 51:\echo Granting SELECT to READER
> 52:
> 53:GRANT SELECT ON
> 54: bonus
> 55: , custid
> 56: , customer
> 57: , dept
> 58: , dual
> 59: , dummy
> 60: , emp
> 61: , item
> 62: , ordid
> 63: , ordx
> 64: , price
> 65: , prodid
> 66: , product
> 67: , sales
> 68: , salgrade
> 69: TO GROUP reader;
> 70:
> 71:-- End of file
>
> --
>
> --
> http://tiny-tools.sourceforge.net/
> Swatch @time http://www.ryanthiessen.com/swatch/resources.htm
> Convert @time http://www.mir.com.my/iTime/itime.htm
---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://www.postgresql.org/search.mpl
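Concretely, the two failing statements in the file above would become something like this (a sketch; the table and group names are from the file, and reader must be created before it can be granted to):

GRANT ALL ON bonus, custid, customer TO GROUP "user";

CREATE GROUP reader;
GRANT SELECT ON bonus, custid, customer TO GROUP reader;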
Re: [SQL] Text/Image, JSP tomcat. How can I operate the text and image
Have a look at lo_import and lo_export, which can handle chunks at a time. There is also the bytea type, but you have to handle complete items with this, and it'd need some formatting: I believe zeros go to \000, ' to \' and \ to \\ as a bare minimum, but am not at all sure on that.
- Stuart

> -Original Message-
> From: Frank Zhu [SMTP:[EMAIL PROTECTED]]
> Sent: Tuesday, October 16, 2001 4:42 AM
> To: [EMAIL PROTECTED]
> Subject: Text/Image, JSP tomcat. How can I operate the text and image
> type field in Postgresql? only in java/jsp
>
> I want to store a long article in the Postgresql in Linux, how can I put the
> content into it and redraw it back to show? urgent. Thanks.
> I use JSP.
> I note that all database systems are very dull in BLOB. Why?
>
> Frank Zhu.
>
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
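For the large-object route, the server-side import/export calls look like this (a sketch; the paths are examples and 16405 stands in for whatever OID lo_import returns):

-- read a file on the server into a new large object, returning its OID
SELECT lo_import('/tmp/article.txt');

-- write a large object back out to a server-side file
SELECT lo_export(16405, '/tmp/article_copy.txt');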
Re: [SQL] Deleting obsolete values
DELETE FROM partitur WHERE EXISTS (SELECT * FROM partitur AS ss_partitur
WHERE ss_partitur.userid=partitur.userid AND ss_partitur.ts>partitur.ts);
Seems like it should delete all old values (however I
have not tested it)
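An equivalent formulation that some may find more direct, keeping only each userid's newest row (equally untested):

DELETE FROM partitur
WHERE ts < (SELECT max(ts) FROM partitur AS p2
            WHERE p2.userid = partitur.userid);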
- Stuart
> -Original Message-
> From: Haller Christoph [SMTP:[EMAIL PROTECTED]]
> Sent: Tuesday, October 16, 2001 5:45 PM
> To: [EMAIL PROTECTED]
> Subject: Deleting obsolete values
>
> This may look familiar to you - it was on the list last month.
> Consider the following table
> create table partitur
> (userid text, val integer, ts timestamp DEFAULT NOW() );
> Do some inserts
> insert into partitur values('Bart', 1440);
> insert into partitur values('Lisa', 1024);
> insert into partitur values('Bart', 7616);
> insert into partitur values('Lisa', 3760);
> insert into partitur values('Bart', 3760);
> insert into partitur values('Lisa', 7616);
> To retrieve the latest values (meaning the last ones inserted)
> Tom Lane wrote
> >This is what SELECT DISTINCT ON was invented for. I don't know any
> >comparably easy way to do it in standard SQL, but with DISTINCT ON
> >it's not hard:
> >SELECT DISTINCT ON (userid) userid, val, ts FROM partitur
> >ORDER BY userid, ts DESC;
>
> My question now is
> Is there a way to delete all rows the select statement did not
> bring up?
> After that *unknown* delete statement
> select userid, val, ts from partitur ;
> should show exactly the same as the SELECT DISTINCT ON (userid) ...
> did before.
>
> Regards, Christoph
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] ORDER the result of a query by date
andres javier garcia garcia wrote:
> Hello;
> I've got the result of a query that appears as:
>
>  cod_variable | cod_station | year | month | day | rain
> --------------+-------------+------+-------+-----+------
>         30201 |        7237 | 1953 |     1 |   1 |    2
>         30201 |        7237 | 1953 |     2 |   1 |    5
>         30201 |        7237 | 1953 |     3 |   1 |    0
>         30201 |        7237 | 1953 |     4 |   1 |   -3
> ...
>
> (Of course, thanks to Stephan Szabo for the method to obtain this
> from my strange source data. I didn't think this was possible.)
>
> After having done this query I've realized that I need the data to be
> ordered by date. Do you have any suggestion?
>
> Best regards
> --
> Javier

ORDER BY year,month,day should do it.
hth,
- Stuart
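If an actual date value is ever needed (for range filters, say), the three columns can be assembled into one; a sketch, with the table name invented:

SELECT *,
       to_date(year::text || '-' || month::text || '-' || day::text,
               'YYYY-MM-DD') AS obs_date
FROM rain_observations
ORDER BY obs_date;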
Re: [SQL] trying to learn plpqsql... so please forgive..
Michiel Lange wrote:
> Maybe this should be sent to novice... I was not certain, but if it
> should, please tell me so.
>
> The matter at hand is this:
>
> When I do an 'INSERT INTO ... VALUES ...'
> and on the table is a serial primary key named p_key.
> As I want this number to be auto-generated, but use it as a 'customer
> number', I want to create this function to return the value of this
> insert. I thought/hoped that this would work, but as there are some
> people dependent on this database, I dare not try out too much ;-)
> This would be the first time I used plpgsql, so I am not so certain
> about what I do.
>
> CREATE FUNCTION add_cust() RETURNS INT4 AS ' -- SERIAL data type is
> really an INT4 (and some more).
> BEGIN
> RETURN NEW.p_key;
> END;
> ' LANGUAGE 'plpgsql';
>
> CREATE TRIGGER add_cust BEFORE INSERT ON table
> FOR EACH ROW EXECUTE PROCEDURE add_cust();
>
>
> Someone willing to evaluate this for me, and telling me if it is safe
> to use as it is? or things I may do wrong?
>
> TIA,
> Michiel
>
>
Trigger functions can only return type OPAQUE which isn't seen by the client program.
To get the value of the serial field for the last insert do:
SELECT currval('TableName_SerialFieldName_seq');
This will get the last value from the sequence used by this connection (although it will error if no values have been requested).
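So rather than a trigger, the usual pattern is simply (a sketch; table and sequence names invented, following the naming convention above):

INSERT INTO customers (name) VALUES ('Michiel');
SELECT currval('customers_p_key_seq');  -- the p_key just assigned, per connection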
hth,
- Stuart
Re: [SQL] help optimise this ?
Peter Galbavy wrote:
> I have a table of image 'instances' where the columns include:
>
> md5 char(32), -- the 'original' image md5 key
> file_md5 char(32) primary key, -- the md5 of each version of an image
> image_width int,
> image_length int
>
> I want to then find either the largest (max) or smallest (min)
> version of an image that falls within some range of sizes:
>
> e.g.
>
> select file_md5 from image_instance
> where image_width =
> (select min(image_width) from image_instance where md5 =
> '546b94e94851a56ee721f3b755f58462')
> and image_length =
> (select min(image_length) from image_instance where md5 =
> '546b94e94851a56ee721f3b755f58462')
> and md5 = '546b94e94851a56ee721f3b755f58462'
> and image_width between 0 and 160
> and image_length between 0 and 160;
>
> Now, having to do three selects on 'md5' to limit the search seems a
> little unoptimal to me. Note that the test tables are small and I
> have no other indexes apart from the 'primary key' constraint yet -
> this is not my primary concern at this point, I would just like
> cleaner SQL.
>
> All I want back is (for some definition) the 'file_md5' that best
> matches my min/max criteria.
>
> I have not - and will leave for now - the case where a cropped image
> results in a scale change between width and length such that the
> min/max test returns a different set of rows for each dimension. Argh.
>
> Any help given is greatly appreciated.
>
> rgds,
> --
> Peter

If you are willing to use a pgsqlism, how about:

select file_md5 from image_instance
WHERE md5 = '546b94e94851a56ee721f3b755f58462'
AND image_width between 0 and 160
AND image_length between 0 and 160
ORDER BY image_width::int8*image_length::int8
LIMIT 1

This should get the smallest overall image size within your bounds. It might be faster to do ORDER BY image_width,image_length LIMIT 1, but this wouldn't necessarily give the smallest if the aspect ratio changed.
hth,
- Stuart
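If the table grows, an index matching that ORDER BY expression keeps the LIMIT 1 plan cheap; a sketch (expression indexes of this form need a reasonably recent PostgreSQL release):

CREATE INDEX image_instance_area_idx
    ON image_instance (md5, (image_width::int8 * image_length::int8));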
Re: [SQL] How does postgres handle non literal string values
I'm guessing it would have to be more like:
<%
String insertString =
"INSERT INTO \"login\" (\'user\', \'password\')
VALUES ('"+username+"', '"+password+"')";
%>
to actually concatenate a string including the username & password variables, however I've not really used Java much so you might want to ask on the [EMAIL PROTECTED] list.
hth,
- Stuart
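A safer pattern than pasting the variables into the SQL string (which is also open to SQL injection) is a parameterized statement; in JDBC that is PreparedStatement, and the SQL-level analogue looks like this (a sketch, assuming a release with PREPARE support):

PREPARE add_login(text, text) AS
    INSERT INTO "login" ("user", password) VALUES ($1, $2);

EXECUTE add_login('someuser', 'somepassword');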
[EMAIL PROTECTED] wrote:
> Hi All,
>
> I have some jsp code that should insert a user name and password into
> a table called login.
>
> Instead of inserting the values given by the client, it inserts the
> literal strings 'username' and 'password'. The problem is somewhere in
> the INSERT statement.
>
> Here is the code:
>
> <%@page contentType="text/html"%>
> <%@page import="java.io.*" %>
> <%@page import="java.sql.*" %>
> <%@page import="java.util.*" %>
>
>
> JSP login
>
>
> <%--
> class="package.class" /> --%> <%--
> name="beanInstanceName" property="propertyName" /> --%>
>
> <%
> String username = request.getParameter("username");
> String password = request.getParameter("password");
> String confirmpw = request.getParameter("password2"); String dbName =
> "storedb";
>
>
> Connection conn = null;
> Statement stmt = null;
>
>
> String usr = "postgres";
> String passwd = "Wimdk12";
>
> if (username != null)
> username = username.trim();
> if (password != null)
> password = password.trim();
> if(confirmpw != null)
> confirmpw = confirmpw.trim();
> if (username != null &&
> username.length() > 0) {
> if (password != null &&
> password.length() > 0) {
> if (confirmpw != null &&
> confirmpw.length() > 0) {
> if (password.equals(confirmpw)) {
> %>
> Loading the driver <%
> String url = "" + dbName;
>
> // Load the driver
> Class.forName("org.postgresql.Driver");
> // Connect to database
> conn = DriverManager.getConnection(url, usr,
> passwd); stmt = conn.createStatement(); %>
>
> Connecting to the data base <%
> String insertString =
> "INSERT INTO \"login\" (\'user\',
> \'password\') VALUES ('username', 'password')";
> %>
>
> Updating table
> <%
> stmt.executeUpdate(insertString);
>
> %>
> Checking result
> <%
> ResultSet rset = stmt.executeQuery("SELECT *
> FROM login");
>
>
> while (rset.next()) {
> System.out.println(
> rset.getString("user") + ":" +
> rset.getString("password"));
> } %>
> Closing connection <%
> rset.close();
> stmt.close();
> conn.close();
>
> %>
> Congratulations <%= username %>! your account has been created
>
>
> <%
> } else { %>
> Sorry! Account not created. passwords do
> not match <%
> }
> } else { %> Sorry! Account not
> created. passwords do not match
> <%
> }
> } else { %>
> Sorry! Account not created. Please enter a
> confirmation password <%
> }
> } else { %>
> Sorry! Account not created. Please enter a
> password <%
> }
> } else { %>
> Sorry! Account not created. Please enter a
> username <%
> } %>
>
>
>
Re: [SQL] Problem with timestamp
Beatrice Yueksel wrote:
> Hello,
>
> I try to check the time needed by a function.
> I would like to:
> . select current_timestamp,
> . execute 5000 times a function
> . select the current_timestamp and return the interval.
> Problem: the interval is always '00:00', the two timestamps have
> always the same value.
> Could you help me? Have you any suggestions for testing sql functions
> speed?
> Thank you in advance,
> Béatrice
>
> create function test_function() returns interval
> as '
> declare
>     j integer;
>     t1 timestamp;
>     t2 timestamp;
>     t3 interval;
>     x integer;
> begin
>     select into t1 now();
>     FOR j in 0..5000 loop
>         select into x get_function_to_test();
>     end loop;
>     select into t2 now();
>     t3 := t2 - t1;
>     RAISE NOTICE '' from % to % = %'',t2,t1,t3;
>     return t3;
> end;
> ' language 'plpgsql';

Use timeofday(), as the others have the same value throughout the transaction.
- Stuart
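A minimal illustration of the difference (timeofday() returns text, so cast it before doing arithmetic):

SELECT now() AS txn_time,
       timeofday()::timestamp AS wall_time;
-- run this twice in one transaction: txn_time repeats, wall_time advances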
Re: [SQL] sort for ranking
Could you do something like the following:
SELECT sum_user,
       (SELECT count(sum_user)+1 FROM tbl_sums AS t
        WHERE t.sum_user > tbl_sums.sum_user) AS ranking
FROM tbl_sums ORDER BY ranking
hth,
- Stuart
P.S. Sorry about format change, the disclaimer adder forces it :(
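If the sequence approach is kept, the numbering has to happen after the sort, e.g. by ordering in a subselect first (a sketch; PostgreSQL evaluates the outer nextval() in the subquery's row order in practice, but this ordering is not formally guaranteed):

SELECT setval('tipp_eval_seq', 1, false);

SELECT sum_user, nextval('tipp_eval_seq') AS ranking
FROM (SELECT sum_user FROM tbl_sums
      ORDER BY sum_user DESC) AS ordered;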
> -Original Message-
> From: Andreas Schmitz [mailto:[EMAIL PROTECTED]]
> Sent: 07 July 2003 15:14
> To: [EMAIL PROTECTED]
> Subject: [SQL] sort for ranking
>
>
>
> Hello *,
>
> I have a little problem that confuses me. We are gathering
> values from a table
> as a sum to insert them into another table. I also need to
> get a ranking at
> insert (i.e. Highest points will get first place and so on).
> I tried to
> involve a sequence to qualify the ranking by select at insert.
>
> So I tried the following (smaller example)
>
> select setval('tipp_eval_seq',1);
> select sum_user,nextval('tipp_eval_seq')-1 as ranking from
> tbl_sums order by
> ranking desc, user_sum asc;
>
> user_sum | ranking
> ----------+---------
> 46 | 30
> 45 | 26
> 44 | 28
> 43 | 25
> 42 | 1
> 41 | 2
> 39 | 3
> 38 | 27
> 36 | 19
> 35 | 18
> 34 | 20
> 31 | 24
> 30 | 17
> 29 | 15
> 28 | 16
> 27 | 12
> 26 | 11
> 25 | 23
> 24 | 21
> 23 | 10
> 19 | 13
> 16 | 9
> 12 | 7
> 11 | 8
> 10 | 29
> 8 | 6
> 7 | 5
> 6 | 14
> 2 | 4
> 1 | 22
> (30 rows)
>
>
> As you can see, the sums are sorted correctly but the ranking
> is a mess. I
> recognized that the select seems to follow primarily the
> internal table
> order. Is there any way to solve this nicely. Hints and solutions are
> appreciated.
>
> Thanks in advance
>
> -Andreas
>
>
Re: [SQL] Cannot insert dup id in pk
I suspect the sequence is out of sync with the values actually in your primary key (which I guess is fid).
Try this query
SELECT setval('public.fdata_fid_seq'::text, MAX(fid)+1) FROM fdata;
This should set the value of the sequence to the current maximum+1
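To confirm the diagnosis before or after running the setval, compare where the sequence is with where the data is (names taken from the \d output below):

SELECT last_value FROM fdata_fid_seq;   -- where the sequence thinks it is
SELECT max(fid) FROM fdata;             -- where the data actually is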
hth,
- Stuart
P.S. Sorry about the format change; the disclaimer adder forces it.
> -Original Message-
> From: Scott Cain [mailto:[EMAIL PROTECTED]]
> Sent: 15 July 2003 14:00
> To: [EMAIL PROTECTED]
> Subject: [SQL] Cannot insert dup id in pk
>
>
>
> Hello,
>
> I sent this question yesterday morning, but it was not
> allowed because I
> wasn't subscribed to the list. If it did make it through, I
> apologize
> for the dup.
>
> I am having strange behavior when I try to do an insert.
> Postgres tells
> me that it cannot insert a duplicate key into a primary key
> index, when
> I am not trying to insert into it. Can anyone shed light on
> why this is
> happening, or point out the error of my ways?
>
> Here are the details:
> wormbase=> \d fdata
> Table "public.fdata"
>     Column     |          Type          |                     Modifiers
> ---------------+------------------------+----------------------------------------------------
> fid | integer | not null default
> nextval('public.fdata_fid_seq'::text)
> fref | character varying(100) | not null default ''
> fstart | integer | not null default '0'
> fstop | integer | not null default '0'
> fbin | double precision | not null default '0.00'
> ftypeid | integer | not null default '0'
> fscore | double precision |
> fstrand | character varying(3) |
> fphase | character varying(3) |
> gid | integer | not null default '0'
> ftarget_start | integer |
> ftarget_stop | integer |
> Indexes: pk_fdata primary key btree (fid),
> fdata_fref_idx btree (fref, fbin, fstart, fstop,
> ftypeid, gid),
> fdata_ftypeid_idx btree (ftypeid),
> fdata_gid_idx btree (gid)
> Check constraints: "chk_fdata_fstrand" ((fstrand =
> '+'::character varying) OR (fstrand = '-'::character varying))
> "chk_fdata_fphase" (((fphase =
> '0'::character varying) OR (fphase = '1'::character
> varying)) OR (fphase = '2'::character varying))
>
> Now a chunk from my query log:
> Jul 14 12:48:47 localhost postgres[2998]: [107-1] LOG:
> query: INSERT INTO fdata
> (fref,fstart,fstop,fbin,ftypeid,fscore,fstrand,fphase,gid,ftar
> get_start,ftarget_stop)
> Jul 14 12:48:47 localhost postgres[2998]: [107-2]
> VALUES('XX',7327656,7327658,'1000.007327','41',NULL,'+',NULL,'
> 12358',NULL,NULL)
> Jul 14 12:48:47 localhost postgres[2998]: [108] ERROR:
> Cannot insert a duplicate key into unique index pk_fdata
>
> Note that I do not try to insert anything into fid, the primary key on
> this table. Why does Postgres think I am?
>
> Thanks much,
> Scott
>
> --
> --
> --
> Scott Cain, Ph. D.
> [EMAIL PROTECTED]
> GMOD Coordinator (http://www.gmod.org/)
> 216-392-3087
> Cold Spring Harbor Laboratory
>
>
> ---(end of
> broadcast)---
> TIP 5: Have you checked our extensive FAQ?
>
> http://www.postgresql.org/docs/faqs/FAQ.html
>
