[SQL] Insert Rule, Multiple Insert, Update Problem
Hi, I am having trouble with an insert rule that also does an update. It works fine for a single insert, but multiple inserts in a single statement don't behave as I expected.

An outline of the problem: a table "trans" has a "client_id" and a transaction amount called "points". Another table "clients" also holds the "client_id" and an accumulated amount "total_points". I create a rule for the trans table:

    create rule trans_insert as on insert to trans do also
        update clients set total_points = total_points + NEW.points
        where client_id = NEW.client_id;

This works fine for a single insert, but if I dump multiple transactions for multiple clients into the trans table, only the first transaction for each client is accounted for in clients.total_points. I am just about to try implementing this as a trigger, but it is driving me crazy why this won't work as a rule. If I instead put something like the following in the rule:

    insert into foo values(NEW.points);

then every transaction's points value is copied to foo.

Any help is very appreciated.

Thanks,
Scott.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
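For the record, the trigger approach the poster mentions would sidestep the problem, likely because a rule is a macro expanded once per statement (so a single UPDATE can touch each client row at most once), while a trigger fires once per inserted row. A minimal sketch in era-appropriate plpgsql, assuming the trans/clients tables described above:

```sql
-- maintain clients.total_points row by row instead of via a rewrite rule
CREATE FUNCTION trans_insert_trg() RETURNS trigger AS '
BEGIN
    UPDATE clients
       SET total_points = total_points + NEW.points
     WHERE client_id = NEW.client_id;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER trans_insert_points
    AFTER INSERT ON trans
    FOR EACH ROW EXECUTE PROCEDURE trans_insert_trg();
```

With this in place, every row of a multi-row insert into trans is accumulated into clients.total_points.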
[SQL] Granting of permissions on tables
I have PostgreSQL v7.0.2 running on an HP-UX 11.00 box. As the owner of some tables, I granted permissions to another user. I executed the following command from psql:

    GRANT ALL on to ;

After running the command I lost permissions to the tables once I exited psql. I had to run psql as the postgres user to correct the situation. Is this a bug or desired behavior? I would imagine that since I owned the tables and then granted permissions to another user, I wouldn't lose my permissions.

Any help or explanation would be appreciated.

Thanks,
Scott Saltsgaver
RE: [SQL] Granting of permissions on tables
After I ran into this condition, the first thing I tried was to grant permissions back to myself. PostgreSQL shot me down with a permission denied error, so I had to log in as the superuser and then grant permissions to myself. Thanks for everyone's help.

So would an acceptable workaround be to grant permissions to yourself first and then to all other users?

Scott

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Thursday, October 05, 2000 5:38 PM
To: Saltsgaver, Scott
Cc: '[EMAIL PROTECTED]'
Subject: Re: [SQL] Granting of permissions on tables

"Saltsgaver, Scott" <[EMAIL PROTECTED]> writes:
> Is this a bug or desired behavior? I would imagine since I owned the tables
> and then granted permissions to another user, I wouldn't lose my
> permissions.

It's a bug, or at least a misfeature. As long as you haven't done any explicit grants or revokes, 7.0 uses an implicit access control list that grants all privileges to the owner and none to anyone else. However, the moment you do any explicit grant/revoke, that implicit ACL entry for the owner isn't used anymore. You have to explicitly grant rights to yourself again :-(. You don't need superuser help to do this, you just have to do GRANT ALL ON table TO yourself as the table owner. But it's stupid to have to do that when it's supposed to be the default condition. Fixed for 7.1.

regards, tom lane
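Tom's workaround spelled out as SQL (a sketch; mytable, owner_name, and other_user are placeholder names):

```sql
-- 7.0 workaround: as the table owner, re-grant to yourself first,
-- then grant to the other user
GRANT ALL ON mytable TO owner_name;
GRANT ALL ON mytable TO other_user;
```

With the owner's explicit ACL entry in place, later grants no longer lock the owner out.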
[SQL] unwritable /tmp causes initdb to crash
this is a goofy situation. We had a new Linux 2.4.4 kernel with a /tmp not writable by postgres. Granted, this is categorically hosed, but initdb still dumped core under these circumstances. Changing /tmp to writable immediately fixed the problem.

-j

=
John Scott
Senior Partner
August Associates
web: http://www.august.com/~jmscott
email: [EMAIL PROTECTED]
[SQL] Get A Tree from a table
I'd like to do something like this (build a tree from a relationship). I have a table like this:

    NODES
        ParentID   varchar(20)
        ParentType varchar(20)
        ChildID    varchar(20)
        ChildType  varchar(20)

What is in the table:

    'A1', 'A', 'B1', 'B'
    'A2', 'A', 'B2', 'B'
    'A1', 'A', 'B3', 'B'
    'B1', 'B', 'C1', 'C'
    'B1', 'B', 'C2', 'C'
    'C1', 'C', 'D1', 'D'
    'A1', 'A', 'B4', 'B'
    'B1', 'B', 'C5', 'C'

Now I'd like to get all nodes with 'A1' as the root, to get a result like this:

    'A1', 'A', 'B1', 'B'
    'B1', 'B', 'C1', 'C'
    'C1', 'C', 'D1', 'D'
    'B1', 'B', 'C2', 'C'
    'B1', 'B', 'C5', 'C'
    'A1', 'A', 'B3', 'B'
    'A1', 'A', 'B4', 'B'

Or I'd just like to get another tree starting with 'B1', like this:

    'B1', 'B', 'C1', 'C'
    'C1', 'C', 'D1', 'D'
    'B1', 'B', 'C2', 'C'
    'B1', 'B', 'C5', 'C'

How can I do this with SQL and SQL functions?

Thanks a lot and regards,
Scott
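Later PostgreSQL releases (8.4 and up) can walk such an adjacency list directly with a recursive CTE; a sketch assuming the NODES table above:

```sql
-- all edges reachable from root 'A1' (pass 'B1' instead for the other tree)
WITH RECURSIVE subtree AS (
    SELECT parentid, parenttype, childid, childtype
      FROM nodes
     WHERE parentid = 'A1'
    UNION ALL
    SELECT n.parentid, n.parenttype, n.childid, n.childtype
      FROM nodes n
      JOIN subtree s ON n.parentid = s.childid
)
SELECT * FROM subtree;
```

On versions without WITH RECURSIVE, the same walk has to be done level by level in a SQL or plpgsql function.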
Re: [SQL] point <-> polygon not supported?
Tomasz,
This works!
Thanks,
Scott Ding
-Original Message-
From: Tomasz Myrta [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 20, 2003 9:06 AM
To: Scott Ding
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] point <-> polygon not supported?
Scott Ding wrote:
> Tomasz,
>
> Thanks for the tip. What I want to do is something like the following:
>
> Select * from table where (table.geom <-> polygon('((3,3), (3,4),
> (4,5), (5,4), (5,3))')) < 1.0;
>
> Table.geom is a point column.
I have never used geometric types...
I think you can use some data type which can be used for computing
distance, for example circle:
Select * from table where (circle(table.geom,0) <-> polygon('((3,3),
(3,4), (4,5), (5,4), (5,3))')) < 1.0;
Regards,
Tomasz Myrta
[SQL] Cannot insert dup id in pk
Hello,
I sent this question yesterday morning, but it was not allowed because I
wasn't subscribed to the list. If it did make it through, I apologize
for the dup.
I am having strange behavior when I try to do an insert. Postgres tells
me that it cannot insert a duplicate key into a primary key index, when
I am not trying to insert into it. Can anyone shed light on why this is
happening, or point out the error of my ways?
Here are the details:
wormbase=> \d fdata
                                  Table "public.fdata"
    Column     |          Type          |                       Modifiers
---------------+------------------------+---------------------------------------------------------
 fid           | integer                | not null default nextval('public.fdata_fid_seq'::text)
 fref          | character varying(100) | not null default ''
 fstart        | integer                | not null default '0'
 fstop         | integer                | not null default '0'
 fbin          | double precision       | not null default '0.00'
 ftypeid       | integer                | not null default '0'
 fscore        | double precision       |
 fstrand       | character varying(3)   |
 fphase        | character varying(3)   |
 gid           | integer                | not null default '0'
 ftarget_start | integer                |
 ftarget_stop  | integer                |
Indexes: pk_fdata primary key btree (fid),
         fdata_fref_idx btree (fref, fbin, fstart, fstop, ftypeid, gid),
         fdata_ftypeid_idx btree (ftypeid),
         fdata_gid_idx btree (gid)
Check constraints: "chk_fdata_fstrand" ((fstrand = '+'::character varying) OR (fstrand = '-'::character varying))
                   "chk_fdata_fphase" (((fphase = '0'::character varying) OR (fphase = '1'::character varying)) OR (fphase = '2'::character varying))
Now a chunk from my query log:
Jul 14 12:48:47 localhost postgres[2998]: [107-1] LOG: query: INSERT INTO fdata
(fref,fstart,fstop,fbin,ftypeid,fscore,fstrand,fphase,gid,ftarget_start,ftarget_stop)
Jul 14 12:48:47 localhost postgres[2998]: [107-2]
VALUES('XX',7327656,7327658,'1000.007327','41',NULL,'+',NULL,'12358',NULL,NULL)
Jul 14 12:48:47 localhost postgres[2998]: [108] ERROR: Cannot insert a duplicate key
into unique index pk_fdata
Note that I do not try to insert anything into fid, the primary key on
this table. Why does Postgres think I am?
Thanks much,
Scott
--
Scott Cain, Ph. D. [EMAIL PROTECTED]
GMOD Coordinator (http://www.gmod.org/) 216-392-3087
Cold Spring Harbor Laboratory
Re: [SQL] Cannot insert dup id in pk
On Tue, 2003-07-15 at 10:43, Dmitry Tkach wrote:
> You must have your sequence out of date with the content of the table
> (so that the next value in the sequence has already been inserted).
> One way to get into a situation like that is loading the table data with
> COPY (the input contains the pks, and the COPY command does not update
> the sequence, you have to do that manually after the copy is done).

Yes, this is exactly what happened. I had no idea that COPY didn't update the sequence. I suspect I've got users who are being bitten by this and don't realize it. I'll have to change my "bulk loading" script to update the sequence after the load is done.

Thanks much,
Scott
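The usual manual fix, using the fdata table from the earlier post: after the COPY, point the sequence just past the highest loaded key.

```sql
-- resynchronize the sequence with the data loaded by COPY
SELECT setval('public.fdata_fid_seq', (SELECT max(fid) FROM fdata));
```

Subsequent inserts that rely on the nextval() default will then no longer collide with rows loaded in bulk.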
[SQL] OR vs UNION
Hello,

I have a query that uses a series of ORs, and I have heard that sometimes this type of query can be rewritten to use UNION instead and be more efficient. Are there any rules of thumb for when this might be the case? As an example, here is a query of the type I am discussing:

    select distinct f.name, fl.fmin, fl.fmax, fl.strand, f.type_id, f.feature_id
    from feature f, featureloc fl
    where (f.type_id = 219 OR
           f.type_id = 368 OR
           f.type_id = 514 OR
           f.type_id = 475 OR
           f.type_id = 426 OR
           f.type_id = 456 OR
           f.type_id = 461 OR
           f.type_id = 553 OR
           f.type_id = 89) and
          fl.srcfeature_id = 1 and
          f.feature_id = fl.feature_id and
          fl.fmin <= 2491413 and fl.fmax >= 2485521

which could easily be rewritten as a set of select statements, one for each type_id, unioned together. For this particular query, explain analyze indicates that this is the more efficient form, but I could easily see that at other times/for other parameters, a set unioned together would be better. Are there any guidelines for this?

Thanks,
Scott
Re: [SQL] OR vs UNION
On Thu, 2003-07-17 at 15:00, Josh Berkus wrote:
> Scott,
>
> > I have a query that uses a series of ORs and I have heard that sometimes
> > this type of query can be rewritten to use UNION instead and be more
> > efficient.
>
> I'd be interested to know where you heard that; as far as I know, it could
> only apply to conditional left outer joins.

Hmm, I don't know for sure where I heard it; however, I do know from previous experience that unioned queries worked better in a somewhat similar query, though it was a different schema, so it is hard to compare directly.

One way in which I thought it might make a difference is if I build partial indexes on feature_id for each of the type_ids of interest (there are several thousand in the database, but only 15 or 20 that I am interested in querying). That way, when I write the separate queries for each type_id, the query planner would have access to the partial indexes for each type, and therefore may be able to complete the individual queries very quickly.

> > select distinct f.name,fl.fmin,fl.fmax,fl.strand,f.type_id,f.feature_id
> > from feature f, featureloc fl
> > where
> >    (f.type_id = 219 OR
> >     f.type_id = 368 OR
> >     f.type_id = 514 OR
> >     f.type_id = 475 OR
> >     f.type_id = 426 OR
> >     f.type_id = 456 OR
> >     f.type_id = 461 OR
> >     f.type_id = 553 OR
> >     f.type_id = 89) and
> >    fl.srcfeature_id = 1 and
> >    f.feature_id = fl.feature_id and
> >    fl.fmin <= 2491413 and fl.fmax >= 2485521
>
> Certainly a query of the above form would not benefit from being a union.
>
> For readability, you could use an IN() statement rather than a bunch of ORs
> ... this would not help performance, but would make your query easier to
> type/read.
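Josh's IN() suggestion, applied to the query in question (semantically the same as the OR chain, just easier to read):

```sql
select distinct f.name, fl.fmin, fl.fmax, fl.strand, f.type_id, f.feature_id
from feature f, featureloc fl
where f.type_id in (219, 368, 514, 475, 426, 456, 461, 553, 89)
  and fl.srcfeature_id = 1
  and f.feature_id = fl.feature_id
  and fl.fmin <= 2491413
  and fl.fmax >= 2485521;
```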
Re: [SQL] [PERFORM] EXTERNAL storage and substring on long strings
Hello,

Note: there is a SQL question way at the bottom of this narrative :-)

Last week I asked about doing substring operations on very long strings (>10 million characters). I was given a suggestion to use EXTERNAL storage on the column via the ALTER TABLE ... SET STORAGE command. In one test case, the performance of substring actually got worse using EXTERNAL storage.

In an effort to find the best way to do this operation, I decided to look at what is my "worst case" scenario: the DNA sequence for human chromosome 1, which is about 250 million characters long (previous strings were about 20 million characters long). I wrote a perl script to do several substring operations over this very long string, with substring lengths varying between 1000 and 40,000 characters spread out over various locations along the string. While EXTENDED storage won in this case, it was a hollow victory: 38 seconds per operation versus 40 seconds, both of which are way too long for an interactive application.

Time for a new method. A suggestion from my boss was to "shred" the DNA into smallish chunks, with a column giving offsets from the beginning of the string, so that it can be reassembled when needed. Here is the test table:

    string=> \d dna
        Table "public.dna"
     Column  |  Type   | Modifiers
    ---------+---------+-----------
     foffset | integer |
     pdna    | text    |
    Indexes: foffset_idx btree (foffset)

In practice, there would also be a foreign key column to give the identifier of the dna.
Then I wrote the following function (here's the SQL part promised above):

    CREATE OR REPLACE FUNCTION dna_string (integer, integer) RETURNS TEXT AS '
    DECLARE
        smin            ALIAS FOR $1;
        smax            ALIAS FOR $2;
        longdna         TEXT := '''';
        dna_row         dna%ROWTYPE;
        dnastring       TEXT;
        firstchunk      INTEGER;
        lastchunk       INTEGER;
        in_longdnastart INTEGER;
        in_longdnalen   INTEGER;
        chunksize       INTEGER;
    BEGIN
        SELECT INTO chunksize min(foffset) FROM dna WHERE foffset > 0;

        firstchunk := chunksize * (smin / chunksize);
        lastchunk  := chunksize * (smax / chunksize);

        in_longdnastart := smin % chunksize;
        in_longdnalen   := smax - smin + 1;

        FOR dna_row IN SELECT * FROM dna
                        WHERE foffset >= firstchunk AND foffset <= lastchunk
                        ORDER BY foffset
        LOOP
            longdna := longdna || dna_row.pdna;
        END LOOP;

        dnastring := substring(longdna FROM in_longdnastart FOR in_longdnalen);
        RETURN dnastring;
    END;
    ' LANGUAGE 'plpgsql';

So here's the question: I've never written a plpgsql function before, so I don't have much experience with it; is there anything obviously wrong with this function, or are there things that could be done better? At least this appears to work and is much faster, completing substring operations like the above in about 0.27 secs (that's about two orders of magnitude improvement!)

Thanks,
Scott
Re: [SQL] [PERFORM] EXTERNAL storage and substring on long strings
On Mon, 2003-08-04 at 11:55, Richard Huxton wrote:
> On Monday 04 August 2003 16:25, Scott Cain wrote:
> [snip]
>
> You might want some checks to make sure that smin < smax, otherwise looks like
> it does the job in a good clean fashion.

Good point -- smin < smax generally by virtue of the application using the database, but I shouldn't assume that will always be the case.

> Glad to hear it's going to solve your problems. Two things you might want to
> bear in mind:
> 1. There's probably a "sweet spot" where the chunk size interacts well with
> your data, usage patterns and PGs backend to give you peak performance.
> You'll have to test.

Yes, I had a feeling that was probably the case -- since this is an open source project, I will need to write directions for installers on picking a reasonable chunk size.

> 2. If you want to search for a sequence you'll need to deal with the case
> where it starts in one chunk and ends in another.

I forgot about searching -- I suspect that application is why I faced opposition to shredding in my schema development group. Maybe I should push that off to the file system and use grep (or BLAST). Otherwise, I could write a function that would search the chunks first, then after failing to find the substring in those, start sewing the chunks together to look for the query string. That could get ugly (and slow -- but if the user knows that and expects it to be slow, I'm OK with that).

Thanks,
Scott
Re: [SQL] [PERFORM] EXTERNAL storage and substring on long strings
On Mon, 2003-08-04 at 11:53, Tom Lane wrote:
> Scott Cain <[EMAIL PROTECTED]> writes:
> > At least this appears to work and is much faster, completing substring
> > operations like above in about 0.27 secs (that's about two orders of
> > magnitude improvement!)
>
> I find it really, really hard to believe that a crude reimplementation
> in plpgsql of the TOAST concept could beat the built-in implementation
> at all, let alone beat it by two orders of magnitude.
>
> Either there's something unrealistic about your testing of the
> dna_string function, or your original tests are not causing TOAST to be
> invoked in the expected way, or there's a bug we need to fix. I'd
> really like to see some profiling of the poor-performing
> external-storage case, so we can figure out what's going on.

I was really hoping for a "Good job and glad to hear it" from you :-) I don't think there is anything unrealistic about my function or its testing, as it is very much along the lines of the types of things we do now. I will really try to do some profiling this week to help figure out what is going on.

Scott
Re: [SQL] [PERFORM] EXTERNAL storage and substring on long strings
Joe,

Good idea, since I may not get around to profiling it this week. I created a dump of the data set I was working with. It is available at http://www.gmod.org/string_dump.bz2

Thanks,
Scott

On Mon, 2003-08-04 at 16:29, Joe Conway wrote:
> Is there a sample table schema and dataset available (external-storage
> case) that we can play with?
>
> Joe
Re: [SQL] [PERFORM] EXTERNAL storage and substring on long strings
Oh, and I forgot to mention: it's highly compressed (bzip2 -9) and is 109M.

Scott

On Tue, 2003-08-05 at 11:01, Scott Cain wrote:
> Joe,
>
> Good idea, since I may not get around to profiling it this week. I
> created a dump of the data set I was working with. It is available at
> http://www.gmod.org/string_dump.bz2
>
> Thanks,
> Scott
>
> On Mon, 2003-08-04 at 16:29, Joe Conway wrote:
> > Is there a sample table schema and dataset available (external-storage
> > case) that we can play with?
> >
> > Joe
[SQL] SQL function to validate money input
Hi All,

Is there a function available to validate whether the input is a correct money format? For example, "23.56" is a correct number but "23.567" is incorrect. I'd just like to do this at the Postgres level, not in my application code.

Thanks,
Scott
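One way to express the rule, if no built-in check fits, is a regular-expression match; a sketch (the pattern accepts digits with an optional one- or two-digit decimal part):

```sql
-- '23.56' passes, '23.567' fails
SELECT '23.56'  ~ '^[0-9]+([.][0-9]{1,2})?$';
SELECT '23.567' ~ '^[0-9]+([.][0-9]{1,2})?$';
```

The same expression can be wrapped in a CHECK constraint or a small SQL function if the validation should live in the schema.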
[SQL] relation X does not exist
Morning,

I see lots of references to this error, but currently no fix for my situation. I am converting an INFORMIX ESQL/C system to PostgreSQL embedded C. I have hit several road blocks along the way and am running out of time to complete and test this conversion, so I am coming to the pros for some help.

I have a table defined with only two fields. The create table statement did not have " " around the table name. I can cut and paste the query from my debug statement into psql and get the desired result set back. However, in the .pgc file, no matter how I write the query (with " ", without, uppercase) I get "relation X does not exist". The embedded query is:

    EXEC SQL select count(*) into :var1 from measurement_type where meas_type = :var2;

Both var1 and var2 are within the EXEC SQL DECLARE section (host variables) and valid types. I can see the relation measurement_type within psql when I do \dt and, as I stated, putting the exact query into psql gives me my count back.

Any ideas?

Thanks - Laura
[SQL] Aggregate query for multiple records
Hello,

I am new to the list; my apology if this question is beyond the scope or charter of this list.

My question is: what is the best method to perform an aggregate query that calculates sum() values for each distinct wid, as in the example below, but for all wids (not just WHERE wid='01/1-6-1-30w1/0')? Also, performance wise, would it be better to build a function for this query? The table has 9 million records, and these aggregate queries take hours.

    SELECT SUM(oil) AS sumoil, SUM(hours) AS sumhours
    FROM (SELECT oil, hours
            FROM prd_data
           WHERE wid = '01/1-6-1-30w1/0'
           ORDER BY date
           LIMIT 6) subtable;

Table description:

    Table "prd_data"
     Column |         Type          | Modifiers
    --------+-----------------------+-----------
     date   | integer               |
     hours  | real                  |
     oil    | real                  |
     gas    | real                  |
     water  | real                  |
     pwid   | integer               |
     wid    | character varying(20) |
     year   | smallint              |
    Indexes: wid_index6

Actual table (prd_data), 9 million records:

      date  | hours |  oil  | gas  | water | pwid  |       wid        | year
    --------+-------+-------+------+-------+-------+------------------+------
     196507 |   360 | 159.4 | 11.3 |  40.9 |   413 | 01/1-1-1-31w1/0  | 1965
     196508 |   744 |   280 |   20 |  27.2 |   413 | 01/1-1-1-31w1/0  | 1965
     196509 |   360 | 171.1 | 11.4 |  50.4 |   413 | 01/1-1-1-31w1/0  | 1965
     196510 |   744 | 202.1 |   25 |  89.8 |   413 | 01/1-1-1-31w1/0  | 1965
     196512 |   744 | 201.3 | 23.8 |  71.9 |   413 | 01/1-1-1-31w1/0  | 1965
     196511 |   720 |   184 | 17.6 |  78.9 |   413 | 01/1-1-1-31w1/0  | 1965
     196610 |   744 |  99.8 | 15.4 |  53.7 |   413 | 01/1-1-1-31w1/0  | 1966
     196612 |   744 |    86 | 12.8 |  36.1 |   413 | 01/1-1-1-31w1/0  | 1966
     196611 |   720 |    86 | 12.6 |  41.7 |   413 | 01/1-1-1-31w1/0  | 1966
     196601 |   744 | 191.6 | 22.6 |  50.7 |   413 | 01/1-1-1-31w1/0  | 1966
     200301 |   461 |  68.8 |    0 | 186.3 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200310 |   740 | 446.3 |    0 | 563.1 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200306 |   667 |  92.1 |    0 | 968.8 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200304 |     0 |     0 |    0 |     0 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200308 |   457 | 100.7 |    0 |  82.8 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200307 |   574 |    78 |    0 |   752 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200312 |   582 | 360.9 |    0 |   569 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200311 |   681 | 260.8 |    0 | 563.9 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200305 |   452 |     0 |    0 |     0 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200309 |   637 | 244.6 |    0 | 193.8 | 47899 | 9G/6-1-50-24w3/0 | 2003
    (20 rows)

Thanks,
--
Scott A. Gerhardt, P.Geo.
Gerhardt Information Technologies
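On newer PostgreSQL releases (8.4 and up, where window functions are available), the per-wid "first six records" sums can be computed for every wid in one pass; a sketch against the prd_data table above:

```sql
-- sum oil and hours over the first 6 dated records of every wid
SELECT wid, SUM(oil) AS sumoil, SUM(hours) AS sumhours
FROM (SELECT wid, oil, hours,
             row_number() OVER (PARTITION BY wid ORDER BY date) AS rn
        FROM prd_data) ranked
WHERE rn <= 6
GROUP BY wid;
```

A single pass like this avoids re-running the LIMIT 6 subquery once per wid, which is where the hours-long runtimes likely come from.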
Re: [SQL] Aggregate query for multiple records
Re: [SQL] [GENERAL] SQL update function faililed in Webmin Interface
On Wed, 2004-10-20 at 01:03, Kathiravan Velusamy wrote:
> Hello All,
>I am a newbie to PostgreSQL. I am using postgreSQL 7.4.5 in
> HP-Unix 11.11 PA , and 11.23 PA.
> I have a problem with postgreSQL Webmin (Webmin Version
> 1.070) testing in update function.
> This problem exists only when i create a new data base
> through webmin interface,
> and insert some values,and modify those values afterwards.
>
> For E.g :
> I created database called "test" and created table name called
> "one" for that DB,
> which contains filed name "Name" with varchar(10) as a type and
> allows Null values.
>I inserted values for two rows as first and second, it can be
> viewed perfectly.
> But when i select second row (Which contains string "second" as value)
> to edit,
> and change it value as "second1" instead of "second", it throws me
> following error when i saved it :
>
> ~~~
>
> "SQL update "one" set Name = 'Second1' where oid = 25349 failed :
> column "name" of relation "one" does not exist".
> ~~~
>
> But when i created the database without using this Webmin interface
> ($ echo "create table one(Name varchar(10))"|psql test ),
> and then edit with webmin interface means it works well and get
> updated.
> Is there any problem with postgreSQL or with Webmin interface ?
> Any idea to solve this issue ?
>
It looks like the table is being defined with the column quoted, like
this:
create table one ("Name" text, moredefshere...)
but accessed without quotes, like above. Whether a database folds to
upper or lower case, the columns need to be accessed consistently, either
always quoted or never quoted. An application that mixes quoted and
unquoted identifiers is going to have problems.
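A minimal reproduction of the folding behavior described above (hypothetical table):

```sql
CREATE TABLE one ("Name" varchar(10));  -- column created case-sensitively
SELECT name FROM one;    -- fails: unquoted name folds to "name", which does not exist
SELECT "Name" FROM one;  -- works: quoted identifier matches exactly
```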
Re: [SQL] [GENERAL] Log
On Fri, 2004-10-22 at 07:52, Davide Negri wrote:
> Hello,
>
> i have installed the 8.0 beta3-dev1 version of postgres on my pc, and
> I want to disable log. How can I do? Is it possible to disable it and
> not write all the log, or how can I write all the log to a specific
> directory?

If you use pg_ctl to start postgresql, then edit the $PGDATA/postgresql.conf file and change

    #silent_mode = false

to

    silent_mode = true

and restart your database.
[SQL] JOIN not being calculated correctly
Hi all!
I'm having a problem with Postgresql 8.0.0-beta4 on a Win2K machine and a
particular join.
I've got two tables - a list of customers and jobs they've had. A customer
can have multiple jobs.
The query always scans the entire jobs table for each customer - I need it
the other way around so I can get a list of the customers who have at least
one job.
The EXPLAIN shows the jobs table is being scanned for some reason:
Quarry=# EXPLAIN SELECT Customers.CustomerId, Jobs.JobId FROM Customers
INNER JOIN Jobs USING (CustomerId);
QUERY PLAN
-
Hash Join (cost=78.54..4908.71 rows=70727 width=8)
Hash Cond: ("outer".customerid = "inner".customerid)
-> Seq Scan on jobs (cost=0.00..3769.27 rows=70727 width=8)
-> Hash (cost=76.03..76.03 rows=1003 width=4)
-> Seq Scan on customers (cost=0.00..76.03 rows=1003 width=4)
(5 rows)
Even if I reverse the JOIN I get the exact same result:
Quarry=# EXPLAIN SELECT Customers.CustomerId, Jobs.JobId FROM Jobs INNER
JOIN Customers USING (CustomerId);
QUERY PLAN
-
Hash Join (cost=78.54..4908.71 rows=70727 width=8)
Hash Cond: ("outer".customerid = "inner".customerid)
-> Seq Scan on jobs (cost=0.00..3769.27 rows=70727 width=8)
-> Hash (cost=76.03..76.03 rows=1003 width=4)
-> Seq Scan on customers (cost=0.00..76.03 rows=1003 width=4)
(5 rows)
How can I force it to operate as I need it to? It seems the query engine is
a little smarter than it needs to be.
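For what the poster actually wants -- the customers having at least one job -- an EXISTS form (a sketch reusing the column names above) lets the planner drive from the customers side rather than hashing the whole jobs table:

```sql
SELECT c.CustomerId
  FROM Customers c
 WHERE EXISTS (SELECT 1 FROM Jobs j
                WHERE j.CustomerId = c.CustomerId);
```

Whether the planner then probes jobs_customerid per customer depends on the statistics, but the query no longer has to return one row per job.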
If anyone can shed some light on this problem, it would be greatly
appreciated. I've taken it as far as I can and don't really know where to
move from here.
Thanks in advance,
Scott Pederick
Re: [SQL] JOIN not being calculated correctly
bpchar NOT NULL,
invoice numeric(10,2) DEFAULT 0 NOT NULL,
drivercomment character varying(12) DEFAULT ''::character varying NOT NULL,
text text DEFAULT ''::text NOT NULL,
gst numeric(10,2) DEFAULT 0 NOT NULL,
distance numeric(7,2) DEFAULT 0 NOT NULL,
productdefinitionid integer,
customerprice numeric(10,2) DEFAULT 0 NOT NULL
);
Jobs Keys/Indexes
---
jobs_pkey PRIMARY KEY (jobid);
INDEX jobs_customerid ON jobs USING btree (customerid);
INDEX jobs_jobdate ON jobs USING btree (jobdate);
INDEX jobs_quarryid ON jobs USING btree (quarryid);
CONSTRAINT jobs_customerid_fkey FOREIGN KEY (customerid) REFERENCES
customers(customerid) ON UPDATE RESTRICT ON DELETE RESTRICT;
CONSTRAINT jobs_invoiceid_fkey FOREIGN KEY (invoiceid) REFERENCES
invoices(invoiceid) ON UPDATE RESTRICT ON DELETE RESTRICT;
ADD CONSTRAINT jobs_productdefinitionid_fkey FOREIGN KEY
(productdefinitionid) REFERENCES producttypes(producttypeid) ON UPDATE
RESTRICT;
CONSTRAINT jobs_producttypeid_fkey FOREIGN KEY (producttypeid) REFERENCES
producttypes(producttypeid) ON UPDATE RESTRICT ON DELETE RESTRICT;
CONSTRAINT jobs_quarryid_fkey FOREIGN KEY (quarryid) REFERENCES
quarries(quarryid) ON UPDATE RESTRICT ON DELETE RESTRICT;
CONSTRAINT jobs_salesid_fkey FOREIGN KEY (salesid) REFERENCES sales(salesid)
ON UPDATE RESTRICT ON DELETE RESTRICT;
: 2) Have you vacuumed / analyzed the tables involved recently?
Yes, I ran "VACUUM ANALYZE" on both tables before I started.
: 3) Have you modified the stats on any of the tables / columns involve or
: are you using defaults?
I'm using the defaults...
Thanks in advance to anyone who has taken the time to wade through this
one... :)
Scott
:
: Drew
:
:
: Scott Pederick wrote:
: | Hi all!
: |
: | I'm having a problem with Postgresql 8.0.0-beta4 on a Win2K machine and
a
: | particular join.
: |
: | I've got two tables - a list of customers and jobs they've had. A
customer
: | can have multiple jobs.
: |
: | The query always scans the entire jobs table for each customer - I need
it
: | the other way around so I can get a list of the customers who have at
: least
: | one job.
: |
: | The EXPLAIN shows the jobs table is being scanned for some reason:
: |
: | Quarry=# EXPLAIN SELECT Customers.CustomerId, Jobs.JobId FROM Customers
: | INNER JOIN Jobs USING (CustomerId);
: |QUERY PLAN
:
| -
: | Hash Join (cost=78.54..4908.71 rows=70727 width=8)
: |Hash Cond: ("outer".customerid = "inner".customerid)
: |-> Seq Scan on jobs (cost=0.00..3769.27 rows=70727 width=8)
: |-> Hash (cost=76.03..76.03 rows=1003 width=4)
: | -> Seq Scan on customers (cost=0.00..76.03 rows=1003 width=4)
: | (5 rows)
: |
: |
: |
: | Even if I reverse the JOIN I get the exact same result:
: |
: | Quarry=# EXPLAIN SELECT Customers.CustomerId, Jobs.JobId FROM Jobs INNER
: | JOIN Customers USING (CustomerId);
: |QUERY PLAN
:
| -
: | Hash Join (cost=78.54..4908.71 rows=70727 width=8)
: |Hash Cond: ("outer".customerid = "inner".customerid)
: |-> Seq Scan on jobs (cost=0.00..3769.27 rows=70727 width=8)
: |-> Hash (cost=76.03..76.03 rows=1003 width=4)
: | -> Seq Scan on customers (cost=0.00..76.03 rows=1003 width=4)
: | (5 rows)
: |
: |
: | How can I force it to operate as I need it to? It seems the query
: engine is
: | a little smarter than it needs to be.
: |
: | If anyone can shed some light on this problem, it would be greatly
: | appreciated. I've taken it as far as I can and don't really know where
to
: | move from here.
:
:
: - --
: Andrew Hammond416-673-4138[EMAIL PROTECTED]
: Database Administrator, Afilias Canada Corp.
: CB83 2838 4B67 D40F D086 3568 81FC E7E5 27AF 4A9A
Re: [SQL] Column with recycled sequence value
On Thu, 2005-01-13 at 11:08, KÖPFERL Robert wrote:
> Hi,
>
> Suppose I have a, let's say, heavily used table. There's a column containing
> UNIQUE int4 values. The data type mustn't exceed 32 bits. Since the table is
> heavily used, 2^32 will be reached soon, and then? There are far fewer than
> 4G records actually saved, so these values may be reused. How can this be
> accomplished?
>
> I already defined a sequence, but then?

Redefine it as a bigserial?
Re: [SQL] Column with recycled sequence value
On Thu, 2005-01-13 at 15:19, Michael Fuhr wrote:
> On Thu, Jan 13, 2005 at 02:48:47PM -0600, Scott Marlowe wrote:
> > On Thu, 2005-01-13 at 11:08, KÖPFERL Robert wrote:
> > > Suppose I have a, let's say, heavily used table. There's a column
> > > containing UNIQUE int4 values. The data type mustn't exceed 32 bits.
> > > Since the table is heavily used, 2^32 will be reached soon, and then?
> > > There are far fewer than 4G records actually saved, so these values
> > > may be reused. How can this be accomplished?
> > >
> > > I already defined a sequence, but then?
> >
> > Redefine it as a bigserial?
>
> That would violate the 32-bit requirement since bigserial would
> make the field a bigint (64 bits). Or am I missing something?

I wasn't sure if that was a limitation he was facing due to business rules
or if he was referring to the limit in postgresql. Any method that tries to
reuse sequence numbers is a bad idea (TM) and to be avoided, so my point in
asking was to find out whether the user simply didn't know about
bigserial / bigint.
Re: [SQL] Column with recycled sequence value
On Thu, 2005-01-13 at 15:43, Andrew Sullivan wrote:
> On Thu, Jan 13, 2005 at 03:31:54PM -0600, Scott Marlowe wrote:
> > Any method that tries to reuse sequence numbers is a bad idea (TM) and
>
> Why? I can think of a dozen cases where it can be useful. It just
> depends on the application.

The usual: if it's the PK of an FK relationship, it's possible to end up
with an orphaned set at some point pointing back to it; there are race
conditions in trying to find a reusable sequence number; and there's the
poor performance of the locking needed to reuse one safely.

What cases are you thinking of? I've seen some very limited ones, like
using a short range for managing a queue, but that avoids a lot of the
performance issues since it's a small set you're trundling through to find
the next one available. They're not that common, though, and most of the
time someone is considering this it's because a boss who doesn't get it
wants a pretty list with no holes in the sequence, or something equally
silly.
Re: [SQL] number os commands inside transaction block
On Mon, 2005-01-31 at 16:29, Luiz Rafael Culik Guimaraes wrote:
> Hi Michael Fuhr
> >> how can I increase the number of commands in a transaction block?
> >
> > What do you mean? What problem are you trying to solve?
>
> I'm trying to solve the following message:
>   current transaction is aborted, queries ignored until end of transaction
>   block
>
> Someone told me this is defined inside the postgres sources.
> I receive this message when I execute a certain number of queries inside a
> begin/commit block.

This is normal postgresql behaviour, and can't really be changed. However,
with the advent of savepoints in 8.0, it is now possible to detect such
errors and roll back to a savepoint so you can continue. Without savepoints
and ROLLBACK TO the given savepoint, there's not much way to get around it.
It sounds to me like you're getting an error somewhere in your scripts that
you need to look into.
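The savepoint pattern described above looks roughly like this in 8.0 (a sketch; the table name and values are illustrative, assuming a table t with a single integer column):

```sql
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT s1;
INSERT INTO t VALUES ('not a number');  -- fails with an input syntax error
ROLLBACK TO SAVEPOINT s1;               -- clears the aborted state
INSERT INTO t VALUES (2);               -- the transaction can continue
COMMIT;                                 -- rows 1 and 2 are committed
```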
Re: [SQL] MSSQL versus Postgres timing
On Tue, 2005-02-01 at 10:54, Joel Fradkin wrote:
> All is moving along well.
>
> I have all my views and data and am testing things out a bit.
>
> A table with 645,000 records for associates has a view (basically select
> * from tblassociates where clientnum = 'test')

What does

explain analyze select * from tblassociates where clientnum = 'test'

say?
Re: [SQL] getting back autonumber just inserted
On Thu, 2005-02-03 at 16:16, lorid wrote:
> I could have sworn I kept a copy of prior emails that discussed how to
> get back a value that was just inserted into a autonumber (or in
> postgresql case a sequence number)
If you know the name of the sequence the number came from you can use
currval():
insert into table1 (info) values ('abc');
select currval('table1seq');
Assuming table1seq is the name of the sequence here.
In 8.0 there's a function to do this (I'm not sure of the name, but a
quick search of the 8.0 docs should turn it up.)
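A sketch of the currval() pattern described above. The sequence name table1seq is from the message; the second form uses pg_get_serial_sequence(), which exists in 8.0 to look the sequence name up from a (hypothetical) serial column named id:

```sql
-- currval() returns the value most recently generated by nextval()
-- in this session, so it is safe under concurrent inserts.
INSERT INTO table1 (info) VALUES ('abc');
SELECT currval('table1seq');
-- If the column is a serial, the sequence name can be looked up:
SELECT currval(pg_get_serial_sequence('table1', 'id'));
```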
Re: [SQL] More efficient OR
On Wed, 2005-02-16 at 10:02, Keith Worthington wrote:
> Hi All,
>
> In several of my SQL statements I have to use a WHERE clause that contains
> mutiple ORs. i.e.
>
> WHERE column1 = 'A' OR
> column1 = 'B' OR
> column1 = 'C'
>
> Is there a more efficient SQL statement that accomplishes the same limiting
> functionality?
The in() construct is (nowadays) basically the same as ORing multiple
comparisons:
where column1 in ('A','B','C')
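Side by side, the two equivalent forms from the thread (the table name is hypothetical; recent planners expand the IN list into the same OR conditions internally):

```sql
-- Multiple ORs:
SELECT * FROM some_table
WHERE column1 = 'A' OR column1 = 'B' OR column1 = 'C';

-- Equivalent, and easier to read and generate:
SELECT * FROM some_table
WHERE column1 IN ('A', 'B', 'C');
```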
Re: [SQL] Postgres performance
On Tue, 2005-03-01 at 04:52, mauro wrote:
> > Not always, AFAICT. The four most common reasons why PG tests slower
> > than Mysql are:
> > 1. You haven't configured or have misconfigured PostgreSQL.
> > 2. You are testing a MySQL-tuned application (lots of small, simple
> >    queries, no views, no subselects etc)
> > 3. You are only testing one connection (try 10, 20, 50 simultaneous
> >    users and see who wins then).
> > 4. You are not testing the transaction-safe storage systems in MySQL
> >
> > See if you can answer some of the questions above and I'm sure we'll be
> > able to get your database server running smoothly.
> Hi,
> I've used the benchmark
> http://genecensus.org/yeast/cluster/database/mysql/sql-bench/ (ok,
> it's without benchmarks on views, sub-selects, transactions, ..)
> The database files are in a stripe (RAID 0) on two SATA drives (transfer
> rate 50MB), the filesystem is reiserfs (3.6 format, with checks off),
> no tuning of the I/O scheduler, and both DBMSs are in their default
> configuration (so I favour neither). Total time:
> Pgsql: 7h 20'
> MySQL: 14' (!!)

Why is a dragster faster than a freight train? Because it only has to run
for 5 or 6 seconds, and you expect the engine to implode on every fourth
run. The freight train, on the other hand, has to run day after day and
deliver its cargo without damage.

The reason MySQL can be so fast is that it's not really a database in the
classical sense. It does floating point maths on exact numeric types. It
does almost no error checking, and if you lose power during updates all
your data could quite easily be gone. While it's a fine storage system for
certain content management tasks, it's not reliable enough for things like
accounting, where the answers have to be right.

The reason PostgreSQL is slower is that it (and by extension the team
behind it) cares about your data. Here's a list of the things MySQL will
gladly do wrong: http://sql-info.de/mysql/gotchas.html  I wouldn't trust
such a database for any kind of mission critical system that handled
important data, and anyone who does is gambling against the house odds.
Re: [SQL] Postgres performance
On Wed, 2005-03-02 at 15:45, PFC wrote:
> > The reason PostgreSQL is slower is because it (and by extension the team
> > behind it) cares about your data.
>
> Sure, postgres is (a bit but not much) slower for a simple query like
> SELECT * FROM one table WHERE id=some number, and postgres is a lot slower
> for UPDATES (although I heard that it's faster than MySQL InnoDB)... but
> try a query with a join on few tables, even a simple one, and postgres
> will outperform mysql, sometimes by 2x, sometimes 1000 times. I had a case
> with a join between 4 tables, two of them having 50k records ; I was only
> pulling 6 records... mysql spent half a second and postgres 0.5 ms... hell
> !
Or better yet, a query like this:
select a.lt ,
b.perspective as YYY_pers,
b.averageresponsetime as YYY_aver,
b.lowestresponsetime as YYY_lowe,
b.highestresponsetime as YYY_high,
b.totalcount as YYY_tota,
c.perspective as XXX_pers,
c.averageresponsetime as XXX_aver,
c.lowestresponsetime as XXX_lowe,
c.highestresponsetime as XXX_high,
c.totalcount as XXX_tota,
d.perspective as BBB_pers,
d.averageresponsetime as BBB_aver,
d.lowestresponsetime as BBB_lowe,
d.highestresponsetime as BBB_high,
d.totalcount as BBB_tota,
e.perspective as AAA_pers,
e.averageresponsetime as AAA_aver,
e.lowestresponsetime as AAA_lowe,
e.highestresponsetime as AAA_high,
e.totalcount as AAA_tota,
f.perspective as CCC_pers,
f.averageresponsetime as CCC_aver,
f.lowestresponsetime as CCC_lowe,
f.highestresponsetime as CCC_high,
f.totalcount as CCC_tota,
g.perspective as ZZZ_pers,
g.averageresponsetime as ZZZ_aver,
g.lowestresponsetime as ZZZ_lowe,
g.highestresponsetime as ZZZ_high,
g.totalcount as ZZZ_tota
from (
select distinct date_trunc('minutes', lastflushtime) as lt from
businessrequestsummary
where lastflushtime between '2005-01-01 00:00:00' and '2005-03-31 00:00:00'
) as a
left join
( select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='YYY'
)as b on (a.lt=b.lt)
left join
( select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='XXX'
)as c on (a.lt=c.lt)
left join
( select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='BBB'
)as d on (a.lt=d.lt)
left join
( select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='AAA'
)as e on (a.lt=e.lt)
left join
( select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='CCC'
)as f on (a.lt=f.lt)
left join
( select date_trunc('minutes', lastflushtime) as lt,
perspective,
averageresponsetime,
lowestresponsetime,
highestresponsetime,
totalcount
from businessrequestsummary where perspective ='ZZZ'
)as g on (a.lt=g.lt)
Basically, the more complex the query gets, the worse MySQL generally does,
since its query planner is a pretty simple rules-based one.
Re: [SQL] best way to auto-update a field when row is updated
On Thu, 2005-03-10 at 14:09, Henry Ortega wrote:
> I have the following table
>
> FIELD_A | FIELD_B | TSTAMP
> x         y         2005-03-10 14:56:47.456431
>
> TSTAMP = not null default now()
>
> What's the best way to always auto-update TSTAMP to its
> default value whenever the row gets updated?
> (e.g. update TABLENAME set FIELD_A='zzz' where FIELD_A='x';
> should automatically set TSTAMP to now)

Here's a simple trigger to do that for ya.

-- FUNCTION --
CREATE FUNCTION modtime () RETURNS opaque AS '
BEGIN
  new.lm := ''now'';
  RETURN new;
END;
' LANGUAGE 'plpgsql';

-- TABLE --
CREATE TABLE dtest (
  id int primary key,
  fluff text,
  lm timestamp without time zone
);

-- TRIGGER --
CREATE TRIGGER dtest BEFORE UPDATE OR INSERT ON dtest
  FOR EACH ROW EXECUTE PROCEDURE modtime(lm);

-- SQL TESTS --
INSERT INTO dtest (id, fluff) VALUES (1,'this is a test');
INSERT INTO dtest (id, fluff) VALUES (2,'this is another test');
SELECT * FROM dtest;
 1 | this is a test       | 2003-04-02 10:33:12.577089
 2 | this is another test | 2003-04-02 10:33:18.591148

UPDATE dtest SET id=3 WHERE id=1;
 3 | this is a test | 2003-04-02 10:34:52.219963          [1]

UPDATE dtest SET fluff='now is the time' WHERE id=2;
SELECT * FROM dtest WHERE id=2;
 2 | now is the time | 2003-04-02 10:38:06.259443         [2]

UPDATE dtest SET lm='2003-04-02 08:30:00' WHERE id=3;
SELECT * FROM dtest WHERE id=3;
 3 | this is a test | 2003-04-02 10:36:15.45687           [3]

[1] The timestamp changed for this record when we changed the id field.
[2] The timestamp also changes when we update the fluff field.
[3] We tried to set lm directly, but the trigger intercepted the change and
forced it back to now().
Re: [SQL] "Flattening" query result into columns
On Mon, 2005-03-21 at 15:57, Thomas Borg Salling wrote:
> I am looking for a way to 'flatten' a query result, so that rows are
> 'transposed' into columns, just as asked here for oracle:
>
> http://groups.google.dk/groups?hl=da&lr=&client=firefox-a&rls=org.mozilla:en-US:official&selm=aad10be0.0401292322.7b6c320b%40posting.google.com
>
>
>
> Is there any way to do this with pgsql ?
Here's one from work that allows you to do the same basic thing without
a separate cross table:
select
a.lt ,
b.perspective as XYZ_pers,
b.averageresponsetime as XYZ_aver,
b.lowestresponsetime as XYZ_lowe,
b.highestresponsetime as XYZ_high,
b.totalcount as XYZ_tota,
c.perspective as ABC_pers,
c.averageresponsetime as ABC_aver,
c.lowestresponsetime as ABC_lowe,
c.highestresponsetime as ABC_high,
c.totalcount as ABC_tota
from (
select distinct date_trunc('minutes', lastflushtime) as lt from
businessrequestsummary
where lastflushtime between '2005-03-14 18:42:34' and '2005-03-21
18:42:34'
and perspective in ('XYZ','ABC')
) as a
left join (
select date_trunc('minutes', lastflushtime) as lt,
max(perspective) as perspective,
floor(avg(averageresponsetime)) as averageresponsetime,
min(lowestresponsetime) as lowestresponsetime,
max(highestresponsetime) as highestresponsetime,
sum(totalcount) as totalcount
from businessrequestsummary
where perspective ='XYZ'
group by date_trunc('minutes', lastflushtime)
) as b
on
(a.lt=b.lt)
left join (
select date_trunc('minutes', lastflushtime) as lt,
max(perspective) as perspective,
floor(avg(averageresponsetime)) as averageresponsetime,
min(lowestresponsetime) as lowestresponsetime,
max(highestresponsetime) as highestresponsetime,
sum(totalcount) as totalcount
from businessrequestsummary
where perspective ='ABC'
group by date_trunc('minutes', lastflushtime)
) as c
on
(a.lt=c.lt)
It's generated by a script that makes it as big as we need for all the
different perspectives.
Re: [SQL] Postgres 7.3 migrate to 8.0 date problems.
On Mon, 2005-03-28 at 13:44, Thomas Seeber wrote:
> Hi,
>
> We were upgrading from postgres 7.3 -> 8.0 and having a little
> problem importing dates from some of our data sources. Say we have a
> date like '2004-17-05'. Postgres 7.3 would interpret this as
> Year-Day-Month automatically. From postgres 7.4 on, according to the
> documentation, this has to be specified in the DateStyle option, and YDM
> is not an option. Other data we have is coming in in YMD format, which
> would be more expected. I realize that this change is better for data
> integrity, but we have a lot of legacy systems where being able to
> mimic the 7.3 behaviour would be desirable. Any ideas?

Fix the data? I had to write a lot of scripts to keep dates like that OUT
of my last PostgreSQL installation, which was running 7.2.

Which is why, as the guy who whinged and moaned until this behavioural
change was made, I feel for you. But honestly, the lackadaisical handling
of that particular format (YYYY-DD-MM) never really seemed right, even to
the people who fought me on the idea of changing the default behaviour of
DD/MM/YYYY versus MM/DD/YYYY. While the US uses MM/DD/YYYY and Europe uses
DD/MM/YYYY, and there may be some arguments for handling a sloppy version
of one of those, computer folk (and the government) who want easily ordered
dates use YYYY-MM-DD. I've never seen a good argument for YYYY-DD-MM.

Are you sure the other dates in your data set are what you think they are?
If the two numbers are both <= 12 you'll get one "date", and if the wrong
one is > 12 you'll get another. That can't be good.
Re: [SQL] cost of CREATE VIEW ... AS SELECT DISTINCT
On Mon, 2005-03-28 at 15:43, T E Schmitz wrote:
> Hello,
> How expensive would it be to maintain the following VIEW:
>
> CREATE VIEW origin AS SELECT DISTINCT origin FROM transaktion
>
> if there is an index on transaktion.origin? The table transaktion has
> thousands of records and there are only a few distinct origins.

The cost will only be incurred when running the view. If you want
materialized views (which WOULD have maintenance costs) you'll have to do a
google search for postgresql materialized views and implement them by hand,
so to speak. The cost of executing that view should be the same as the cost
of running the query by hand.
Re: [SQL] getting count for a specific querry
On Fri, 2005-04-08 at 12:08, Joel Fradkin wrote:
> Thanks all.
> I might have to add a button to do the count on command so they don't get
> the hit.
> I would want it to return the count of the condition, not the currently
> displayed number of rows.
>
> Are there any other database engines that provide better performance?
> (We just spent 2 months moving to postgres and it is not live yet, but if
> I am going to get results back slower than my 2 proc box running MSSQL
> with 2 gig and 2 processors, I can't see any reason to move to it.)
> The Postgres is on a 4 proc Dell with 8 gigs of memory.
> I thought I could analyze our queries and our config to optimize.

Judging postgresql on one single data point (count(*) performance) is quite
unfair. Unless your system only operates on static data and is used mostly
to do things like counting, in which case, why are you using a database?

PostgreSQL is a great fit for certain loads, and a poor fit for others. Are
you going to have lots of people updating the database WHILE the select
count(*) queries are running? Are you going to be doing other, more
interesting things than simply counting? If so, you really should build a
test case that emulates what you're actually going to be doing with the
system.

I've found that the poor performance of aggregates in PostgreSQL is
generally more than made up for by the outstanding behaviour it exhibits
under heavy parallel load. Note that the basic design of PostgreSQL's MVCC
system is such that, without some kind of trigger to maintain
pre-calculated aggregate information, it will NEVER be as fast as most
other databases at doing aggregates across large chunks of your data.
Re: [SQL] getting count for a specific querry
On Fri, 2005-04-08 at 15:23, Vivek Khera wrote: > On Apr 8, 2005, at 3:23 PM, Joel Fradkin wrote: > > > I set up the data on 4 10k scsi drives in a powervault and my wal on 2 > > 15k > > drives. I am using links to those from the install directory. It > > starts and > > stops ok this way, but maybe it should be different. > > > > Your problem might just be the choice of using a Dell RAID controller. > I have a 1 year old box connected to a 14 disk powervault (PowerEdge > 2650) and it is dog slow compared to a dual opteron with 8 disks that > is replacing it. It is all I/O for me, and the dell's just are not > known for speedy I/O. Note that there are several different RAID controllers you can get with a DELL. I had good luck with the PERC 4C (AMI MegaRAID based) at my last job. In a dual 2400Mz machine with 2 gigs ram, it handily outran a 4 way (about 1200 MHz CPUs) windows / MSSQL box with 4 gigs of ram at most tasks. Especially inserts / updates. The windows machine had the more generic PERC 3I type controller in it. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] getting count for a specific querry
On Fri, 2005-04-08 at 15:35, Bob Henkel wrote: > On Apr 8, 2005 3:23 PM, Vivek Khera <[EMAIL PROTECTED]> wrote: > On Apr 8, 2005, at 3:23 PM, Joel Fradkin wrote: > > > I set up the data on 4 10k scsi drives in a powervault and > my wal on 2 > > 15k > > drives. I am using links to those from the install > directory. It > > starts and > > stops ok this way, but maybe it should be different. > > > > Your problem might just be the choice of using a Dell RAID > controller. > I have a 1 year old box connected to a 14 disk powervault > (PowerEdge > 2650) and it is dog slow compared to a dual opteron with 8 > disks that > is replacing it. It is all I/O for me, and the dell's just > are not > known for speedy I/O. > > Vivek Khera, Ph.D. > +1-301-869-4449 x806 > > > > But that is relative I would think. Is the Dell RAID much faster than > my desktop SATA drive with no RAID? I'm by any means as knowledgeable > about I/O setup as many of you are but my 2 cents wonders if the Dell > RAID is really that much slower than a competitively priced/speced > alternative? Would Joel's problems just fade away if he wasn't using > a Dell RAID? My experience with the 3i controllers (See my earlier post) was that my old Pentium Pro200x2 machine with 512 meg ram and a generic Ultra Wide SCSI card and a half dozen drives running software RAID 5 was faster. Seriously. So was my P-II-350 desktop with the same controller, and an older Dual P-III-750 with only UltraSCSI running in a RAID-1 mirror set. The 3I is REALLY slow (or at least WAS slow) under linux. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] getting count for a specific querry
On Fri, 2005-04-08 at 15:36, Vivek Khera wrote:
> On Apr 8, 2005, at 4:31 PM, Scott Marlowe wrote:
>
> > Note that there are several different RAID controllers you can get with
> > a DELL. I had good luck with the PERC 4C (AMI MegaRAID based) at my
>
> I've had bad luck regarding speed with *all* of them, AMI MegaRAID and
> Adaptec based ones, under high load. Under moderate to low load
> they're acceptable.

Were you using battery backed cache with write-back enabled on the 4C? The
3C is also megaraid based, but it's a pretty old design and not very fast.
Re: [SQL] getting count for a specific querry
On Fri, 2005-04-08 at 15:41, Vivek Khera wrote: > On Apr 8, 2005, at 4:35 PM, Bob Henkel wrote: > > > desktop SATA drive with no RAID? I'm by any means as knowledgeable > > about I/O > > setup as many of you are but my 2 cents wonders if the Dell RAID is > > really > > that much slower than a competitively priced/speced alternative? Would > > Joel's problems just fade away if he wasn't using a Dell RAID? > > > > "Dell RAID" is not one thing. They sell "altered" RAID cards from > Adaptec and LSI. Whatever alteration they do to them tends to make > them run not so fast. > > I have a Dell SATA RAID (adaptec based) on the office server and it is > OK, though not something I'd buy again. > > I have various PERC 3 and PERC 4 RAID controllers on my servers (SCSI > based) and they suck under heavy I/O load. > > I wonder why the name-brand LSI cards work so much faster... perhaps it > is the motherboard? I don't know, and I don't care... :-) For me, > high performance DB and Dell servers are mutually exclusive. It would be nice to be able to put a stock ami megaraid in one and see. Do you run your 2650s with hyperthreading on? I found that slowed mine down under load, but we never had more than a couple dozen users hitting the db at once, so we may well have had a different load profile than what you're seeing. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] Getting the output of a function used in a where clause
Why not just do:

SELECT zipcode, zipdist($lat1d,$lon1d,lat,long) AS distance
FROM zipcodes
WHERE zipdist($lat1d,$lon1d,lat,long) <= $dist;

On Mon, 2005-04-11 at 20:25, Bill Lawrence wrote:
> Boy, I sure thought that would work... I received the following from
> postgres:
>
> ERROR: Attribute "distance" not found.
>
> Started looking into gist. Looks complex.
>
> Any other ideas?
>
> -----Original Message-----
> From: PFC [mailto:[EMAIL PROTECTED]]
> Sent: Monday, April 11, 2005 1:51 AM
> To: Bill Lawrence; [email protected]
> Subject: Re: [SQL] Getting the output of a function used in a where clause
>
> try:
>
> SELECT zipcode, zipdist($lat1d,$lon1d,lat,long) as distance from zipcodes
> where distance <= $dist;
>
> OR you could use a gist index with a geometric datatype to get it a lot
> faster.
>
> On Sat, 09 Apr 2005 03:43:39 +0200, Bill Lawrence <[EMAIL PROTECTED]>
> wrote:
>
> > HI,
> >
> > I'm a newbie so please bear with me. I have a function defined (got it
> > from one of your threads... thanks Joe Conway) which calculates the
> > distance between 2 zip code centroids (in lat,long). This thing works
> > great. However, I want to sort my results by distance without incurring
> > the additional burden of executing the function twice. A simplified
> > version of my current SQL (written in a perl cgi) that returns a set of
> > zip codes within a given radius is:
> >
> > What I want to write is something like:
> >
> > $sql = "SELECT zipcode, distance from zipcodes where distance <= $dist
> > order by distance;";
> >
> > But I don't know the magic SQL phrase to populate the distance variable
> > using my nifty function. Do I need to create an output type for
> > distance?
> >
> > Thanks in advance!
> > Bill
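The underlying rule here is that the WHERE clause is evaluated before the output-list aliases exist, so "distance" can't be referenced there. Besides repeating the expression as suggested above, a derived table also works, and names the value once. A sketch (zipdist() and the column names are from the thread; the $-variables are the poster's Perl interpolations):

```sql
SELECT zipcode, distance
FROM (SELECT zipcode,
             zipdist($lat1d, $lon1d, lat, long) AS distance
      FROM zipcodes) AS z
WHERE distance <= $dist
ORDER BY distance;
```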
Re: [SQL] getting count for a specific querry
On Tue, 2005-04-12 at 14:29, Vivek Khera wrote: > On Apr 8, 2005, at 4:50 PM, Scott Marlowe wrote: > > > Do you run your 2650s with hyperthreading on? I found that slowed mine > > down under load, but we never had more than a couple dozen users > > hitting > > the db at once, so we may well have had a different load profile than > > what you're seeing. > > > > Yep. Turned off as per various recommendations on this list. The RAID > card on this box is a PERC 3/DC. It is a very big disappointment. The > Opteron based generic system totally outperforms this Dell box. How much memory is in the box? I've heard horror stories about performance with >2 gigs of ram, which is why I made them order mine with 2 gigs. Does the 3/DC have battery backed cache set to write back? ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] getting count for a specific querry
On Tue, 2005-04-12 at 15:32, Vivek Khera wrote: > On Apr 12, 2005, at 4:23 PM, Scott Marlowe wrote: > > > How much memory is in the box? I've heard horror stories about > > performance with >2 gigs of ram, which is why I made them order mine > > with 2 gigs. Does the 3/DC have battery backed cache set to write > > back? > > > > 4GB RAM and battery backed cache set to write-back mode. FreeBSD 4.11. If you've got the time, try running it with only 2 gigs and compare the speed. I never really got a chance to run mine with >2 gigs, but I know that I read plenty of posts at the time that the chipset in the 2650 was REALLY slow at using memory over 2 gig. ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] How to install Postgres that supports 64-bit
On Fri, 2005-04-22 at 00:30, Dinesh Pandey wrote:
> How to install Postgres 8.0.1 that supports 64-bit integer/date-time.
>
> # ./configure --prefix=/usr/local/pgsql --with-tclconfig=/usr/local/lib --with-tcl
>
> checking build system type... sparc-sun-solaris2.8
> checking host system type... sparc-sun-solaris2.8
> checking which template to use... solaris
> checking whether to build with 64-bit integer date/time support... no
> checking whether NLS is wanted... no
> checking for default port number... 5432
> checking for gcc... gcc
>
> While installing I am getting this message: "checking whether to build
> with 64-bit integer date/time support... no"

Add --enable-integer-datetimes to your configure options; that is the switch that enables 64-bit integer date/time support.
Re: [SQL] pg_dump without data
On Fri, 2005-05-06 at 04:14, Kenneth Gonsalves wrote: > how do i get a dump of a postgresql database without the data? pg_dump -s for the schema pg_dumpall -g for the globals, like usernames and all. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] Replacing a table with constraints
Are the constraints deferrable? If they are, then you can replace the data in a single transaction. If not, then you'll have to look at disabling triggers for the update.

On Fri, 2005-05-13 at 14:55, Mark Fenbers wrote:
> True, but Counties has about 8 or 9 rules, views, or pk constraints
> attached to it. I don't want to break all these unless I knew of a
> way to save off the SQL for them beforehand so I can easily rebuild
> them...
> Mark
>
> Ing. Jhon Carrillo wrote:
> > Use Drop table YOUR_TABLE cascade
> >
> > Jhon Carrillo
> > Ingeniero en Computación
> > Caracas - Venezuela
> >
> > - Original Message -
> > From: Mark Fenbers
> > To: [email protected]
> > Sent: Friday, May 13, 2005 2:38 PM
> > Subject: [SQL] Replacing a table with constraints
> >
> > I have a table called Counties which partially contains a
> > lot of bad data. By "bad data", I mean some records are
> > missing; some exist and shouldn't; and some records have
> > fields with erroneous information. However, the majority of
> > the data in the table is accurate. I have built/loaded a
> > new table called newCounties with the same structure as
> > Counties, but containing no bad data. My plan was to completely
> > replace the contents of Counties with the contents of
> > newCounties. The problem is: several other tables have
> > Foreign Key constraints placed on Counties. Therefore, Pg
> > will not let me "DELETE FROM Counties;", nor will it let me
> > "DROP TABLE Counties;"
> >
> > I'm perplexed. Can someone suggest how I can best get data
> > from Counties to look just like newCounties?
> >
> > Mark
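If the foreign keys are (or can be recreated as) deferrable, the swap is a single transaction. A sketch, using the table names from the thread:

```sql
-- With DEFERRABLE foreign keys, the checks are postponed until COMMIT,
-- so the referencing tables never see Counties in an inconsistent state.
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
DELETE FROM Counties;
INSERT INTO Counties SELECT * FROM newCounties;
COMMIT;
```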
Re: [SQL] Does Postgresql have a similar pseudo-column "ROWNUM" as
On Thu, 2005-05-12 at 14:07, [EMAIL PROTECTED] wrote:
> Hi:
>
> Oracle has a pseudo-column "ROWNUM" to return the sequence number in which a
> row was returned when selected from a table. The first row ROWNUM is 1, the
> second is 2, and so on.
>
> Does Postgresql have a similar pseudo-column "ROWNUM" as Oracle? If so, we
> can write the following query:
>
> select *
> from (select RowNum, pg_catalog.pg_proc.*
> from pg_catalog.pg_proc) inline_view
> where RowNum between 100 and 200;
You can get a functional equivalent with a temporary sequence:
create temp sequence rownum;
select *, nextval('rownum') as rownum from sometable;
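Applied to the pg_proc example from the question, that looks something like the following sketch. Note the numbering follows whatever order the rows happen to come back in, since there's no ORDER BY:

```sql
-- Number the rows in a subquery, then filter on the generated value
-- in the outer query.
CREATE TEMP SEQUENCE rownum;
SELECT *
FROM (SELECT nextval('rownum') AS rownum, p.*
      FROM pg_catalog.pg_proc p) AS inline_view
WHERE rownum BETWEEN 100 AND 200;
```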
Re: [SQL] ERROR: unterminated quoted string... help
On Tue, 2005-05-17 at 13:15, Postgres Admin wrote:
> Hi
>
> I'm trying to insert encrypted data into the database and I'm noticing
> error dealing with quotes. Below is the error print out...
>
> suggestions and/or at least point me in the direction to find a solution,
>
> Thanks,
> J
>
>
>
> INSERT INTO sample.users (user_name, first_name) VALUES
> ('jokers', '=ÃÂiF!Â6(ÃÅÃÂÃËÃâ'-IwâiDÃiJÃÃâ %')
>
> Warning: pg_query() [function.pg-query]: Query failed: ERROR:
> unterminated quoted string at or near "'=ÃÂi" at character 68 in
> /usr/local/apache2/htdocs/php/5/Theirry_DB.php on line 162
Use a bytea field and use pg_escape_bytea() to prepare the data for
insertion.
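A minimal sketch of that approach, with a hypothetical table; the octal escapes stand in for whatever pg_escape_bytea() produces from the raw encrypted bytes:

```sql
-- Store the encrypted value in a bytea column rather than text.
CREATE TABLE users_demo (user_name text, first_name bytea);
-- '\\110\\151' is the escaped form of the bytes 0x48 0x69 ("Hi");
-- in practice the string comes straight from pg_escape_bytea().
INSERT INTO users_demo (user_name, first_name)
VALUES ('jokers', '\\110\\151'::bytea);
```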
Re: [SQL] Does Postgresql have a similar pseudo-column "ROWNUM" as
On Fri, 2005-05-20 at 13:27, Keith Worthington wrote: > Scott, > > I realize that this thread went off in another direction however your > suggestion proved very helpful for a problem that I was trying to solve. I > wanted the row number of a set returned by a function. Here is a chopped > version of the function that I wrote. > > CREATE OR REPLACE FUNCTION func_bom(integer, integer) > RETURNS SETOF func_bom AS SNIP > Although I have no need to limit the output I tried it just for giggles and it > worked fine. > > SELECT * FROM func_bom(12345, 1) WHERE subline between 4 AND 6; You're welcome. I've saved that off to my ~/pgsql directory for future use. So, the thanks are back to you. :) ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Tip ?
On Tue, 2005-05-24 at 13:26, Alain wrote:
> This tip was at the end of a message (from Szűcs Gábor).
>
> > TIP 9: the planner will ignore your desire to choose an index scan if your
> > joining column's datatypes do not match
>
> it looks very important, but I cannot understand it. Sound as a small
> and easy mistake that can make things go sour...
>
> Can someone explain it please?

Note that this is mostly fixed in v8.0. What it means is that if you have a table with a field of type numeric, and you join another table against it on a field of type integer, the query planner won't know it can use the indexes on those two fields (assuming you DO have indexes on them) and will use a sequential scan all the time. Casting one field to the other's type, or changing the type of one to the other's type, will allow the planner to use the index.
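A sketch of the cast workaround, with hypothetical tables a(id integer) and b(id numeric):

```sql
-- Casting the numeric side to integer lets the planner match the
-- integer index on a.id when joining.
SELECT a.*
FROM a
JOIN b ON a.id = b.id::integer;
```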
Re: [SQL] SELECT very slow
On Wed, 2005-06-15 at 17:08, Thomas Kellerer wrote:
> PFC wrote on 15.06.2005 22:04:
>
> >
> >> It's not the program or Java. The same program takes about 20 seconds
> >> with Firebird and the exactly same data.
> >
> >
> > Hm, that's still very slow (it should do it in a couple seconds like
> > my PC does... maybe the problem is common to postgres and firebird ?)
> >
> > Try eliminating disk IO by writing a set returning function which
> > returns 100 rows, something simple like just a sequence number and
> > a text value... if this is slow too... i don't know...
>
> > do you have an antivirus or zonealarm or something ?
> Wouldn't that affect all DB access not only PG? And as I said, all other
>
> The 20 seconds are ok. This includes processing of the data in the
> application. If I simply loop over the result set and get each column's
> value without further processing it takes 4 seconds with Firebird.
>
> Basically I'm doing the following:
>
> rs = stmt.executeQuery("select * from foo");
> while (rs.next())
> {
>for (int i=0; i < 4; i++)
>{
> Object o = rs.getObject(i+1);
>}
> }
>
> As I said in my other post, the behaviour/performance in PG is dependent on
> the autocommit setting for the connection.
>
> With autocommit set to false the above code takes about 3 seconds in PG
> but wit autocommit set to true, PG takes 3 minutes! It seems that it also
> is very dependent on the fetchsize (apparently the number of rows that are
> cached by the driver). Anything above 100 seems to slow down the overall
> process.
There's got to be more happening than what this is showing us. A
select, and looping through it, should involve no writes, and therefore
no real performance difference from autocommit versus not. Is there
some underlying trigger on the view or something like that? Some kind
of auditing function?
Re: [SQL] Unique primary index?
On Tue, 2005-06-28 at 12:01, PFC wrote: > > > > What are the major differences between Unique, primary index & just > > plain index? > > > > When creating tables I nominate one of these, but not sure what the > > difference is? > index is... an index ! > > UNIQUE is an index which won't allow duplicate values (except for NULLS) > > PRIMARY KEY is exactly like UNIQUE NOT NULL, with the bonus that the > database knows this column is the primary key so you can use stuff like > NATURAL JOIN without telling which column you want to use. No. natural join joins on columns with the same name. However, primary keys ARE automatically referred to by fk'd columns. Personally, I think that a natural join should use the primary/fk relationship as well, but I guess that's not what the spec says. ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [SQL] ENUM like data type
On Tue, 2005-06-28 at 13:22, Martín Marqués wrote: > El Mar 28 Jun 2005 13:58, PFC escribió: > > > > >> Here is where I get uncertain as to if this is possible. My idea is to > > >> create a pseudo type that triggers the creation of it's lookup tables > > >> the same way the SERIAL type triggers creation of a sequence and returns > > >> an int with the right default value. > > > > Personnally I use one table which has columns (domain, name) and which > > stores all enum values for all different enums. > > I have then CHECK( is_in_domain( column, 'domain_name' )) which is a > > simple function which checks existence of the value in this domain (SELECT > > 1 FROM domains WHERE domain=$2 AND name=$1 LIMIT 1) for instance. > > You can also use integers. > > I personally think that the ENUM data type is for databases that are not well > designed. So, if you see the need for ENUM, that means you need to re-think > your data design. I would generally agree, however, a pseudo type like serial that created the properly fk'd table with all the options in it would be quite nice to have. ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] ORDER records based on parameters in IN clause
On Wed, 2005-06-29 at 09:22, Russell Simpkins wrote:
> fair enough. but a simple order by id would never work.

Try this:

select *,
       case when id=2003 then 1
            when id=1342 then 2
            when id=799  then 3
            when id=1450 then 4
       end as ob
from tablename
where id in (2003,1342,799,1450)
order by ob;
Re: [SQL] getting back autonumber just inserted
On Thu, 2005-07-07 at 15:14, Theodore Petrosky wrote:
> you have to use currval inside a transaction...
>
> begin;
> insert something that increments the counter;
> select currval('sequence_name');
> end;
>
> using currval inside a transaction guarantees that the
> value is correct for your insert statement and has not
> changed by another insert statement.
Actually, whether you use it inside a transaction or not, as long as
it's in the same session / connection, it is guaranteed to "do the right
thing."
Test it, you'll see.
Re: [SQL] Generating a range of integers in a query
On Wed, 2005-07-13 at 04:13, Aaron Bingham wrote: > Hello, > > I've got an interesting problem: I need to select all possible values > of an attribute that do /not/ occur in the database. > > This would be easy (in my case at least) if there were a way to > generate a table containing all integers between 1 and n, where n is > the result of a subquery. In my case, n will be at most a few > hundred. I would like to be able to generate this table as a > subquery. Any ideas? Take a look here: http://www.postgresql.org/docs/8.0/interactive/functions-srf.html specifically, the generate_series() function. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
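A sketch of the original problem using generate_series(): all integers from 1 to n (n taken from a subquery) that do not occur in a hypothetical table t(val):

```sql
-- Generate the full range 1..n, then anti-join against the table to
-- keep only the values that are missing from it.
SELECT s.i
FROM generate_series(1, (SELECT max(val) FROM t)) AS s(i)
LEFT JOIN t ON t.val = s.i
WHERE t.val IS NULL;
```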
Re: [SQL] how to do a select * and decrypt a column at the same
On Tue, 2005-08-16 at 14:53, The One wrote: > Hello, > > I have a table with one encrypted column. > How can I do a select statement such that it will select all columns > from the table and at the same time will decrypt it too? A view should be able to do that... ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
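For example, assuming the column was encrypted with pgcrypto's pgp_sym_encrypt() (table, column, and key names here are hypothetical):

```sql
-- The view decrypts on the fly; clients just select from it and get
-- every column plus the decrypted value.
CREATE VIEW users_plain AS
SELECT user_name,
       pgp_sym_decrypt(first_name, 'secret-key') AS first_name
FROM users;
```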
Re: [SQL] booleans and nulls
On Sat, 2005-08-20 at 21:25, Matt L. wrote:
> Out of curiousity,
>
> 1. Does a boolean column occupy 1byte of disk whether
> or not the value is null or not?

No. Nulls are recorded in a per-row bitmap, one bit each, packed eight to a byte. I.e. if you have 8 null fields, they are stored in the same byte.

> 2. Is matching on IS NULL or = 0 more efficient?

Generally, = 0 is EASIER to implement. This is because IS NULL is not directly indexable. At least it wasn't in the past; 8.0 or 8.1 may have made it so. The reason for this is that indexing requires a list of operators, and IS NULL isn't really an operator, but syntax, so it doesn't map to an operator like = < > >= etc... However, you can index on partials, so it was possible to make an index like this:

create index a_dx on a (boolfield) where boolfield IS NULL

and then IS NULL was indexed, but at the cost of an additional index to maintain. OTOH, if most of the fields are not null, and the occasional null is what you're looking for, then it's a good idea. If you've got a 50/50 distribution of nulls and not nulls, indexing on nulls makes no sense, since you'll never actually use the index: it will always be cheaper to pull by seq scan, except in some clustered index situations.

> 3. If I ix'd columns w/ null does postgres know
> whatevers not indexed is null or would their be no
> point?

Actually, it indexes the nulls, it just has a hard time using the index due to the above operator mapping issue. To use the index with NULL / NOT NULL requires the above mentioned partial index.

> I currently utilize null fields as 'not a value' has
> meaning in a program i've been working on as I don't
> want to put false in every column when i only need a
> couple with a true/false value.

That's a good idea, as it saves space as well. That's pretty much what NULL was meant for.

> I'm not joining tables on NULLS, just filtering w/
> them.

Then look at the index / where IS (NOT) NULL solution, and let us know how that works.
Re: [SQL] insert only if conditions are met?
On Wed, 2005-08-31 at 14:54, Jim C. Nasby wrote:
> SELECT sum(hours) FROM table WHERE emp_name = 'JSMITH' AND work_date =
> '8-15-2005'::date will give you the hours. So...
>
> INSERT INTO table
> SELECT blah
> WHERE (SELECT sum(hours) FROM table WHERE emp_name = 'JSMITH' AND
> work_date = '8-15-2005'::date) != 8
>
> Should do what you want.

I keep thinking a constraint is a better idea here, or is this too complex to make into one easily? Either way, I'd make that last part < 8 instead of != 8, in case this uses floating point instead of int and gets 8.01 or whatnot in it, or if someone manages to get it up to 9 or something.
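A sketch of the trigger alternative mentioned in the thread (the table and column names are guesses based on the discussion, and plpgsql must be installed):

```sql
-- Reject an insert that would push an employee past 8 hours on one day.
CREATE FUNCTION check_hours() RETURNS trigger AS '
BEGIN
  IF (SELECT coalesce(sum(hours), 0) FROM worklog
      WHERE emp_name = NEW.emp_name
        AND work_date = NEW.work_date) + NEW.hours > 8 THEN
    RAISE EXCEPTION ''more than 8 hours for % on %'',
      NEW.emp_name, NEW.work_date;
  END IF;
  RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER worklog_hours BEFORE INSERT ON worklog
  FOR EACH ROW EXECUTE PROCEDURE check_hours();
```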
Re: [SQL] Why doesn't the SERIAL data type automatically have a
On Mon, 2005-09-26 at 20:03, Tom Lane wrote: > Ferindo Middleton Jr <[EMAIL PROTECTED]> writes: > > Is there some reason why the SERIAL data type doesn't automatically have > > a UNIQUE CONSTRAINT. > > It used to, and then we decoupled it. I don't think "I have no use for > one without the other" translates to an argument that no one has a use > for it ... I have to admit, right after the change was made, I was of the opinion that no one would ever need that. Then, a few months later, it was exactly what I needed for some project... :) ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Selecting count of details along with details columns
On Thu, 2005-09-29 at 14:08, Axel Rau wrote:
> Exactly, that query works as I expected. Thank you.
> Can you answer this question as well:
> > Looking for a workaround, I learned that aggregate functions are not
> > allowed in WHERE clauses.
> Question: Is this conform with the standard?

The problem is that WHERE clauses "fire" before GROUP BY does, so the aggregate doesn't exist yet when the WHERE clause would be doing its selection. Which is why the spec, and PostgreSQL, have the HAVING clause... See this part of the docs:

http://www.postgresql.org/docs/8.0/interactive/sql-select.html
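A quick illustration of the distinction, on a hypothetical table:

```sql
-- WHERE filters rows before grouping; HAVING filters groups after
-- the aggregate has been computed.
SELECT dept, count(*) AS headcount
FROM emp
WHERE active                -- row-level filter: no aggregates allowed here
GROUP BY dept
HAVING count(*) > 5;        -- group-level filter on the aggregate
```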
[SQL] creating postgres tables by passing a string to function
I am storing data separated by month. How do I perform Inserts and Selects based on something like this?

select * from (select 'shipped' || '0509') AS aShippingTable

The table named 'shipped' || '0509' exists. I am scanning barcode labels which have the '0509' on them and I want to move the data to the appropriate monthly tables. '0510' would represent shipped in 2005 October. Eventually I want to create, insert, and select all based on passing a string such as '0509' to a function. I am a newbie. Thanks in advance.

-Scott
Re: [SQL] Selecting records not present in related tables
On Thu, 2005-10-06 at 14:43, Hector Rosas wrote: > Hello, I'm trying to select records in a table not present in a > related table, in example, I've a table with message information > (subject, message, date, etc) and another (usermessages) with where > user(s) has that message, its state, etc. Records in this table will > be deleted in a certain time (just some extra info). > I want to select messages records that aren't present in the other > table (usermessages), I got the next two queries, maybe someone can > suggest a better one. A fairly common way to do this is to use a left join and a not null: select a.id from tablea a left join tableb b on (a.id=b._aid) where b._aid IS NULL ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
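The equivalent NOT EXISTS form, which some find easier to read (same hypothetical table and column names as above):

```sql
-- Anti-join written as NOT EXISTS: messages with no matching
-- usermessages row.
SELECT a.id
FROM tablea a
WHERE NOT EXISTS (SELECT 1 FROM tableb b WHERE b._aid = a.id);
```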
Re: [SQL] pg, mysql comparison with "group by" clause
On Tue, 2005-10-11 at 16:12, Rick Schumeyer wrote:
> I'm not sure what I was thinking, but I tried the following query in pg:
>
> SELECT * FROM t GROUP BY state;
>
> pg returns an error.
>
> Mysql, OTOH, returns the first row for each state. (The first row with
> "AK", the first row with "PA", etc.)
>
> I'm no SQL expert, but it seems to me that the pg behavior is correct, and
> the mysql result is just weird. Am I correct?

Yes, you are correct. The SQL standard is quite clear that in order to appear in the select list, an entry must either be in the group by or be operated upon by an aggregate function. PostgreSQL supports the same kind of "first row per group" behavior by way of the non-standard SELECT DISTINCT ON () clause. Since you don't know for sure which answer you'll get each time, it's better to KNOW you're doing something that may not be reproducible than to accidentally do it when your database SHOULD be throwing an error. That's just one of many many things MySQL does that makes my head hurt. For more, search google for "mysql gotchas"
Re: [SQL] pg, mysql comparison with "group by" clause
On Wed, 2005-10-12 at 16:54, Greg Stark wrote: > Stephan Szabo <[EMAIL PROTECTED]> writes: > > > On Tue, 11 Oct 2005, Rick Schumeyer wrote: > > > > > I'm not sure what I was thinking, but I tried the following query in pg: > > > > > > SELECT * FROM t GROUP BY state; > > > > > > pg returns an error. > > > > > > Mysql, OTOH, returns the first row for each state. (The first row with > > > "AK", the first row with "PA", etc.) > > > > > > I'm no SQL expert, but it seems to me that the pg behavior is correct, and > > > the mysql result is just weird. Am I correct? > > > > In your case, it sounds like the mysql result is wrong. I believe SQL99 > > would allow it if the other columns were functionally dependant upon state > > (as there'd by definition only be one value for the other columns per > > group). > > I believe this is a documented feature. Hehe. When I turn on my windshield wipers and my airbag deploys, is it a documented "feature" if the dealership told me about this behaviour ahead of time? In much the same way, while this behaviour may be documented by MySQL, I can't imagine it really being called a feature. But at least this misbehaviour is documented. However, I think most people in the MySQL universe just stumble onto it by accident when they try it and it works. I'd at least prefer it to throw a warning or notice or something. > MySQL treats "select a,b from t group by a" equivalently to Postgres's > "select distinct on (a) a,b from t" > > I suppose "equivalent" isn't quite true. It's more general since it allows > aggregate functions as well. The equivalently general Postgres syntax is to > have a first() aggregate function and do "select a,first(b) from t group by > a". A Subselect would let you do such a thing as well, and while it's more complicated to write, it is likely to be easier to tell just what it's doing. > I'm sure it's very convenient. Not always, but I'd rather get the right answer with difficulty than the wrong one with ease. 
:)
Re: [SQL] pg, mysql comparison with "group by" clause
On Wed, 2005-10-12 at 20:13, Greg Stark wrote: > Scott Marlowe <[EMAIL PROTECTED]> writes: > > > Hehe. When I turn on my windshield wipers and my airbag deploys, is it > > a documented "feature" if the dealership told me about this behaviour > > ahead of time? > > Well it's more like my car where the dashboard dims when I turn on my > headlights which annoys me to no end since I learned to always put my > headlights on even in the day. Sorry, but it's worse than that. It is quite possible that two people could run this query at the same time and get different data from the same set and the same point in time. That shouldn't happen accidentally in SQL, you should know it's coming. But it's not the same as the air bag deploying, it's like a different random part of my car activates / deactivates each time. The horn, a turn indicator, the trunk opens. > > In much the same way, while this behaviour may be documented by MySQL, I > > can't imagine it really being called a feature. But at least this > > misbehaviour is documented. However, I think most people in the MySQL > > universe just stumble onto it by accident when they try it and it works. I'd > > at least prefer it to throw a warning or notice or something. > > I don't see why you think people stumble on this by accident. I think it's > actually an extremely common need. So common that Postgres has the same > feature (though less general) and invented a whole syntax to handle it. Because I answer a boatload of questions on phpbuilder, where there are tons of MySQL and PostgreSQL beginners who learn by stumbling around in their database of choice. Most MySQL users think that the select a,b,c from table group by a is a valid query, and don't even realize that they are getting theoretically different results each time. It's one of those many things they learn wrong on MySQL and have to unlearn everywhere else. 
They didn't go looking for this behaviour, and almost none of them realized when they were doing it that they could get a different answer each time. > I think most MySQL users don't stumble on it, they learn it as the way to > handle the common use case when you join a master table against a detail table > and then want to aggregate all the detail records. This isn't really common sense all the time though. It's more about the law of unintended consequences. People write these queries, and never realize that they are actually random responses coming back. And if they aren't random responses, then their data likely isn't normalized. > In standard SQL you have to > write GROUP BY ... and list every single column you need from the master > table. Forcing the database to do a lot of redundant comparisons and sort on > uselessly long keys where in fact you only really need it to sort and group by > the primary key. But again, you're getting whatever row the database feels like giving you. A use of a simple, stupid aggregate like an any() aggregate would be fine here, and wouldn't require a lot of overhead, and would meet the SQL spec. The real reason this thing exists today and not an any() aggregate or some equivalent in MySQL is because of all the legacy code using the messed up group by syntax. It's hard to change that kind of stuff when you've got a lot of market share to hold on to. > Remember, most MySQL users learn MySQL first, and only later learn what is > standard SQL and what isn't. Hehe. I'll never forget, remember, I answer questions on databases on phpbuilder. I'm always amazed at how many folks come there who are just starting out and making the same mistakes we all made when beginning. > > A Subselect would let you do such a thing as well, and while it's more > > complicated to write, it is likely to be easier to tell just what it's > > doing. > > Subselects have their own problems here. 
> Mainly Postgres's optimizer, as good as it is, doesn't treat them with
> the same code paths as joins and can't find all the same plans for them.
> But in any case you cannot always write a subselect that's equivalent to
> an arbitrary join.

Actually, for things like aggregates, I've often been able to improve performance with sub selects in PostgreSQL. Although, back in the 7.2 days it was still pretty pokey at that kind of stuff.
Re: [SQL] pg, mysql comparison with "group by" clause
On Thu, 2005-10-13 at 13:26, Greg Stark wrote: > Scott Marlowe <[EMAIL PROTECTED]> writes: > > > Sorry, but it's worse than that. It is quite possible that two people > > could run this query at the same time and get different data from the > > same set and the same point in time. That shouldn't happen accidentally > > in SQL, you should know it's coming. > > I'm pretty unsympathetic to the "we should make a language less powerful and > more awkward because someone might use it wrong" argument. I'm in favor of getting the right answer. If my database can't do that, then it's not worth the bits to download it. But I'm funny that way. > > > In standard SQL you have to > > > write GROUP BY ... and list every single column you need from the master > > > table. Forcing the database to do a lot of redundant comparisons and sort > > > on > > > uselessly long keys where in fact you only really need it to sort and > > > group by > > > the primary key. > > > > But again, you're getting whatever row the database feels like giving > > you. A use of a simple, stupid aggregate like an any() aggregate would > > be fine here, and wouldn't require a lot of overhead, and would meet the > > SQL spec. > > Great, so I have a user table with, oh, say, 40 columns. And I want to return > all those columns plus their current account balance in a single query. > > The syntax under discussion would be: > > select user.*, sum(money) from user join user_money using (user_id) group by > user_id > You would prefer: > > select user_id, >any(username) as username, any(firstname) as firstname, >any(lastname) as lastname, any(address) as address, >any(city) as city, any(street) as street, any(phone) as phone, >any(last_update) as last_update, any(last_login) as last_login, >any(referrer_id) as referrer_id, any(register_date) as register_date, >... 
>    sum(money) as balance,
>    count(money) as num_txns
> from user join user_money using (user_id) group by user_id

I'd select the SINGLE entries from a child table that matched the parent id and add the sum(money) to it. Then there'd be no need for aggregate functions on those fields, or inaccurate / possibly random data. But I'm funny that way.

> Having a safeties is fine but when I have to disengage the safety for every
> single column it starts to get more than a little annoying.
>
> Note that you cannot write the above as a subquery since there are two
> aggregates. You could write it as a join against a view but don't expect to
> get the same plans from Postgres for that.

I'd just write it as a big join. Again, getting the right answer is important to me.
Re: [SQL]
On Thu, 2005-10-13 at 10:31, Shavonne Marietta Wijesinghe wrote: > how can i do a query with 2 databases?? This is only supported by an add on called dblink, and it's a little bit klunky. Could schemas solve your problem? ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
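A sketch of a dblink call (the contrib module must be installed; the connection string and column list here are hypothetical):

```sql
-- dblink() returns SETOF record, so the column list must be spelled
-- out in the alias.
SELECT *
FROM dblink('dbname=otherdb',
            'SELECT id, name FROM sometable')
     AS t(id integer, name text);
```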
Re: [SQL] Blank-padding
On Sat, 2005-10-22 at 00:39, Chris Travers wrote: > Tom Lane wrote: > > >"Dean Gibson (DB Administrator)" <[EMAIL PROTECTED]> writes: > > > > > >>I remember that discussion, and I was for the change. However, upon > >>doing some testing after reading the above, I wonder if the > >>blank-stripping isn't too aggressive. I have a CHAR(6) field (say, > >>named Z) that has "abc " in it. Suppose I want to append "x" to Z, > >>with any leading spaces in Z PRESERVED. > >> > >> > > > >(You meant trailing spaces, I assume.) Why exactly would you want to do > >that? You decided by your choice of datatype that the trailing spaces > >weren't significant. > > > I once built a telecom billing app where this might be important (fixed > length fields). Lets say you have fixed length fields defined as > char(n) datatypes. You may want to build a query to generate billing > records like: > select field1 || field2 || field3 || field4 || field5 ... AS bill_record > FROM lec_billing_entries; > > It seels to me that I would expect trailing spaces to be preserved in > these cases. Having an implicit rtrim function is asking for problems. > Personally I would rather have to call rtrim explicitly than have the > backend treat the concatenation differently than if I do it on the client. If I rememberate correctificantly, this problem is or was caused by the || operator using the same internal logic for varchar / text AND char. Tom, is it reasonable / doable to have || have a different set of internal functions for each of those types. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] why vacuum
On Tue, 2005-10-25 at 23:45, Kenneth Gonsalves wrote: > hi, > i was in a minor flame war with a mysql guy - his major grouse was that > 'I wouldnt commit mission critical data to a database that needs to be > vacuumed once a week'. So why does pg need vacuum? The absolutely funniest thing about what this guy is saying is that he seems rather ignorant of the behaviour of innodb tables. They have another name for the vacuum command there. It's: ALTER TABLE tbl_name ENGINE=INNODB Which rebuilds the whole fraggin' table, with an exclusive lock. And guess what innodb does if you don't run this command every so often? Can you guess yet? Yep, that's right, it just keeps growing and growing and growing. Hell, innodb isn't any better than the original MVCC implementation postgresql had, when vacuums were all full and took exclusive locks. But at least with PostgreSQL it was a well documented issue, and was mentioned in the administrative section of the docs, so you knew you had to do it. It's kind of tucked away in the innodb section of the mysql docs, and most mysql folks don't even know they need to do it, since they almost all use myisam table types. If someone is more worried about postgresql's non-blocking, easily scheduled vacuuming, but is using myisam tables, and a database that by default allows numeric overflows to just insert the maximum possible value, I wouldn't trust them with handling my paycheck with their fan-boy database. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] why vacuum
On Tue, 2005-10-25 at 23:45, Kenneth Gonsalves wrote: > hi, > i was in a minor flame war with a mysql guy - his major grouse was that > 'I wouldnt commit mission critical data to a database that needs to be > vacuumed once a week'. So why does pg need vacuum? Oh man oh man. After reading the article, I realized he was saying that he wouldn't trust PostgreSQL to replace Oracle. He apparently wouldn't trust MySQL to replace oracle either. But, the next time someone says that slony is a toy add on, and MySQL has REAL replication, point them to THIS page on the same blog: http://ebergen.net/wordpress/?p=70 In short, it basically shows that MySQL replication is incredibly fragile, and not fit for production on any real system. The lack of system wide transaction support, like postgresql has, makes the problem he outlines that much worse. The hoops people will jump through to use their favorite toys... ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] Combining two SELECTs by same filters
On Wed, 2005-10-26 at 10:16, Volkan YAZICI wrote:
> => SELECT
> -> (SELECT count(id) FROM sales
> -> WHERE id = 2
> -> AND date_trunc('hour', dt) = '2005-10-25 21:00:00'),
> -> (SELECT count(id) FROM sales
> -> WHERE id = 2
> -> AND date_trunc('hour', dt) = '2005-10-25 22:00:00');
>  ?column? | ?column?
> ----------+----------
>         6 |        2
> (1 row)
>
> Isn't it possible to combine these two SELECTs as one. Because one of
> their filters are same: id = 2. I'm just trying to avoid making 2
> scans with nearly same filters.
Do something like this:
select count(id)
from sales
where id=2 and
dt between 'firstdatehere' and 'lastdatehere'
group by date_trunc('hour', dt);
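Made concrete with the timestamps from the original post (a sketch; half-open bounds are assumed so each hour bucket is counted once):

```sql
SELECT date_trunc('hour', dt) AS hour, count(id)
FROM sales
WHERE id = 2
  AND dt >= '2005-10-25 21:00:00'
  AND dt <  '2005-10-25 23:00:00'
GROUP BY date_trunc('hour', dt)
ORDER BY hour;
```

Note one difference from the original pair of subselects: an hour with no matching rows simply produces no output row here, rather than a zero.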
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [SQL] why vacuum
On Wed, 2005-10-26 at 11:09, Jan Wieck wrote: > On 10/26/2005 11:19 AM, Scott Marlowe wrote: > > > On Tue, 2005-10-25 at 23:45, Kenneth Gonsalves wrote: > >> hi, > >> i was in a minor flame war with a mysql guy - his major grouse was that > >> 'I wouldnt commit mission critical data to a database that needs to be > >> vacuumed once a week'. So why does pg need vacuum? > > > > Oh man oh man. After reading the article, I realized he was saying that > > he wouldn't trust PostgreSQL to replace Oracle. He apparently wouldn't > > trust MySQL to replace oracle either. > > > > But, the next time someone says that slony is a toy add on, and MySQL > > has REAL replication, point them to THIS page on the same blog: > > > > http://ebergen.net/wordpress/?p=70 > > > > You must have missed the FAQ and other side notes about replication in > the MySQL manual. Essentially MySQL replication is nothing but a query > duplicating system, with the added sugar of taking care of now() and > some other non-deterministic things, but not all of them. > > Non-deterministic user defined procedures, functions and triggers will > simply blow MySQL's sophisticated replication apart. True, but I never expected a CTRL-C to the mysql command line to break replication. Even for MySQL's lackadaisical behaviour, that's pretty far out. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] why vacuum
On Wed, 2005-10-26 at 11:12, Tom Lane wrote: > Scott Marlowe <[EMAIL PROTECTED]> writes: > > On Tue, 2005-10-25 at 23:45, Kenneth Gonsalves wrote: > >> i was in a minor flame war with a mysql guy - his major grouse was that > >> 'I wouldnt commit mission critical data to a database that needs to be > >> vacuumed once a week'. So why does pg need vacuum? > > > Oh man oh man. After reading the article, I realized he was saying that > > he wouldn't trust PostgreSQL to replace Oracle. > > Well, that's a slightly more respectable point of view, but Oracle has > surely got its own set of gotchas ... doesn't it still have issues if > you run a transaction that's large enough to overrun the fixed-size > rollback areas (or whatever they call them)? Yep, and it also has the "snapshot too old" issue for long running transactions (long running meaning how many other things have happened since it started, not length of time). Imagine starting a backup in postgresql, getting an hour into it and suddenly it ends because 10,000,000 rows have been inserted while it was going. That kind of error. My reply on that forum pointed out that EVERY database has gotchas, and to pretend that your database's gotchas are ok but some other database's gotchas are unacceptable is disingenuous at best. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] child fk problem
On Wed, 2005-11-30 at 12:42, Luis Silva wrote: > Hi there, I'm trying to work with postgres, but I'm having a problem > with inherits. I have a table (parent) that has an fk to another table. > When I create a child, I lose the connection to the other table. I > don't need to insert values in the parent table. What do I need to do?? > tks in advance Neither indexes, nor triggers, nor foreign keys are inherited by child tables; only the base structure of the parent is. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
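So the usual workaround is to re-declare the constraint on each child table, for example (a sketch; the table and column names here are made up):

```sql
-- FKs are not inherited, so each child must declare its own.
CREATE TABLE child_t () INHERITS (parent_t);
ALTER TABLE child_t
    ADD CONSTRAINT child_t_other_fk
    FOREIGN KEY (other_id) REFERENCES other_t (other_id);
```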
Re: [SQL] DB design and foreign keys
On Tue, 2005-12-13 at 12:16, Gianluca Riccardi wrote: > hello all, > i'm usign PostgreSQL 7.4.7 in a Debian 3.1 > > following is the SQL schema of my (very)small DB for a (very small)web > business application: > CREATE TABLE orders ( >id serial, >order_code serial, >customer_code integer REFERENCES customers (customer_code) NOT NULL, >order_date time without time zone NOT NULL, >remote_ip inet NOT NULL, >order_time timestamp with time zone NOT NULL, >order_type varchar(10) NOT NULL, >state varchar(10) NOT NULL, >PRIMARY KEY (id, order_code) > ); Given this table layout, I'm gonna take a wild guess and ask if you're coming from MySQL and expecting the second serial order_code to be a sub-autoincrement to id? If so, it won't be. That's a mysqlism. If you want something similar, you'll have to implement it yourself, and note that such a thing tends to be a poor performer with lots of parallel updates, and it can also be susceptible to race conditions if no locking is used. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[SQL] JOIN question with multiple records
I have 2 tables. One containing information about servers, and the other containing information about IP addresses. E.G. Server table:

name     mac                mac2
-------- ------------------ ------------------
SERVER1  00:0d:56:ba:ad:92
SERVER2  00:0d:56:ba:ad:93  00:0d:56:ba:ad:96
SERVER3  00:0d:56:ba:ad:94
SERVER4  00:0d:56:ba:ad:95  00:0d:56:ba:ad:97

Address table:

ip        mac
--------- ------------------
10.0.0.1  00:0d:56:ba:ad:92
10.0.0.2  00:0d:56:ba:ad:92
10.0.0.3  00:0d:56:ba:ad:94
10.0.0.4  00:0d:56:ba:ad:95

I need a query that will return all the IP addresses that match a server's mac address along with the rest of the information about the server in this format:

name     ip        mac                mac2
-------- --------- ------------------ ------------------
SERVER1  10.0.0.1  00:0d:56:ba:ad:92
         10.0.0.2
SERVER2            00:0d:56:ba:ad:93  00:0d:56:ba:ad:96
SERVER3  10.0.0.3  00:0d:56:ba:ad:94
SERVER4  10.0.0.4  00:0d:56:ba:ad:95

However, the best that I've done is to return a record of the server for every IP that matches in the address table:

name     ip        mac                mac2
-------- --------- ------------------ ------------------
SERVER1  10.0.0.1  00:0d:56:ba:ad:92
SERVER1  10.0.0.2  00:0d:56:ba:ad:92
SERVER2            00:0d:56:ba:ad:93  00:0d:56:ba:ad:96
SERVER3  10.0.0.3  00:0d:56:ba:ad:94
SERVER4  10.0.0.4  00:0d:56:ba:ad:95

An abbreviated version of the query looks like this. Ultimately there will be a WHERE condition on the query:

SELECT servers.name, addresses.ipaddr, servers.application_mgr
FROM servers
LEFT JOIN addresses ON addresses.mac = servers.mac
  OR addresses.mac = servers.mac2

Does anyone have any suggestions on this? Regards, Casey ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] Update in all tables
On Wed, 2006-02-22 at 12:08, Judith wrote: >Hello everybody I need to update a field with the same value in the > tables of my data base but this field exists in almost all tables and > has the same value, I don't want to code a script, so my question is if > there is some way to update that field with a query and affects all the > tables that contain the field? Are those fields all dependent FK fields? If so, then declaring those foreign keys as on update cascade is all you need. If they're not related that way, then you'll have to script it. If you need them all to change at the same time (or all roll back in the event of an update failure) then you can wrap the changes in a transaction (begin/end pair) and it'll be atomic. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Update in all tables
On Wed, 2006-02-22 at 15:13, Andrew Sullivan wrote: > On Wed, Feb 22, 2006 at 11:59:06AM -0600, Judith Altamirano Figueroa wrote: > > Hello everybody I need to update a field with the same value in the > > tables of my data base but this field exists in almost all tables and > > has the same value, I don't want to code a script, so my question is if > > there is some way to update that field with a query and affects all the > > tables that contain the field? > > No, sorry. While we're at it, though, if you have the same field in > several tables, it's a good sign that your database is badly > normalised. You shouldn't have to update more than one table (which > is why there isn't a way to do this automatically). Unless they're all FK/PK pairs, in which case you can declare the FKs with ON UPDATE / ON DELETE CASCADE. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Replication - state of the art?
On Wed, 2006-03-01 at 11:51, Bryce Nesbitt wrote: > I'm interested in creating a mirror database, for use in case one of our > primary machines goes down. Can people here help sort out which of the > several replication projects is most viable? > > As far as I can tell, the winner is slony1 at > http://gborg.postgresql.org/project/slony1/projdisplay.php , but there > are several contenders. > >-- > > The needs here are very relaxed. We have one main postmaster which runs > 24x7x365. There's another machine nearby that can accept a log or > journal of some sort. The alternate never needs DB access, except in > case of main machine failure, and then we can take up to 15 minutes to > switch over and rebuild the DB. "No-lost transaction" is far more > important than switch time. > > Anyone here using replication or transaction journaling? Has it proved > reliable, easy to maintain? You might want to look at pgpool in mirror replication mode as well. It's got some limitations due to its query-shipping nature, but may give you what you need. It knows how to switch off from the dead server and keep running on the one good one. It's solid software though. Mammoth may be a better option for you. It's not that horribly expensive, and setup is supposed to be a snap. If you use slony, you might want to look at frontending it with pgpool, which makes switching the servers around a little easier, as you can do it in pgpool instead of your app layer. Lots of choices. Hard to say which is right. I really like slony, and use it at work. I'm quite happy with it. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] Interval subtracting
On Wed, 2006-03-01 at 14:18, Bruce Momjian wrote: > Stephan Szabo wrote: > > > justify_days doesn't currently do anything with this result --- it > > > thinks its charter is only to reduce day components that are >= 30 days. > > > However, I think a good case could be made that it should normalize > > > negative days too; that is, the invariant on its result should be > > > 0 <= days < 30, not merely days < 30. > > > > What about cases like interval '1 month -99 days', should that turn into > > interval '-3 mons +21 days' or '-2 mons -9 days'? > > I think it should be the later. It is best to have a single sign, and I > think it is possible in all cases: > > '2 mons -1 days' > > could be adjusted to '1 mons 29 days'. There's a part of me that thinks the WHOLE THING should be positive or negative: -(2 months 1 day) ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] [SQL] Interval subtracting
On Wed, 2006-03-01 at 14:27, Bruce Momjian wrote: > Scott Marlowe wrote: > > On Wed, 2006-03-01 at 14:18, Bruce Momjian wrote: > > > Stephan Szabo wrote: > > > > > justify_days doesn't currently do anything with this result --- it > > > > > thinks its charter is only to reduce day components that are >= 30 > > > > > days. > > > > > However, I think a good case could be made that it should normalize > > > > > negative days too; that is, the invariant on its result should be > > > > > 0 <= days < 30, not merely days < 30. > > > > > > > > What about cases like interval '1 month -99 days', should that turn into > > > > interval '-3 mons +21 days' or '-2 mons -9 days'? > > > > > > I think it should be the later. It is best to have a single sign, and I > > > think it is possible in all cases: > > > > > > '2 mons -1 days' > > > > > > could be adjusted to '1 mons 29 days'. > > > > There's a part of me that thinks the WHOLE THING should be positive or > > negative: > > > > -(2 months 1 day) > > But it isn't '-2 months, -1 day'. I think what you are saying is what I > am saying, that we should make the signs consistent. Pretty much. It just seems wrong to have different signs in what is essentially a single unit. We don't say 42 degrees, -12 minutes when measuring arc, do we? Then again, maybe some folks do. It just seems wrong to me. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] pg_dump and diffrent sizes
On Fri, 2006-03-03 at 14:10, Maciej Piekielniak wrote: > Hello pgsql-sql, > > I dump db with pg_dump v.8.1.3 on database postgresql server 7.4.7. > Data directory with my db on pg 7.4.7 had 1,8GB and > file with dump had 2,7GB. > Database have blob fields. > > When I restore db on pg 8.1 - data directory have only 1GB, why? I would guess that you have lots of highly compressible text entries, and postgresql is compressing them for you. The difference in size between 7.4 and 8.1 is likely due to unrecycled tuples in the data store. Even with regular vacuuming, some small percentage of dead tuples is normal. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] [SQL] Interval subtracting
On Wed, 2006-03-08 at 06:07, Markus Schaber wrote: > Hi, Scott, > > Scott Marlowe wrote: > > >>But it isn't '-2 months, -1 day'. I think what you are saying is what I > >>am saying, that we should make the signs consistent. > > Pretty much. It just seems wrong to have different signs in what is > > essentially a single unit. > > > > We don't say 42 degrees, -12 minutes when measuring arc, do we? Then > > again, maybe some folks do. It just seems wrong to me. > > But we say "quarter to twelve", at least in some areas on this planet. > > The problem is that months have different lengths. '2 months - 1 day' > can be '1 month 27 days', '1 month 28 days', '1 month 29 days' or '1 > month 30 days', depending on the timestamp we apply the interval. I made this point before. In the military they say 1145 or 2345 instead of quarter to twelve, because 1: there are two "quarter to twelves" a day, and 2: it's easy to get confused. For the same reasons, i.e. a need for precision, I find it hard to accept the idea of mixing positive and negative units in the same interval. The plus or minus sign should be outside of the interval. Then, it's quite certain what you mean. If you say select '2006-06-12'::date - interval '1 month 2 days' there is no ambiguity. If you say: select '2006-06-12'::date + interval '-1 month -2 days' do you mean (1 month - 2 days) subtracted from the date, or do you mean to subtract 1 month, then 2 days from the date? Putting the + or - outside the interval seems to make the most sense to me. Allowing them inside makes no sense to me. And colloquialisms aren't really a good reason. :) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[SQL] Savepoint/Rollback in functions
I am running postgres 8.1.2 on both a Windows and Linux platform. The primary method of managing business logic is through the use of functions. Our primary access to the database is by using PL/pgSQL functions from psql. The problem that I am having is that we need to execute and implement "SAVEPOINT" and "ROLLBACK" functionality inside of FUNCTIONS. The "SAVEPOINT/ROLLBACK" works fine interactively. At the present time the documentation does not seem to be definitive on the functionality of this, BUT, we placed "SAVEPOINT/ROLLBACK" commands in a test function and the function failed. Question 1: do PL/pgSQL functions allow "SAVEPOINT/ROLLBACK" functionality? (if so how?) Question 2: if the answer to question 1 is "NO", what would be required to get this implemented in the standard product? Thanks in advance for any information and help. Scott. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] have you feel anything when you read this ?
On Mon, 2006-03-20 at 02:06, Eugene E. wrote: > http://dev.mysql.com/doc/refman/5.0/en/news-5-0-19.html > --- cut --- > mysql no longer terminates data value display when it encounters a NUL > byte. Instead, it displays NUL bytes as spaces. (Bug #16859) > --- cut --- Everyone here realizes that this is a mysql CLIENT bug, not server side. i.e. if you're writing an application and request that binary text string, you'll get it with nuls in it, just like you put in. Now, I'm not sure that turning nulls into spaces is the best way to handle this in the client. In fact, I'm sure it's not. But this is not a server bug, it's a client bug. Just FYI. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] Referential integrity broken (8.0.3), sub-select help
On Tue, 2006-03-21 at 08:58, [EMAIL PROTECTED] wrote: > Hello, > > I've got 2 tables, "url" (U), and "bookmark" (B), with "bookmark" pointing to > "url" via FK. > Somehow I ended up with some rows in B referencing non-existent rows in U. > This sounds super strange and dangerous to me, and it's not clear to me > how/why PG let this happen. > I'm using 8.0.3. > > Here are the table references I just mentioned: > > Table "bookmark": > id SERIAL > CONSTRAINT pk_bookmark_id PRIMARY KEY > > Table "url": > url_id INTEGER > CONSTRAINT fk_bookmark_id REFERENCES bookmark(id) > > > Problem #1: Strange that PG allowed this to happen. Maybe my DDL above > allows this to happen and needs to be tightened? I thought the above would > ensure referential integrity, but maybe I need to specify something else? Assuming you didn't do something like turning off all triggers at some point, the other common cause of this kind of thing is bad hardware (CPU, memory, hard drive, etc...) so test your hardware. Any machine going into production as a database server should be heavily tested to ensure that it has good hardware. No database management program can be expected to overcome broken hardware or OSes. Good tools for testing are memtest86 and doing a fdisk with the badblocks option (in linux, not sure what the name is in bsd, but I'm sure it has some kind of block tester in there somewhere.) You can also write your own scripts to test a drive by writing the same semi-random byte sequence to the drive, filling it up, then reading it back and comparing them. All zeros and all ones is a good test, and there are patterns that tend to show problems. Generally, most drives that have problems will show them rather quickly in testing, with bad blocks flying by by the hundreds. But sometimes, it's just one block causing a problem. > Problem #2: I'd like to find all rows in B that point to non-existent rows in > U. 
> I can do it with the following sub-select, I believe, but it's rather inefficient (EXPLAIN shows both tables would be sequentially scanned): > > SELECT * FROM bookmark WHERE id NOT IN (SELECT b.id FROM bookmark b, url u WHERE b.url_id=u.id); Generally a left join with an IS NULL test in the where clause is more efficient: select a.id from table1 a left join table2 b on (a.id=b.aid) where b.aid is null; will show you all the rows in table1 that have no match in table2. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
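Adapted to the poster's tables, the anti-join might look like this (a sketch; the column names are guessed loosely from the post, since the DDL shown there is itself a bit scrambled):

```sql
-- Bookmarks whose url_id points at no existing url row.
SELECT b.*
FROM bookmark b
LEFT JOIN url u ON u.id = b.url_id
WHERE b.url_id IS NOT NULL
  AND u.id IS NULL;
```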
Re: [SQL] generate_series to return row that doesn't exist in
On Fri, 2006-03-24 at 14:30, MaXX wrote: > Hi, > > I have a table wich contains aggregated data, > table stats_activity > logtime timestamptz, > count int > > given this dataset > "2006-03-24 03:00:00+01";55 > "2006-03-24 04:00:00+01";33 > "2006-03-24 06:00:00+01";46 > "2006-03-24 07:00:00+01";63 > "2006-03-24 08:00:00+01";88 > > I want to get this in order to plot the data > "2006-03-24 03:00:00+01";55 > "2006-03-24 04:00:00+01";33 > >>"2006-03-24 05:00:00+01";0<< > "2006-03-24 06:00:00+01";46 > "2006-03-24 07:00:00+01";63 > "2006-03-24 08:00:00+01";88 > > I used generate_series to get all the timestamps I need but I don't know how > to write my query. > > > I've tried various combination of subselects, joins, union,... and I never > managed to get the result I wanted... I'm sure the solution is trivial but I > don't get it... > I prefer to generate missing rows "on the fly" intead of actually storing > useless data on the table. More than likely you need a left join and a case statement. select p.date, case when a.date is null then 0 else a.count end from (select * from generate_series() -- magic to get dates goes here) as p left join maintable as a on (p.date=a.date); There may be some small syntax error in there, as I've not tested it. The relevant pages are: case: http://www.postgresql.org/docs/8.1/static/functions-conditional.html joins: http://www.postgresql.org/docs/8.1/static/queries-table-expressions.html ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
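Made concrete for the hourly data in the post (a sketch; since 8.1-era generate_series only takes integers, the timestamps are built by adding hour intervals to the first bucket, and COALESCE stands in for the CASE):

```sql
-- Six hourly buckets from 03:00 to 08:00; missing hours come back as 0.
SELECT p.ts, COALESCE(s.count, 0) AS count
FROM (SELECT '2006-03-24 03:00:00+01'::timestamptz
             + g.i * interval '1 hour' AS ts
      FROM generate_series(0, 5) AS g(i)) p
LEFT JOIN stats_activity s ON s.logtime = p.ts
ORDER BY p.ts;
```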
Re: [SQL] Flight numbers data
On Wed, 2006-03-29 at 02:17, Achilleus Mantzios wrote: > Hi, i am in the process of writing an application about > tickets, flights, etc, and i am thinking of getting the primitive > data ready at the begining and doing it the right way, > (e.g. the user will just select a flight number and doesnt > have to enter the airports, airport coordinates, countries, > airline name, airline code, departure/arrival times, etc...) > > I found some sources on the web about airlines (codes, names, countries, > etc...), also i found about airports, but nothing about flight numbers! That's cause companies that keep track of those things charge a LOT of money for their datasets. > This application will be for enterprise (internal) company use, > covering the flights of emploees around the world. > > I was just wondering how internet ticket agents manage to do it. They subscribe to some service that has this information in the back end. The data in that back end comes from the one or two sources of that data who charge yearly subscriptions in the hundreds of thousands of dollars. > Has anyone from the postgresql camp ever solved such a problem? Where I work, we're building a middle level system (look up the website that goes with my domain). And if we weren't in the airline reservation industry, we couldn't afford the data sets. > It is just that i wanna do it right, maybe play a little bit with AJAX > also who knows :) But "doing it right" goes against almost every tenet of the airline reservation industry :) haha. only serious. Seriously though, you might be able to get your travel agent or whoever you do reservations through to provide you with this information in some kind of reliable format for the tickets you've bought. If you can get some kind of automated feed from them, that would be the best way, if not, maybe they can email you the info each time, and you can cut and paste it in. 
There's much in the reservation industry that is basically a computer implementation of a 3x5 note card system. And none of those systems were originally built to talk to each other, so it's often impossible for a single user to get any kind of automatically created data from such an industry. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] How to copy data between joined columns?
On Fri, 2006-04-07 at 15:32, Bryce Nesbitt wrote: > I have a need to copy/update data from one column to another, based on a > join condition. Is this easy to do in pure SQL? I have google'ed > without luck for a easy solution (that's not Microsoft specific, that > is). For example: > > postgres=# > select xx_thing.date_start,xx_note.created_date > from xx_thing > join xx_issue using (thing_id) join xx_note using (issue_id) > where xx_note.created_date is not null; > > date_start |created_date > +- > 2006-03-17 | 1900-01-01 > (...many... rows) > > I need to copy date_start to created_date, for all records that meet a > particular where condition. If it's for the same rows in the same table, you're overworking it. update table set field1=field2 where somecondition; ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
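For the cross-table case the poster actually describes, PostgreSQL's UPDATE ... FROM extension can carry the same join as the SELECT (a sketch; the join columns are guessed from the query in the post):

```sql
-- Copy date_start from xx_thing into xx_note.created_date,
-- following the thing -> issue -> note chain.
UPDATE xx_note
SET created_date = xx_thing.date_start
FROM xx_issue, xx_thing
WHERE xx_note.issue_id  = xx_issue.issue_id
  AND xx_issue.thing_id = xx_thing.thing_id
  AND xx_note.created_date IS NOT NULL;
```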
Re: [SQL] LinkedList
On Wed, 2006-04-26 at 11:09, Ray Madigan wrote: > I have a table that I created that implements a linked list. I am not an > expert SQL developer and was wondering if there are known ways to traverse > the linked lists. Any information that can point me in the direction to > figure this out would be appreciated. The table contains many linked lists > based upon the head of the list and I need to extract all of the nodes that > make up a list. The lists are simple with a item and a link to the history > item so it goes kind of like: > > 1, 0 > 3, 1 > 7, 3 > 9, 7 > ... > > Any suggestions would be helpful, or I will have to implement the table > differently. You should be able to do this with a fairly simple self-join... select a.id, b.aid, a.field1, b.field1 from mytable a join mytable b on (a.id=b.aid) Or something like that. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
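The self-join in that reply only walks one link at a time. For a full traversal in one statement, later PostgreSQL releases (8.4 and up) offer recursive CTEs; a sketch with made-up names (table lists, columns item/prev, head marked by prev = 0):

```sql
-- Walk one linked list from its head, numbering the nodes.
WITH RECURSIVE chain AS (
    SELECT item, prev, 1 AS depth
    FROM lists
    WHERE prev = 0                      -- head of the list
    UNION ALL
    SELECT l.item, l.prev, c.depth + 1
    FROM lists l
    JOIN chain c ON l.prev = c.item     -- follow the link
)
SELECT item, depth FROM chain ORDER BY depth;
```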
[SQL] PL/PGSQL - How to pass in variables?
Using PL/pgSQL, I am trying to create a procedure to display the count of rows in any single table of a database. The end user would pass in a table name and the procedure would display the table name with the row count. I am able to hardcode the variable for table and get the appropriate results from my count function (see below), but cannot pass in a variable and have the function work. Any suggestions? CREATE FUNCTION get_table_count(tablename text) RETURNS integer AS $$ DECLARE --tablename ALIAS FOR $1; rowcount INTEGER; BEGIN SELECT INTO rowcount count(*) FROM tablename; RETURN rowcount; END; $$ LANGUAGE 'plpgsql';
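For reference, the usual reason this fails is that a table name cannot be supplied as a variable in a static PL/pgSQL query; it has to go through dynamic SQL. A sketch of the fix (EXECUTE ... INTO needs 8.1 or later; quote_ident guards the identifier against odd characters):

```sql
CREATE FUNCTION get_table_count(tablename text) RETURNS integer AS $$
DECLARE
    rowcount integer;
BEGIN
    -- Build the statement as a string so the table name is substituted.
    EXECUTE 'SELECT count(*) FROM ' || quote_ident(tablename)
        INTO rowcount;
    RETURN rowcount;
END;
$$ LANGUAGE plpgsql;
```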
Re: [SQL] Find min and max values across two columns?
On Mon, 2006-05-15 at 16:40, Emi Lu wrote: > Hello, > > I tried "select greatest(max(a), max(b)) from public.test", but I got > the following errors: > > ERROR: function greatest(integer, integer) does not exist > HINT: No function matches the given name and argument types. You may > need to add explicit type casts. > > May I know where I can read the docs about greatest & least please. > > I am using PostgreSQL 8.0.7. You need to define your problem better. Assuming these data:

 a   b
12  24
24  12
13  18
25  10

which value should greatest(max(a), max(b)) return? Do you just want 25, 24? In that case you don't need greatest. Do you need the highest total between the two, or what? ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
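For what it's worth, GREATEST/LEAST only appeared in 8.1, which is why 8.0.7 rejects the call. On 8.0 an equivalent can be spelled with CASE (a sketch; note its NULL handling differs slightly from GREATEST, which skips NULL arguments):

```sql
-- CASE stand-in for greatest(max(a), max(b)) on pre-8.1 servers.
SELECT CASE WHEN max(a) >= max(b) THEN max(a) ELSE max(b) END
FROM public.test;
```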
Re: [SQL] Am I crazy or is this SQL not possible
On Thu, 2006-06-01 at 14:47, Yasir Malik wrote: > > It is a hack, but when someone wants you to do something in a way > > different from the norm, aren't they asking for a hack? > > > > SQL Server does something like > > select top (1) from > > > > I am thinking this is NOT a SQL-99 standard. > > > This was an interview with Amazon, and I don't think Amazon wanted a hack. > I hope Amazon doesn't use hacks. There has to be another way. It would > be cruel if they expected me know some database specific functionality. > Here's what I gave them: > select value >from table t > where t.value > >(select t2.value > from table t2); > > which would be fine if the sub-select returned simply a list, but that > isn't even valid SQL (I didn't get the job, BTW). Sometimes the only reasonable answer is a hack. Much like in the older versions of PostgreSQL where select max(id) from sometable was slower than watching grass grow, the standard "hack" was to do select id from sometable order by id desc limit 1. If the SQL way kills your server and the hack doesn't, you use the hack and move on. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Advanced Query
On Tue, 2006-06-06 at 10:30, Richard Broersma Jr wrote: > > Personally: I think your posts are getting annoying. This isn't SQLCentral. > > Learn to write your own damn queries or even better - buy a book on SQL... > > Personally: (being a newbie with an interest in developing a strong rdms > skillset) I've enjoyed > following threads like these. Even when the questions (to some) seems overly > simplistic, the > courteous respondents often share insightful solutions or nuances that are > not found in an "off > the self" SQL book. > > However, if questions like these are *really* off-topic for the pgsql-sql I > would be interested in > knowing what kind of threads are acceptable and on-topic for this list. > > Also, if there are other mailing lists (pg or other) that are better suited > for threads like this, > I would appreciate learning of them. Personally, I too enjoy these threads. And when I find someone annoying (not you, by the way) I just ignore them. It's a modern world. there are these things called filters. I use them to ignore people that annoy me, rather than demanding they stop posting. Like my momma says, if you can't say anything nice, don't say anything at all. Keep posting. If I don't wanna read / answer your posts, I won't. If I've got 5 free minutes, I will. I suggest codeWarrier do the same. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] How to get list of days between two dates?
On Tue, 2006-06-06 at 16:17, Aaron Bono wrote:
> Though there may be a more elegant way to do it, when we did things
> like this in the past we created a function (or stored procedure) that
> got the min and max dates and then created a result set that iterated
> through the dates to create a virtual table of days.  Then you can
> inner join that list of days with your physical table.

If you're on a version of pgsql with generate_series, you can use that to
easily create a pivoting data set on the fly to do this.  If you're on an
older version, it's easy enough to make a generic pivot table and keep it
around for these things.

Since we're still on 7.4 where I work, we use the pre-populated pivot
table.  It's primitive, but it works.
Re: [SQL] empty set
On Thu, 2006-06-08 at 16:40, CG wrote:
> PostgreSQL 8.1
>
> I've been trying to write a SQL prepare routine.  One of the
> challenging elements I'm running into is an empty set ...
>
> "select foo from bar where foo in ? ;"
>
> What if "?" is a set with zero elements?  What is the proper value to
> use to replace "?" indicating an empty set?

NULL?
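For what it's worth, NULL is not quite an empty set here, though it happens to give the desired result for a plain IN. `foo IN (NULL)` evaluates to NULL for every row, which is never TRUE, so no rows come back. The trap is the negated form. A sketch, reusing the table and column names from the question:

```sql
-- IN against a lone NULL is never TRUE, so this returns zero rows,
-- which matches what an empty IN-list "should" do:
SELECT foo FROM bar WHERE foo IN (NULL);

-- But NOT IN (NULL) is also never TRUE (NOT NULL is still NULL), so
-- this returns zero rows as well, NOT "all rows" as an empty set would:
SELECT foo FROM bar WHERE foo NOT IN (NULL);
```

If the prepare routine may also emit NOT IN, it is safer to substitute a condition that is literally false or true (e.g. `1 = 0` for an empty IN, `1 = 1` for an empty NOT IN) than to lean on NULL semantics.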
Re: Fwd: [SQL] Start up question about triggers
On Mon, 2006-06-26 at 08:59, Forums @ Existanze wrote:
> Hello again,
>
> The problem is not tracking WHAT changed; this can be done, as we have
> discussed in this thread.  The problem is how to replicate the
> necessary commands that will alter a mirror database to reflect what
> has been changed, sequentially and in the order that it occurred.
>
> If you can find my first question (I can repost), you will see that
> this problem has arisen due to a lack of proper design.  This is a
> requirement that has just come into the picture and we have to find a
> solution for it.
>
> My initial question was intended to figure out a way to track these
> changes AND create the necessary INSERT, DELETE and UPDATE statements
> for each of the changes that occur in the entire database.  I wanted
> to avoid having to create an audit table for each of the tables in the
> database.  At the moment we are counting 82, which would mean another
> 82 tables, along with 82 triggers and so on.

I'm having a mad scientist moment.  I bet it would be possible to hack
pgpool to do this.  Just have it shoot all the queries that come in to it
at the normal database, AND at a text file or something like that.

Not sure that's any better than using pgsql logging to do the same thing,
but it certainly doesn't seem like it would be that hard to do.
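On the trigger side, 82 audit tables are not strictly necessary: one shared trigger function can log every table into a single ordered audit table. This is a sketch, not the thread's solution, and it assumes a much newer PostgreSQL (9.3+ for `row_to_json()`; the 8.x versions current at the time had no easy way to serialize a whole row generically). All names are illustrative:

```sql
-- One audit table for the whole database; logged_at gives replay order.
CREATE TABLE audit_log (
    logged_at  timestamptz NOT NULL DEFAULT now(),
    table_name text        NOT NULL,
    operation  text        NOT NULL,   -- 'INSERT', 'UPDATE', or 'DELETE'
    old_row    json,
    new_row    json
);

-- One trigger function shared by every audited table.  TG_TABLE_NAME
-- and TG_OP tell it which table and statement fired it.
CREATE OR REPLACE FUNCTION audit_any_table() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (table_name, operation, old_row, new_row)
    VALUES (TG_TABLE_NAME, TG_OP,
            CASE WHEN TG_OP IN ('UPDATE', 'DELETE')
                 THEN row_to_json(OLD) END,
            CASE WHEN TG_OP IN ('INSERT', 'UPDATE')
                 THEN row_to_json(NEW) END);
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

-- Still 82 CREATE TRIGGER statements, but no per-table code, e.g.:
CREATE TRIGGER clients_audit AFTER INSERT OR UPDATE OR DELETE
    ON clients FOR EACH ROW EXECUTE PROCEDURE audit_any_table();
```

Replaying the log against the mirror then means walking `audit_log` in `logged_at` order and reconstructing each statement from the stored rows.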
