[SQL] New operators

2000-09-15 Thread Kyle

I have a front end that builds queries based on a user selection of:
  Match type (=, <, >, ~, ~~, etc)
  Ignore/honor upper/lower case
  Negate the comparison

I ended up putting the following operators in the backend so the operators
all had parallel case/no-case options.  This made the front-end code much
simpler.

Would it be helpful to others to make these built-in operators?


-- Case-insensitive operators (for consistency with the like operators)
create function likenocase(text,text) returns boolean as '
    begin return upper($1) like upper($2); end;
' language 'plpgsql' with (iscachable);
create operator ~~* (leftarg = text, rightarg = text, procedure = likenocase, negator = !~~*);

create function nlikenocase(text,text) returns boolean as '
    begin return upper($1) not like upper($2); end;
' language 'plpgsql' with (iscachable);
create operator !~~* (leftarg = text, rightarg = text, procedure = nlikenocase, negator = ~~*);

create function eqnocase(text,text) returns boolean as '
    begin return upper($1) = upper($2); end;
' language 'plpgsql' with (iscachable);
create operator =* (leftarg = text, rightarg = text, procedure = eqnocase, negator = !=*);

create function neqnocase(text,text) returns boolean as '
    begin return upper($1) != upper($2); end;
' language 'plpgsql' with (iscachable);
create operator !=* (leftarg = text, rightarg = text, procedure = neqnocase, negator = =*);
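
For anyone trying these out, usage would look something like this (a
hypothetical sketch; the "customers" table and values are invented, and the
operators above must be installed first):

```sql
-- Hypothetical usage of the operators defined above:
select * from customers where lastname =* 'Smith';      -- matches 'SMITH', 'smith', ...
select * from customers where lastname ~~* '%bate%';    -- case-insensitive LIKE
select * from customers where lastname !~~* 'a%';       -- negated, case-insensitive
```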



begin:vcard 
n:Bateman;Kyle
x-mozilla-html:FALSE
org:Action Target Inc.
adr:;;
version:2.1
email;internet:[EMAIL PROTECTED]
title:President
x-mozilla-cpt:;0
fn:Kyle Bateman
end:vcard



[SQL] Bug or feature

2000-11-20 Thread Kyle


Here's an interesting test of referential integrity.  I'm not sure if this
is working the way it should or if it is a bug.  I'm trying to update the
primary key in records that are linked together from two different tables.
My initial assumption was that because of the cascade, I could update the
primary key only in the gl_hdr table and it would cascade to the gl_items
table.  I have two separate updates of gl_items shown below.  One updates
the key in gl_items explicitly, the other waits and lets the cascade do it.
Only the first one works (try commenting one in/out at a time).  Unless I
update the glid explicitly in gl_items, I get an RI violation when it tries
to update the gl_hdr record.
 
--Test RI in the general ledger
drop table gl_hdr;
drop table gl_items;
create table gl_hdr (
    glid    int4,
    hstat   varchar(1),
    constraint gl_hdr_pk_glid primary key (glid)
);
create table gl_items (
    glid    int4,
    inum    int4,
    istat   varchar(1),
    primary key (glid, inum),
    constraint gl_items_fk_glid
        foreign key (glid) references gl_hdr
        on update cascade
        deferrable initially deferred
);
insert into gl_hdr (glid,hstat) values (1,'w');
insert into gl_items (glid,inum,istat) values (1,1,'w');
insert into gl_items (glid,inum,istat) values (1,2,'w');
select * from gl_hdr h, gl_items i where h.glid = i.glid;
begin;
--This one works:
--  update gl_items set glid = 1000, istat = 'c' where glid = 1;
--This one doesn't:
    update gl_items set istat = 'c' where glid = 1;

    update gl_hdr set glid = 1000, hstat = 'c' where glid = 1;
end;
select * from gl_hdr h, gl_items i where h.glid = i.glid;
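
For reference, the variant that does succeed (taken from the commented-out
line above) updates the child key explicitly in the same transaction, so the
deferred constraint finds matching rows when it is checked at commit:

```sql
-- The working ordering: change the key in both tables before commit.
begin;
    update gl_items set glid = 1000, istat = 'c' where glid = 1;
    update gl_hdr   set glid = 1000, hstat = 'c' where glid = 1;
end;
```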
 
 




[SQL] Problem with subquery joined to a view

2000-11-21 Thread Kyle


I assume this is a bug of some kind.  Here's a neat little example
to demonstrate it.  If it helps make 7.1 better, great.
If I've done something wrong, I'd like to figure that out too if anyone
can help.
This forms a bunch of hypothetical payroll entries and then makes a
view which aggregates them by some arbitrary time period.  (I've used
a function which pulls out the month to group by.)  Then I do a query
on the view with a subquery as one of the fields that pulls out only a
subset of the entries in the group.
The third "select" uses a function to create the subquery.  This
works (and is my current work-around).
The fourth "select" uses a regular subquery.  It gives the error:
psql:datbug.sql:44: ERROR:  Sub-SELECT uses un-GROUPed attribute
pay_req.wdate from outer query
 
drop table pay_req;
drop view pay_req_v;
drop function sumr(int4,text,text);
set DateStyle to 'ISO';
create table pay_req (
    empl_id int4,
    wdate   date,
    type    varchar,
    hours   float8 not null check (hours >= 0),
    primary key (empl_id, wdate)
);
create view pay_req_v as
    select empl_id, substr(text(wdate),6,2) as month, sum(hours) as hours
    from pay_req group by 1,2;
create function sumr(int4,text,text) returns float8 as '
    select coalesce(sum(hours),0) from pay_req
    where empl_id = $1 and type = $2 and substr(text(wdate),6,2) = $3;
' LANGUAGE 'sql';
 
insert into pay_req (empl_id,wdate,type,hours) values (1000,'2000-Jan-01','r',4);
insert into pay_req (empl_id,wdate,type,hours) values (1000,'2000-Jan-02','r',5);
insert into pay_req (empl_id,wdate,type,hours) values (1000,'2000-Jan-03','o',6);
insert into pay_req (empl_id,wdate,type,hours) values (1001,'2000-Jan-01','r',2);
insert into pay_req (empl_id,wdate,type,hours) values (1001,'2000-Jan-02','r',3);
insert into pay_req (empl_id,wdate,type,hours) values (1001,'2000-Jan-03','o',4);
select * from pay_req order by empl_id,wdate,type,hours;
select * from pay_req_v order by empl_id,month,hours;
select v.empl_id, month, hours,
    sumr(v.empl_id,'r',v.month) as "type-r"
    from pay_req_v v where
    v.empl_id = 1000 and
    v.month = '01'
;
select v.empl_id, month, hours,
    (select coalesce(sum(r.hours),0) from pay_req r
     where r.type = 'r' and r.empl_id = v.empl_id
     and substr(text(r.wdate),6,2) = v.month) as "type-r"
    from pay_req_v v where
    v.empl_id = 1000 and
    v.month = '01'
;
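
One possible workaround besides the function trick (an untested sketch,
assuming 7.1's new outer-join support) is to compute the type-'r' sums in a
derived table and join it to the view, avoiding the correlated subquery
entirely:

```sql
select v.empl_id, v.month, v.hours, coalesce(r.hours, 0) as "type-r"
    from pay_req_v v
    left join (select empl_id, substr(text(wdate),6,2) as month, sum(hours) as hours
               from pay_req where type = 'r' group by 1,2) r
        on r.empl_id = v.empl_id and r.month = v.month
    where v.empl_id = 1000 and v.month = '01';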
 
 




[SQL] Re: "drop constraint trigger" on PostgreSQL

2000-12-08 Thread Kyle



Bryan Field-Elliot wrote:

Searching the archives, it seems you once tried to find out how to
"drop constraint trigger" on PostgreSQL; did you ever figure it out?
Thank you, Bryan

I wrote the following function and installed it in my database. 
I can then call it with a select (yuk) to drop a constraint trigger:
-- Drop all constraint triggers with a given constraint name
-- calling sequence: drop_contrig(constrname)
create function drop_contrig(text) returns text as '
    set d(tgname) {}
    spi_exec -array d "select c.relname,t.tgname from pg_class c, pg_trigger t
            where c.oid = t.tgrelid and tgconstrname = \'$1\'" {
        spi_exec "drop trigger \\"$d(tgname)\\" on $d(relname)"
    }
    if {$d(tgname) == {}} {return "No constraint trigger $1 found"}
    return "Drop trigger $d(tgname) on $d(relname)"
' LANGUAGE 'pltcl';
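
Calling it looks like this (the constraint name here is invented for
illustration):

```sql
-- Drop every constraint trigger backing the named constraint:
select drop_contrig('gl_items_fk_glid');
```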
 
 






[SQL] Strange slow behavior in backend

2000-12-13 Thread Kyle


I'm using 7.0.1 with a TCL frontend.
I have a schema that splits large files into tuple-sized bites and stores
them in a table.  This was done before TOAST in order to store large
files.
I have a backend TCL function that re-assembles the file like this:
-- Fetch the specified document data, reassembling the bits back together
-- in the right order.
-- Calling sequence: cont_doc_fetch(crt_by,crt_date,ctype)
create function cont_doc_fetch(int4,timestamp,varchar) returns text as '
    set odata {}
    spi_exec -array d "select data from cont_doc_data
            where crt_by = \'$1\' and crt_date = \'$2\' and ctype = \'[quote $3]\'
            order by seq" {
        append odata $d(data)
    }
    return $odata
' LANGUAGE 'pltcl';
This worked great until I put a real big file in (about 5M).  Then,
when I tried to fetch the file, it seemed really slow (about 60 seconds). 
I tried reassembling the file in the frontend instead and my time dropped
to about 6 seconds using this TCL fragment (mpg::qlist is an interface
to pg_exec that returns a list of tuples):
    set data {}
    set tuple_list [mpg::qlist "select data from $ca(prefix)_doc_data
            where crt_by = $crt_by and crt_date = '$crt_date'
            and ctype = '$ctype' order by seq"]
    foreach rec $tuple_list {
        append data [lindex $rec 0]
    }
The only difference I can identify is whether the re-assembly TCL code
is running as a procedural language (backend) or in the frontend. 
Anyone have any idea why the difference is so dramatic?
Jan:
Is this the difference between old TCL and new TCL (with multi-port
objects)?  Or is there something else about the way the backend handles
large chunks of data that would mark the difference?





[SQL] Rule not invoked in 7.1

2001-01-24 Thread Kyle


I have a number of views that seemed to work fine in 7.0.3.  When
I try to do an update to these views under 7.1, I get the following
error:
ERROR:  Cannot update a view without an appropriate rule.
For example, there's a view that looks like this:
create view pay_req_v_prl as
    select empl_id,wdate,seq,hours,hot,proj,entby,paytyp,status,poinum,added,rate,otrate,appby,gross,rgross,cost,ttype,expnum,oid as _oid
    from pay_req;
create rule pay_req_v_prl_update as on update to pay_req_v_prl
    where old.status = 'appr' do instead
    update pay_req set status = new.status, gross = new.gross, cost = new.cost,
        ttype = new.ttype, expnum = new.expnum, rgross = new.rgross, hot = new.hot
    where empl_id = old.empl_id and wdate = old.wdate and seq = old.seq;
The SQL looks like this:
update pay_req_v_prl set gross = 90.09 where empl_id = 1010 and
wdate = '2001-01-08' and seq = 1;
The record it should update does seem to have status = 'appr' and it
updates fine on my 7.0.3 box.
Any ideas?





Re: [SQL] Rule not invoked in 7.1

2001-01-25 Thread Kyle


Jan Wieck wrote:
Tom Lane wrote:
> Kyle <[EMAIL PROTECTED]> writes:
> > ERROR:  Cannot update a view without an appropriate rule.
>
> 7.1 insists that you provide an *unconditional* DO INSTEAD rule
> for a view.  What do you think was happening on your old database
> when the "where old.status = 'appr'" clause wasn't satisfied?
> Nothing good I'm afraid.
    No harm in the UPDATE case, because so far there aren't any
    tuples in the view that could be affected by the still
    executed original query.  But in an INSERT case, it would let
    tuples through into the view's heap file.
> If you really do need conditional rules, you can satisfy the check
> by writing one unconditional DO INSTEAD NOTHING rule and then one
> or more conditional non-INSTEAD rules.  But you should think carefully
> about what you expect to happen when you use a conditional rule.
 

I'm using the view as a way of restricting a single class of users
to only update tuples that have a certain status in the table.  Isn't
this essentially what a "dynamic view" is?
If someone happens to know the primary key of a record they should not
be able to access, and they try to update it, I would like the backend
to ignore the query (or better yet, raise an exception but I haven't figured
out how to do that).  If the status is correct, the update should
proceed.
I've inserted the dummy do nothing rule as follows:
create view pay_req_v_prl as
    select empl_id,wdate,seq,hours,hot,proj,entby,paytyp,status,poinum,added,rate,otrate,appby,gross,rgross,cost,ttype,expnum,oid as _oid
    from pay_req;
create rule pay_req_v_prl_upnull as on update to pay_req_v_prl do instead nothing;
create rule pay_req_v_prl_update as on update to pay_req_v_prl
    where old.status = 'appr' do instead
    update pay_req set status = new.status, gross = new.gross, cost = new.cost,
        ttype = new.ttype, expnum = new.expnum, rgross = new.rgross, hot = new.hot
    where empl_id = old.empl_id and wdate = old.wdate and seq = old.seq;
This seems to work now when I do:
psql ati -c "update pay_req_v_prl set gross = 90.09 where empl_id = 1010 and wdate = '2001-01-08' and seq = 1;"
You see any problems with this method?
BTW, the update still returns UPDATE 0 from psql even though a record
was updated.  I've never quite figured out why views with rules do
this.
I've also done some testing on 7.1 for that nasty thing in 7.0 where
you had to give select,update privs to a table referenced by a foreign
key.  So far, looks good.  I was able to reference a table that
the user didn't have privs to at all.  I think that is the desired
behavior.
Good work guys!  7.1 is looking good.
 
 




Re: [SQL] Rule not invoked in 7.1

2001-01-26 Thread Kyle

Tom Lane wrote:

> Kyle <[EMAIL PROTECTED]> writes:
> > If someone happens to know the primary key of a record they should not be
> > able to access, and they try to update it, I would like the backend to
> > ignore the query (or better yet, raise an exception but I haven't figured
> > out how to do that).  If the status is correct, the update should proceed.
>
> This might be better done with a trigger than a rule.  For one thing,
> a trigger can easily raise an exception.  MHO is that rules are good
> when you need to update multiple rows in other tables when certain
> things happen.  If you just want to validate or twiddle an individual
> tuple as it's inserted/updated, a trigger is a good bet.
>

The underlying table contains payroll data.  Each employee should be able to
enter payroll requests, but there is a very strict set of rules about which
rows he should be able to access, how he can access them, and when.

For example, an employee can enter new time records, but once the records
have been approved or paid, he can no longer modify them.

I have set up several views that allow access to the rows depending on their
status.  For example, employees only have access to "working records."  Once
they are satisfied with the data they have entered, they change the status to
"open" at which point they can no longer edit it.

Supervisors then have access to the record and can approve it, changing its
status to "approved" and so on.

The problem I had with trying to use a trigger was that the trigger fires on
the underlying table, regardless of which view the user comes in on.  (At
least it seemed that way when I tested it.)  I need to apply a different set
of rules based on which view the user is coming in on--not simply who the
user is.

Is there a way to configure a trigger to fire on a view rather than the
underlying table?  I tried linking a trigger to a view but it seemed to not
get called at all.  I assumed this was because the rewriting rules were
directing the query away from the view class and so the trigger was never
getting called at all.






Re: [SQL] Rule not invoked in 7.1

2001-01-26 Thread Kyle


Tom Lane wrote:
Kyle <[EMAIL PROTECTED]> writes:
> If someone happens to know the primary key of a record they should not be
> able to access, and they try to update it, I would like the backend to
> ignore the query (or better yet, raise an exception but I haven't figured
> out how to do that).  If the status is correct, the update should proceed.
This might be better done with a trigger than a rule.  For one thing,
a trigger can easily raise an exception.  MHO is that rules are good
when you need to update multiple rows in other tables when certain
things happen.  If you just want to validate or twiddle an individual
tuple as it's inserted/updated, a trigger is a good bet.
 
I have another interesting use of this same concept you may be interested
in (or slightly nauseous, as the case may be):
The underlying database (empl) contains all employees in the organization.
The view empl_v_sup calls a recursive function to determine if a given
employee works for the current user (either directly, or anywhere under him
in the company hierarchy).  The view only includes employees that work under
that user.  There is also an exception for users who have certain types of
privileges who get to see the whole company.
This dynamic view is very cool as it allows different people to see
different data in the same view depending on who they are, and how the
hierarchical data is arranged in the employee database.
-- Determine if an employee has another employee as his supervisor.
-- An employee is, by definition, not his own supervisor
-- Returns true or false
-- calling sequence: _empl_ancest(employee,ancestor,level)
create function _empl_ancest(int4,int4,int4) returns boolean as '
    declare
        trec record;
    begin
        if $3 > 15 then
            raise exception \'Supervisor loop found on employee %\', $1;
        end if;
-- a person can not be his own supervisor
-- also if null or 0, we reached top of the ladder so return false
        if $1 = $2 or $1 is null or $1 = 0 then return false; end if;

-- get the employee record
        select * into trec from empl_pub where empl_id = $1;
        if not found then
            raise exception \'Record not found for employee %\', $1;
        end if;
-- if he is his own supervisor, we have probably reached the top so false
        if trec.superv = $1 then return false; end if;
-- if his supervisor is the ancestor, return true
        if trec.superv = $2 then return true; end if;
-- else check the parent recursively
        return _empl_ancest(trec.superv, $2, $3+1);
    end;' language 'plpgsql';
-- Determine if an employee has another employee as his ancestor.
-- This adds a level parm to prevent infinite recursion.
-- calling sequence: empl_ancest(employee,ancestor)
create function empl_ancest(int4,int4) returns boolean as '
    select _empl_ancest($1,$2,0);
' language 'sql';
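
A quick check of the function might look like this (the employee ids are
hypothetical; getpguid() is the author's helper returning the current user's
employee id):

```sql
-- Hypothetical: is employee 1000 anywhere above employee 1010?
select empl_ancest(1010, 1000);
-- Typical use inside a view qualification:
--   ... where empl_ancest(empl_id, getpguid());
```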
--View with limited privileges for supervisors to see their own people
create view empl_v_sup as select *, oid as _oid from empl where
    exists (select * from priv where empl_id = getpguid()
        and ((priv = 'emplim' and alevel = 'super') or (priv = 'payroll'))) or
    empl_ancest(empl_id, getpguid());
--Only the emplim-super can insert records
create rule empl_v_sup_innull as on insert to empl_v_sup do instead nothing;
create rule empl_v_sup_insert as on insert to empl_v_sup where
    (select count(*) from priv where priv = 'emplim' and alevel = 'super'
        and empl_id = getpguid()) > 0
    do instead
    insert into empl (empl_id,pertitle,surname,givnames,prefname,jobtitle,addr,city,state,zip,country,phone,workph,mobile,email,ssn,bday,hiredate,termdate,lrevdate,nrevdate,paytyp,empltyp,superv,proxy,status,mstat,payrate,allow,wccode,eic,cmt)
    values (new.empl_id,new.pertitle,new.surname,new.givnames,new.prefname,new.jobtitle,new.addr,new.city,new.state,new.zip,new.country,new.phone,new.workph,new.mobile,new.email,new.ssn,new.bday,new.hiredate,new.termdate,new.lrevdate,new.nrevdate,new.paytyp,new.empltyp,new.superv,new.proxy,new.status,new.mstat,new.payrate,new.allow,new.wccode,new.eic,new.cmt);
--Emplim-super can update any field
create rule empl_v_sup_upnull as on update to empl_v_sup do instead nothing;
create rule empl_v_sup_update as on update to empl_v_sup where
    (select count(*) from priv where priv = 'emplim' and alevel = 'super'
        and empl_id = getpguid()) > 0
    do instead
    update empl set empl_id = new.empl_id, pertitle = new.pertitle,
        surname = new.surname, givnames = new.givnames, prefname = new.prefname,
        jobtitle = new.jobtitle, addr = new.addr, city = new.city,
        state = new.state, zip = new.zip, country = new.country,
        phone = new.phone, workph = new.workph, mobile = new.mobile,
        email = new.email, ssn = new.ssn, bday = new.bday,
        hiredate = new.hiredate, termdate = new.termdate, lrevdate = new.lrevda

Re: [SQL] Rule not invoked in 7.1

2001-01-26 Thread Kyle

Jan Wieck wrote:

> Tom Lane wrote:
>
> > This might be better done with a trigger than a rule.  For one thing,
> > a trigger can easily raise an exception.  MHO is that rules are good
> > when you need to update multiple rows in other tables when certain
> > things happen.  If you just want to validate or twiddle an individual
> > tuple as it's inserted/updated, a trigger is a good bet.
>
> But the trigger approach requires access permissions to the
> base table in the first place, and exactly that's what Kyle
> wants to restrict.

That's right.

> Kyle, I doubt if you need the condition in the update rule at
> all.  As far as I understood, your view  restricts  what  the
> user can see from the base table. This restricted SELECT rule
> is applied to UPDATE events as well, so the UPDATE can  never
> affect rows which are invisible through the view.

This hadn't occurred to me but makes sense now that you say it.  Taking that
into consideration will make my job a bit simpler.

The only complication is that there is a class of records which the user
should be able to view, but not modify.  For example, the employee can create
and modify working records as long as the only modification to their status
is to move them on to "open" status (creating an "approved" record would be a
bad idea.)

But the user should be able to view all their records (working, open,
approved, and even paid).

Hence, the restrictions on update are more stringent than those on select.






[SQL] Debug messages in beta5

2001-02-27 Thread Kyle
minum = l.minum
    and m.bminum = l.bminum) as lquant,
    (select p.base||'('||p.parm||')' from prd_part_v_base p
        where p.pnum = m.pnum) as pname,
    (select p.descr from prd_part_v_base p where p.pnum = m.pnum) as descr
    from mtr_reg_v_or m, ord_hdr o
    where m.status = 'open'
    and m.ropnum = o.ordnum
    and o.type = 'ord'
    and m.rquant >
        (select coalesce(sum(l.quant),0) from pak_lnk l
            where m.ttype = l.ttype
            and m.ropnum = l.ropnum
            and m.inum = l.inum
            and m.minum = l.minum
            and m.bminum = l.bminum);
 




[SQL] Re: Daily Digest V1 #282

2001-02-28 Thread Kyle

>
> create trigger run_changed_tr after update on runs for each row
>execute procedure run_changed();
>
> BUT, when I update the table, I get:
>ERROR:  parser: parse error at or near "$1"
>
> It looks like the *name* (or its alias here: $1.run_name), not the *value*
> of the variable nm, is passed to the notify command.  Since notify only
> takes a name, not a string, I don't see how to proceed.
>
> Is there some way in plsql to construct a string and have it executed in sql?
>
> disappointed in plsql,
> George
>

I don't know much about notify, but I know you can build arbitrary SQL
strings in PL/TCL.  Have you tried that?

I use a combination of plpgsql and pltcl in my implementation because each
one has its strengths/weaknesses.






[SQL] A query that doesn't work on 7.1

2001-03-07 Thread Kyle


Here's a query that doesn't work on 7.1.  Is this a bug or am I doing
something wrong?
The last two selects yield:
ERROR:  Sub-SELECT uses un-GROUPed attribute m1.ropnum from outer
query
Basically, everything works until I apply the avg() function and try
to aggregate the results.
 
drop table mtr;
create table mtr (
    ttype   varchar(2),   --record type
    ropnum  int4,         --order number
    minum   int4,         --item number
    pnum    varchar(18),
    tdate   date,
    primary key (ttype,ropnum,minum)
);
insert into mtr (ttype,ropnum,minum,pnum,tdate) values ('po',1000,1,2000,'2001-Jan-30');
insert into mtr (ttype,ropnum,minum,pnum,tdate) values ('wm',1000,1,2001,'2001-Jan-10');
insert into mtr (ttype,ropnum,minum,pnum,tdate) values ('wm',1000,2,2002,'2001-Jan-12');
insert into mtr (ttype,ropnum,minum,pnum,tdate) values ('wm',1000,3,2003,'2001-Jan-14');
insert into mtr (ttype,ropnum,minum,pnum,tdate) values ('po',1001,1,2000,'2001-Feb-28');
insert into mtr (ttype,ropnum,minum,pnum,tdate) values ('wm',1001,1,2011,'2001-Feb-01');
insert into mtr (ttype,ropnum,minum,pnum,tdate) values ('wm',1001,2,2012,'2001-Feb-02');
insert into mtr (ttype,ropnum,minum,pnum,tdate) values ('wm',1001,3,2013,'2001-Feb-03');
--The finish date is represented by the tdate of a po type record
--The start date is found by the earliest of the wm type records
--with the same ropnum,minum fields
--This lists the start and finish dates
select
    (select min(tdate) from mtr where ttype = 'wm' and ropnum = m1.ropnum) as start,
    m1.tdate as finish
    from mtr m1 where
    m1.ttype = 'po' and
    m1.pnum = '2000'
;
--Now I try to find the average number of days between start and finish for the part
select
    avg(date_part('day',(start::datetime - finish::datetime)::timespan))
from
    (select
        (select min(tdate) from mtr where ttype = 'wm' and ropnum = m1.ropnum) as start,
        m1.tdate::datetime as finish
        from mtr m1 where
        m1.ttype = 'po' and
        m1.pnum = '2000'
    ) as dates
;
--Here I try a different method
select
    avg(date_part('day',((select min(tdate) from mtr where ttype = 'wm'
        and ropnum = m1.ropnum)::datetime - m1.tdate::datetime)::timespan))
    from mtr m1 where
    m1.ttype = 'po' and
    m1.pnum = '2000'
;
 
 




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] A query that doesn't work on 7.1

2001-03-08 Thread Kyle


Tom Lane wrote:
> Here's another twist though.  Is this a bug too or is this just beyond our reach?
> psql:lead1.sql:64: ERROR:  Unable to select an aggregate function avg(date)
It's just that we don't have any avg() function for date --- nor for
timestamp, which is a little more surprising.
 
FYI:
I got by with kind of a pseudo average (mean, I guess) for now, implemented as:
min(date) + (max(date) - min(date)) / 2
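
In SQL that midpoint works out to something like the following sketch
(column and table names borrowed from the earlier example; in PostgreSQL,
subtracting two dates yields an integer number of days, so the division and
re-addition type-check):

```sql
-- date - date gives integer days, so min + (max - min)/2 is the midpoint date.
select min(tdate) + (max(tdate) - min(tdate)) / 2 as mid_date from mtr;
```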
 
 
You could probably gin up a usable avg(timestamp) using the avg(float8)
routines, since a timestamp is really just a double under the hood.
 
When you say "gin up" are you talking about C, PL/XXX, or just casts?
 
 
BTW, here's another question:
Here's a view of a union.  The two selects are fast when executed
individually (and explain confirms that they use indexes).  When I
query the view, though, it is slow (explain says the scans are sequential).
Is this expected or a bug?
-- Simulate a table of lead times
create view vend_v_lead as select
    p.pnum as pnum,'vend' as type,ldtime::float8
as lead,0 as aging from vend_price v, prd_part p where v.pnum = p.pnum
    union select
    p.pnum,'hist',date_part('day',(m.tdate::datetime-m.mtr_date::datetime)::timespan),0
from mtr_reg m, prd_part p where m.pnum = p.pnum and m.ttype = 'po' and
m.status = 'clsd'
;
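
If the planner won't push a qualification down into the union arms, one
blunt workaround (an untested sketch, assuming you want a single part
number) is to restrict each arm by hand instead of filtering the view:

```sql
-- Hypothetical: lead times for one part, restricting inside each union arm.
select p.pnum, 'vend' as type, ldtime::float8 as lead, 0 as aging
    from vend_price v, prd_part p
    where v.pnum = p.pnum and p.pnum = '2000'
union select p.pnum, 'hist',
    date_part('day',(m.tdate::datetime - m.mtr_date::datetime)::timespan), 0
    from mtr_reg m, prd_part p
    where m.pnum = p.pnum and m.ttype = 'po' and m.status = 'clsd'
    and p.pnum = '2000';
```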
 
 







[SQL] Possible 7.1RC1 bug

2001-03-29 Thread Kyle


Here's a fragment of code that works on 7.0.3 but gives a type mismatch
on 7.1:
Under 7.1RC1, func1 fails and func2 builds OK.  The error is:
ERROR:  return type mismatch in function: declared to return int4,
returns numeric
It appears that sum(int4) returns type numeric.  Shouldn't it return
int4?
 
create table ints (
    inte    int4
);
create function func1() returns int4 as '
    select coalesce(sum(inte),0) from ints;
' language 'sql';
create function func2() returns int4 as '
    select coalesce(sum(inte)::int4,0) from ints;
' language 'sql';
 
 







[SQL] Query broken under 7.1RC2

2001-04-04 Thread Kyle


This query worked under 7.0.3 but yields an error under 7.1RC1 or RC2.
The error message is:
psql:outbug.sql:43: ERROR:  Sub-SELECT uses un-GROUPed attribute
h.ordnum from outer query
Tom, you patched util/clauses.c (near line 540) a while back to prevent
the same error message on a different query.  This may be related.
The SQL is a little strange because the subquery tries to reference
individual records from the outer query and then sum them.
The more I look at it, I wonder if it is not just bad SQL. But the
last one I found like this turned out to reveal a bug, so here it is:

drop table ord_hdr;
drop table cinv_hdr;
drop table cinv_items;
drop table inc_link;
create table ord_hdr (
    ordnum  int4,
    proj   
int4
);
 
create table cinv_hdr (
    ordnum  int4,
    hinum   int4,
    status  varchar
);
create table cinv_items (
    ordnum  int4,
    hinum   int4,
    quant   int4,
    unit_price  numeric(12,2)
);
create table inc_link (
    ordnum  int4,
    hinum   int4,
    amount  numeric(12,2)
);
select sum(i.quant*i.unit_price::float8),
   (select coalesce(sum(amount),0) from inc_link where
ordnum = h.ordnum and hinum = h.hinum)
    from cinv_hdr h, cinv_items i, ord_hdr o where
    o.ordnum = h.ordnum and
    h.ordnum = i.ordnum and
    h.hinum = i.hinum and
    o.proj = 1051 and
    h.status = 'open'
;







[SQL] Project Development

2001-04-05 Thread Kyle

Josh Berkus writes:

>That does bring up a related question:  when are we gonna get DROP
>COLUMN capability?  Currently my tables are littered with unused columns
>because I can't remove them without blowing my referential integrity and
>views to heck.

Josh:

I've done some work on a solution to allow you to drop and rebuild your
db from scratch relatively painlessly.  It enables you to munge portions
of the database, drop columns, undermine RI, clobber views, etc. and
then put the whole thing back together clean, fresh and perfect.
Pg_dump is focused on doing a drop/restore exactly.  What is needed is a
way to drop/modify/restore.  That's kind of what my tool enables you to
do (without the fear that you're going to lose some subtle detail of the
schema in the process).

It's a management tool that sits in between Postgres and your
application.  You store chunks of SQL in a relational format that
describe your schema exactly.  Then you can call those bits of SQL on
demand to destroy or build parts of the schema selectively.  Because it's
relational, you can store dependencies so the program knows which parts
of the DB to rebuild when you clobber something.

Right now, it is a handful of shell scripts and files, but we're using
it with good success at ATI.  I have created an outline of how to do it
with a tcl/tk GUI and actually storing the SQL chunks in a Postgres
database.

I want to do some more development on it and then do a GPL release of
it.  Problem is, I don't seem to find the time to get it all done.  Are
you interested in collaborating on its development?  Or do you know
someone who is?



BTW:
In addition to the management tool, I have also developed a complete
library that allows one to build an ERP around Postgres very quickly.
For example, a typical contact manager can be written in about 300-500
lines of code.  A much more powerful version can be written in about
twice that much code.

Using the library, we have built a major chunk of our ERP which
includes: product design and specification, inventory management, order
entry, contact management, employee records, AP, payroll, purchasing,
shipping, etc.

Given the right conditions, I'd be willing to GPL this too.  Anyone
interested in working on it?







[SQL] Update

2001-04-07 Thread Kyle

OK, I figured it out.  It wasn't the sum(int4) thing...

I have a query that looks like this:

select pnum from part where func1(pnum) and func2(pnum);

Func1 takes less time to execute than func2.  I was using func1 to
"narrow the field" of records so the query would not take so long to
execute.  After upgrading to 7.1 the query got real slow.  After
changing the query to:

select pnum from part where func2(pnum) and func1(pnum);

The query went back to its normal time.

It appears that the first function would get evaluated first under 7.0.3
but the last function gets evaluated first under 7.1.  Is that accurate?

Is there a way to control which functions are given precedence?

Kyle







[SQL] 7.1 grant/revoke speed

2001-04-07 Thread Kyle

Any reason why I should expect grants and/or revokes to be slower under
7.1RC3?

I have a script that grants all our privileges and it takes about 5 to
10 times longer to run than it did under 7.0.3.







[SQL] Re: Update

2001-04-13 Thread Kyle

Tom Lane wrote:

> Kyle <[EMAIL PROTECTED]> writes:
> > It appears that the first function would get evaluated first under 7.0.3
> > but the last function gets evaluated first under 7.1.  Is that accurate?
>
> Actually, I was under the impression that (all else being equal) WHERE
> clauses would get evaluated right-to-left in 7.0.* as well.  I was
> meaning to figure out where the clause order reversal is happening and
> undo it, but didn't get around to it for 7.1.
>
> > Is there a way to control which functions are given precedence?
>
> Nope, not at the moment.  The code has remnants of a notion of cost of
> evaluation for functions, but it's not being used for anything ...
>

Might be interesting to have something like:

create function ... as ... with cost=x;

It would also be cool to supply a default set of parameters to the function.
Then "vacuum analyze" or some such thing could execute the functions, time
them, and store cost data internally...

create function ... as ... with default(3,7,4);
or
create function myfunc (int4 3, float8 7, numeric 4) ...;

BTW, great job on 7.1!  Kudos to all the developers who are working so hard
to make it happen.
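[Archive note: later PostgreSQL releases (8.3 and up) added a per-function
cost attribute much like the one suggested here.  A minimal sketch, with
hypothetical function names:

```sql
-- COST tells the planner how expensive a function is (in units of
-- cpu_operator_cost), so cheap filters tend to be evaluated first:
create function cheap_check(int4) returns boolean
    as 'select $1 > 0' language sql immutable cost 1;
create function expensive_check(int4) returns boolean
    as 'select $1 % 97 = 0' language sql immutable cost 10000;
```

With both in a WHERE clause, the planner prefers to run cheap_check()
before expensive_check(), which is the behavior asked about above.]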







[SQL] RI permission problem

2001-04-25 Thread Kyle


Here's an interesting security problem:  Suppose I create two tables:
create table a (
    pk      int4 primary key,
    aval    text
);
create table b (
    fk      int4 references a (pk) on update cascade,
    bval    text
);
Then I grant a user update on table a but not on table b.  He should
be able to modify all the values in a freely, including the primary key
(in practice it is an invoice number that often gets entered incorrectly
and must be corrected).  But the user should not have update privilege
on table b (most particularly, bval is privileged).  Still, I would like
the user to be able to update the primary key and hence cascade the
update to table b.  Is there a way to get this to work without granting
update on table b?
Tom, I understand someone was working on setuid functions.  Is
that a long way off?  It would be nifty if triggers could execute
with the privileges of the user that created them rather than the user
who is executing them.  This would help greatly in closing some security
holes like this we are dealing with.
Kyle Bateman
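[Archive note: later PostgreSQL releases added SECURITY DEFINER functions,
which approximate the "setuid" behavior requested here.  A hedged sketch,
using a hypothetical helper and the tables from the example above:

```sql
-- A SECURITY DEFINER function runs with the privileges of its owner,
-- not the caller, so an unprivileged user can perform this one
-- controlled update without holding update rights on table b:
create function fix_invoice_num(int4, int4) returns void as '
begin
    update a set pk = $2 where pk = $1;  -- RI cascades into table b
end;' language plpgsql security definer;
```

Granting EXECUTE on the function to the restricted user then exposes
exactly the one operation, and nothing else.]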
 





Re: [SQL] RI permission problem

2001-04-25 Thread Kyle

Peter Eisentraut wrote:

> Kyle writes:
>
> > Is there a way to get this to work without granting update to table b?
>
> Update to 7.1.]

I'm on 7.1.  Should an RI trigger under 7.1 run as the DBA or as the current
user?

>
>
> > Tom, I understand someone was working on setuid functions.  Is that a
> > long way off?  It would be nifty if triggers could execute with the
> > privileges of the user that created them rather than the user who is
> > executing them.  This would help greatly in closing some security holes
> > like this we are dealing with.






Re: [SQL] RI permission problem

2001-05-07 Thread Kyle


Peter Eisentraut wrote:

> Kyle writes:
> > Peter Eisentraut wrote:
> > > Kyle writes:
> > > > Is there a way to get this to work without granting update to
> > > > table b?
> > > Update to 7.1.
> > I'm on 7.1.  Should an RI trigger under 7.1 run as the DBA or as
> > the current user?
>
> Okay, we missed a few cases.  Try the attached patch.
OK, here's another similar one.  Should this work?  (sorry there's
really a little more here than you absolutely need, but it demonstrates
the problem)
drop view atab_v1;
drop view atab_v2;
drop view atab_v3;
drop view atab_v4;
drop table atab;
drop function func_atab ();
drop function func_v1 ();
drop function func_v2 ();
create table atab (
    f1  int4
);
insert into atab (f1) values (1);
insert into atab (f1) values (2);
insert into atab (f1) values (3);
create view atab_v1 as select * from atab;
create view atab_v2 as select * from atab;
create function func_atab () returns numeric as '
    select sum(f1) from atab;
    ' language 'sql';
create function func_v1 () returns numeric as '
    select sum(f1) from atab_v1;
    ' language 'sql';
create function func_v2 () returns numeric as '
    select sum(f1) from atab_v2;
    ' language 'sql';
create view atab_v3 as select *,func_v1() from atab_v2;
create view atab_v4 as select *,func_atab() from atab_v2;
grant select on atab_v2 to kyle;
grant select on atab_v3 to kyle;
grant select on atab_v4 to kyle;
Now as user Kyle, try to select from atab_v3 or atab_v4.  Both
give permission denied because no explicit permission is given to the view/table
underlying the summing function.
Shouldn't the select access to the view trickle down to subordinate
select functions?
Kyle
 





[SQL] Setting session global variables

2001-05-17 Thread Kyle




 
> Is there a way to set a session global variable in PostgreSQL?  The
> only thing I see are examples of setting config variables.

Not sure if session variables are your best solution, but here's how
you implement them:

-- Warning: this defines a global variable in the Tcl interpreter.  It
-- could clash with other Tcl procedures if the same name were used
-- during the same connection.

-- Store a value in a variable.  This is helpful for caching values
-- within a transaction.
-- Calling sequence: store(variable, value)
create function store(text,text) returns text as '
    global store_vars
    return [set store_vars($1) $2]
    ' LANGUAGE 'pltcl';

-- Fetch a value from a variable.
-- Calling sequence: recall(variable)
create function recall(text) returns text as '
    global store_vars
    return [subst $store_vars($1)]
    ' LANGUAGE 'pltcl';
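A usage sketch for the functions above (the variable name and value are
arbitrary examples):

```sql
select store('cust_id', '42');   -- cache a value for this connection
select recall('cust_id');        -- later in the same session
```

For readers of the archive: later PostgreSQL releases also provide
set_config() and current_setting() with custom parameter names, which
cover the same session-variable use case without PL/Tcl.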
 
 







[SQL] Listing Users

2003-03-19 Thread Kyle
What is the SQL command to list users?

CREATE USER, DROP USER, ALTER USER,   USER

I just can't seem to find the command to list them.

-Kyle
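For the archive: the user list lives in the pg_user system view (psql's
\du command shows the same information):

```sql
select usename from pg_user;                  -- all users
select usename from pg_user where usesuper;   -- superusers only
```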



[SQL] Trigger on view

2003-09-17 Thread Kyle
I'm trying to set up a trigger that restricts when and how updates are 
done to the records in a relation.  However, when the administrator 
accesses the relation, I want the restriction to be relaxed.

My approach was to have a view that is accessed by the normal users and 
have the admin access the table directly.  However, it doesn't appear 
that I can actually apply a trigger to a view.

Some of this I can do with rules on the view, but certain restrictions 
are applicable only on a row-by-row basis, so I think I need to do it in 
a BEFORE trigger.

Is there a way I can either:

1. Apply a trigger to a view.  The trigger would actually get registered 
with the underlying table, but would only be invoked if the user had 
entered via the view.

2. Detect inside the trigger function whether the user had accessed the 
table directly or through the view so I could invoke conditional code 
accordingly.

I want to avoid hardcoding user ID's in the trigger, so I'm trying to 
take this approach so it can all be handled simply by granting 
privileges on different views to enable/disable the varying levels of 
restriction.

Any ideas?

Kyle Bateman
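[Archive note: PostgreSQL 9.1 later added INSTEAD OF triggers directly
on views, which addresses exactly this request.  A minimal sketch with
hypothetical names:

```sql
-- The trigger fires only for access through the view, so admins using
-- the base table directly bypass the restriction:
create trigger restrict_upd
    instead of update on normal_user_view
    for each row execute procedure check_and_update();
-- check_and_update() can enforce the row-by-row restrictions and then
-- perform the real update on the underlying table.
```

Privileges granted on the view versus the base table then control who
gets the restricted path, with no user IDs hardcoded in the trigger.]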





[SQL] Getting FK relationships from information_schema

2004-06-07 Thread Kyle
...  | pl_pkey    | NONE | NO ACTION   | NO ACTION
 $1  | empl_pkey  | NONE | NO ACTION   | NO ACTION
 $2  | empl_pkey  | NONE | NO ACTION   | NO ACTION
(5 rows)

I can determine all the primary key fields nicely, and I can tell which 
fields are foreign keys.  The problem is, I can't determine where the 
foreign keys point: the constraint names ($1, $2, etc.) are not unique, 
so I don't know how to join the third query to the fourth.

For example, if I want to know about the foreign keys for empl_doc, I 
can determine from query 3 that there are two foreign key constraints on 
that table and they are called $1 and $2.  From query 4, I should be 
able to see what primary key constraints they point to, except there are 
3 $1's and 2 $2's in that table so I can't really see where they point.  
(Notice that the third and fourth records in the last query are identical!)

Should I be looking somehow else in information_schema to see where 
foreign keys link?  Or am I correct that either:

1. The automatically generated FK constraint_name should be guaranteed 
to be unique; or
2. There should be an additional column in the last query somewhere to 
tell me what table the FK reference is coming from.

Kyle
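One catalog-level workaround, sketched here, is to bypass
information_schema and read pg_constraint, which records both sides of
every foreign key unambiguously regardless of constraint names:

```sql
select conrelid::regclass  as fk_table,
       confrelid::regclass as pk_table,
       conname
from pg_constraint
where contype = 'f';
```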


Re: [SQL] Getting FK relationships from information_schema

2004-06-08 Thread Kyle




Tom Lane wrote:

> Kyle <[EMAIL PROTECTED]> writes:
> > The problem is, the constraint names ($1, $2, etc.) are not unique
> > so I don't know how to join the third query into the fourth.
>
> Hmm, this is messy :-(.  The SQL spec requires constraint names to be
> unique within a schema.  Postgres doesn't require them to be unique
> even within a table.

I think this is only an issue when the user relies on postgres to
choose a constraint name automatically.  Seems like a reasonable
approach would be to have postgres choose a name for the constraint
that happens to be unique in the schema (like tablename_fkey_$1).  Then
if the user wants to make named constraints that all have the same
name, he can (and information_schema will be less useful) or he can
rely on the automatically generated names to be a bit more descriptive
in information_schema.

> Can we get away with adding implementation-specific columns to
> information_schema tables?  If not, what other alternatives are there?

Another possible approach: does the constraint name showing up in
information_schema really have to be the _real_ name?  Or can we just
make a name consisting of the table name appended to the internal
postgres constraint name?

I think I like this less than the first idea.

Kyle





[SQL] Job opportunity

2004-11-08 Thread Kyle
Action Target has a job opening for a programmer proficient in 
PostgreSQL and Tcl/Tk.  We are located in Provo, UT (home of Novell).

If interested, please contact: [EMAIL PROTECTED]
Thanks!!


[SQL] altering column constraints

2000-07-24 Thread Kyle

Is it possible to alter a column check constraint without a
dump/reload?  Now that I have RI working, it is a little more tricky to
dump/reload a single table because you lose reference triggers.  So I'm
not sure how to change a column constraint without dumping/reloading the
entire database.

I looked at editing pg_relcheck directly.  I don't know if that is legal
or not, but the rcbin column worries me as I'm not sure how/when this
gets derived from the rcsrc column.  Anyone know if this is possible to
do (even if it requires some poking around)?




[SQL] TCL updates

2000-07-24 Thread Kyle

Jan wrote:
>   Beginning  with Tcl 8.0, dual ported objects got used to deal
>with values. These have (amongst performance issues) alot  of
>benefits.  Changing  all  the  call  interfaces would make it
>impossible to use PL/Tcl with a  pre  8.0  Tcl  installation.
>Since  we're  now  at Tcl 8.3 (the last I've seen), ISTM it's
>not a bad decision to force the upgrade.
>
>Comments?

Jan, I would support this upgrade.  I think TCL runs much faster with
the dual ported calls, right?

I would also like to see proper support for nulls in TCL functions as
this has caused us some problems in the past.


BTW, any news on this:  Is there a plan for a way to allow a FK
reference to a table without giving the user select,update privilege on
the referenced table?  This is currently causing us some grief in our
design.

Kyle




[SQL] bug in information_schema?

2005-07-29 Thread Kyle Bateman
I noticed that it seemed a bit slow to query 
information_schema.view_column_usage.   As I look at the code in 
information_schema.sql, I'm not sure why pg_user is referenced twice 
(once without an alias).  It looks like we can take out the first 
pg_user and remove the DISTINCT keyword and this improves the efficiency 
significantly.  It seems to return the same result but in half the 
time.  Anyone see a problem with this?  (The same problem may also be in 
view_table_usage but I haven't done any testing there yet.)


Code from information_schema.sql:
CREATE VIEW view_column_usage AS
   SELECT DISTINCT
  CAST(current_database() AS sql_identifier) AS view_catalog,
  CAST(nv.nspname AS sql_identifier) AS view_schema,
  CAST(v.relname AS sql_identifier) AS view_name,   
  CAST(current_database() AS sql_identifier) AS table_catalog,

  CAST(nt.nspname AS sql_identifier) AS table_schema,
  CAST(t.relname AS sql_identifier) AS table_name,   
  CAST(a.attname AS sql_identifier) AS column_name   


   FROM pg_user, pg_namespace nv, pg_class v, pg_depend dv,
pg_depend dt, pg_class t, pg_namespace nt,
pg_attribute a, pg_user u





[SQL] Trigger on select?

2005-08-02 Thread Kyle Bateman
Hey, anyone know if it is possible to fire a trigger before a select?  
I'm considering creating some tables which contain data summarized from 
other tables as kind of a cache mechanism.  The hope is I can speed up 
some queries that get executed a lot (and are kind of slow) by 
maintaining data (like sums of big columns, for example).  I was all 
ready to go and then I discovered that trigger-before-select is not 
supported.  (Is it in any DB?)


The idea is this:

Any time I execute a query that would access the summary data, the 
"before select" trigger fires and goes out and builds any summary data 
missing from the summary table.  When I do an insert,update,delete on 
the primary data table, another trigger fires that removes the 
applicable data from the summary table.  This way, I only cache the 
information I need in the summary table, right before I need it.  But it 
can stay there as long as the base information doesn't change so I don't 
have to redo the expensive operation of summarizing it any more often 
than necessary.  Its kind of like an index in a way, but it is not 
maintained at insert/update time.  Rather, it is updated as it is needed.


Anyone have any ideas about how I can accomplish this?

Kyle
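One way to approximate a before-select trigger, sketched under the
assumption of hypothetical summary/base tables and a plpgsql new enough
for RETURN QUERY, is to route reads through a refresh-on-demand
function:

```sql
create function summary_fetch(p_id int4) returns setof summary as $$
begin
    -- build the missing cache row right before it is needed
    if not exists (select 1 from summary where id = p_id) then
        insert into summary (id, total)
            select id, sum(big_col) from base where id = p_id group by id;
    end if;
    return query select * from summary where id = p_id;
end;
$$ language plpgsql;
```

Queries then select from summary_fetch(...) instead of the summary
table, while the insert/update/delete triggers on the base table keep
invalidating stale rows as described above.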



[SQL] information_schema problem

2005-10-28 Thread Kyle Bateman

I'm trying to use information_schema.view_column_usage to determine the
native table from which various view columns descend.  This is so my
interface can automatically generate the correct foreign key links from
one view to another.

But in the case where a view references two tables linked by a foreign
key, the key fields are reported as belonging to both tables.

The enclosed example shows two tables related by a foreign key
relationship.  The view "event_remind_v" really only produces fields
from event_remind.  But in order to satisfy the "exists" clause, it
also references fields from the event table.

view_column_usage reports the fields "own_id" and "seq" as belonging to
both table "event_remind" and table "event".  My code needs a way to
know that "event_remind" is the table they "really" come from.

1. Is this the correct behavior for information_schema.view_column_usage?
   Should it report a table/column as belonging to more than one table?
   If not, how can I fix it?

   The enclosed script includes a (slightly revised) version of
   view_column_usage that is easier to hack on than the one inside
   information_schema.

2. If information_schema.view_column_usage is working right, is there a
   way I can modify my local view_column_usage to distinguish between
   tables/columns that actually "belong" to the view and related
   columns from a foreign key relationship?

Example code:
-
drop view event_remind_v;
drop table event_remind;
drop table event;
drop view view_column_usage;

-- Contains an entry for each scheduled calendar event
create table event (
   own_id  int4,
   seq int4,
   status  varchar,
   summary varchar,

   primary key (own_id,seq)
);

-- Contains an entry for each reminder for each event
create table event_remind (
   own_id  int4,
   seq int4,
   advance interval,

   primary key (own_id, seq, advance),
   foreign key (own_id, seq) references event on update cascade on 
delete cascade

);

create view event_remind_v as
   select *
   from event_remind r
   where exists (select * from event where own_id = r.own_id and 
seq = r.seq and status = 'open');


create view view_column_usage as
 select
   v.relname   as "view_name",
   t.relname   as "table_name",
   at.attname  as "column_name"

   from pg_depend dv, pg_class v, pg_namespace nv,
pg_depend dt, pg_class t, pg_namespace nt, pg_attribute at
   where dv.objid = dt.objid
 and dv.refobjid <> dt.refobjid
 and dv.deptype = 'i'

 and v.relkind = 'v'
 and t.relkind IN ('r', 'v')

 and v.oid = dv.refobjid
 and t.oid = dt.refobjid

 and t.relnamespace = nt.oid
 and v.relnamespace = nv.oid
     and dv.classid = dt.classid and dv.classid = 'pg_catalog.pg_rewrite'::regclass
     and dv.refclassid = dt.refclassid and dv.refclassid = 'pg_catalog.pg_class'::regclass


 and t.oid = at.attrelid and dt.refobjsubid = at.attnum
 and nv.nspname = 'public' and nt.nspname = 'public'
;

select view_name,column_name,table_name from
view_column_usage where view_name = 'event_remind_v';
select view_name,column_name,table_name from 
information_schema.view_column_usage where view_name = 'event_remind_v';





[SQL] unplanned sub-select error?

2005-11-21 Thread Kyle Bateman

I have a query:

insert into mtr_reg_v_wt (ropnum, inum, pnum, rquant, value, status, 
ddate, fr_proj, to_proj) values (28985,1,1014,1,(select cost from 
prd_part where pnum = 1014),'work','2005-Nov-15',50,75);


That used to work fine under 7.1.3 but now gives the error:

ERROR:  cannot handle unplanned sub-select

Anyone know what this means?  Is there a good reason why this update 
should no longer work?  Or is this a bug?


Kyle
wyatterp.com




Re: [SQL] unplanned sub-select error?

2005-11-22 Thread Kyle Bateman

Tom Lane wrote:

> Kyle Bateman <[EMAIL PROTECTED]> writes:
> > I have a query:
> > insert into mtr_reg_v_wt (ropnum, inum, pnum, rquant, value, status,
> > ddate, fr_proj, to_proj) values (28985,1,1014,1,(select cost from
> > prd_part where pnum = 1014),'work','2005-Nov-15',50,75);
> > That used to work fine under 7.1.3 but now gives the error:
> > ERROR:  cannot handle unplanned sub-select
>
> You need to offer a little more context, like what PG version you are
> using now and what is the underlying DDL --- I suspect some rules or
> views are involved here, but you didn't show them to us.

Sorry, you're right.  I have now confirmed that this only happens when
updating via a view/rule (as you suspected).  Attached is a minimalist
sql file that demonstrates the same error message from a blank
database.  I'm using 8.1.0.  I'm pretty sure this problem did not exist
on 8.0.3.

Kyle

-- Expose the "unplanned sub-select" error message

create table parts (
partnum	varchar(18) primary key,
cost	float8
);

create table shipped (
ttype	char(2),
ordnum	int4,
partnum	varchar(18) references parts,
value	float8,

primary key (ttype, ordnum)
);

create view shipped_view as
select * from shipped where ttype = 'wt';

create rule shipped_view_insert as on insert to shipped_view
do instead insert into shipped
(ttype, ordnum, partnum, value) 
values
('wt', new.ordnum, new.partnum, new.value);

insert into parts (partnum, cost) values (1, 1234.56);

insert into shipped_view 
	(ordnum, partnum, value)
values
(100,1,(select cost from parts where partnum = 1));
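-- A possible workaround (untested against 8.1, but it avoids embedding
-- a sub-select in the rule-expanded VALUES list) is to phrase the
-- insert as INSERT ... SELECT:

```sql
insert into shipped_view (ordnum, partnum, value)
select 100, partnum, cost from parts where partnum = '1';
```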



[SQL] permissions from system catalogs

2006-06-26 Thread Kyle Bateman
Can anyone point me where to look in the system catalogs (or information 
schema) to determine in advance whether a given user (or the current 
user) will have select permission to a given table or view?


Kyle
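For the archive: the access-privilege inquiry functions answer this
directly (table and user names below are placeholders):

```sql
select has_table_privilege('sometable', 'select');             -- current user
select has_table_privilege('someuser', 'sometable', 'select'); -- named user
```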




[SQL] Schema management tool

2006-06-27 Thread Kyle Bateman




I just saw this very old post and thought I'd respond for the benefit
of the list:

> Patrick,
>
> > Could someone give me some advice or point me to an article that
> > would help manage views?  I have a view that I use in conjunction
> > with other views, and since I have to drop the view and then
> > recreate it, I'm forced to drop and recreate all other views that
> > the original view was used in.  TIA
>
> As I recall, the IT staff at Action Target, Inc. two years ago
> developed a non-free tool for this purpose.  They were planning to
> start a spin-off company to market PostgreSQL admin tools, but I
> don't know how far they got with the idea.
>
> Try contacting Action Target's webmaster to see if you can find
> anything out: webmaster ( at ) actiontarget ( dot ) com

We have a free (GPL) tool called "wyseman" that is a framework for
authoring schemas in postgres.  You create a meta-sql structure for
each database object you want in the database.  Then you call wyseman
to drop and/or re-create the object, along with all its dependencies.

This is a great tool for working on a live database.  You can maintain
a pristine, authoritative definition of your schema, and still migrate
your live database along as you make changes to the design.  When you
want to modify a section of the schema that is deep within the tree of
dependencies, wyseman will take care of dropping, rebuilding, and
granting permissions on all the necessary objects in order to get into
the part you want to change.

Wyseman also takes care of building a "data dictionary" where you can
query for things like printable titles, and pop-up helps for tables and
columns in multiple languages.  This is a nice layer to put on top of
postgres and just under your ERP.

These tools are accessible on wyatterp.com.  Due to lack of time, I
have not kept current software in the download area.  We are using much
later stuff in-house.  But I would be happy to share more recent
versions if there is any interest out there.  It would be particularly
helpful if anyone were willing to help maintain the website and
(currently a sad excuse for) documentation.

There is also a run-time library (wylib) for rapid construction of ERP
applications.  We have about 30 applications built in the framework
that run our entire business.

Kyle Bateman
Action Target Inc.





[SQL] clock command regression in pltcl?

2010-01-22 Thread Kyle Bateman

I have the following function defined:

create function _date_week(int4,int4,int4) returns text language pltcl 
immutable as $$

return [clock format [clock scan "$2/$3/$1"] -format "%U"]
$$;

It worked fine in 8.3, but now under 8.4 when I try to build an index
using the function I get:


SQL ERROR: In database query: begin;
create index i_pay_req_empl_id_week on pay_req 
(empl_id,(date_week(wdate)));: ERROR:  invalid command name "clock"

CONTEXT:  invalid command name "clock"
invoked from within
"clock scan "$2/$3/$1""
(procedure "__PLTcl_proc_12360682" line 3)
invoked from within
"__PLTcl_proc_12360682 2003 12 20"
in PL/Tcl function "_date_week"
PL/pgSQL function "date_week" line 13 at assignment

Is this a regression or is there a reason the clock command is no longer 
accessible?


Kyle




[SQL] Using bitmap index scans-more efficient

2006-08-13 Thread Kyle Bateman
um)::text = 
"inner"."?column10?"))
  ->  Index Scan using apinv_hdr_pkey on apinv_hdr h  
(cost=0.00..3148.16 rows=51016 width=21)
  ->  Sort  (cost=13543.42..13693.47 rows=60020 width=55)
Sort Key: i.vendid, (i.invnum)::text
->  Seq Scan on apinv_items i  (cost=0.00..7197.27 
rows=60020 width=55)
  Filter: (((status = 'en'::bpchar) OR (status = 
'cl'::bpchar) OR (status = 'pd'::bpchar)) AND (proj = ANY ($0)))
->  Index Scan using vend_org_pkey on vend_org v  (cost=0.00..145.52 
rows=1799 width=26)
  ->  Materialize  (cost=3.54..3.55 rows=1 width=4)
->  Seq Scan on acct a  (cost=0.00..3.54 rows=1 width=4)
  Filter: ((code)::text = 'ap'::text)

---

The worst case is the following types of queries (about 5 seconds):

 select * from ledger where proj in (select prog_id from proj_prog where 
proj_id = 4737);
 select l.* from ledger l, proj_prog p where l.proj = p.prog_id and p.proj_id = 
4737;

---
Hash Join  (cost=19032.47..23510.23 rows=6 width=85)
  Hash Cond: ("outer".proj = "inner".prog_id)
  ->  Nested Loop Left Join  (cost=18994.38..23378.41 rows=1700 width=85)
->  Hash Join  (cost=18990.84..23340.87 rows=1700 width=81)
  Hash Cond: ("outer".vendid = "inner".org_id)
  ->  Merge Join  (cost=18935.35..23255.64 rows=1700 width=63)
Merge Cond: (("outer".vendid = "inner".vendid) AND (("outer".invnum)::text = 
"inner"."?column10?"))
->  Index Scan using apinv_hdr_pkey on apinv_hdr h  
(cost=0.00..3148.16 rows=51016 width=21)
->  Sort  (cost=18935.35..19235.45 rows=120041 width=55)
  Sort Key: i.vendid, (i.invnum)::text
  ->  Seq Scan on apinv_items i  (cost=0.00..4152.99 
rows=120041 width=55)
Filter: ((status = 'en'::bpchar) OR (status = 
'cl'::bpchar) OR (status = 'pd'::bpchar))
  ->  Hash  (cost=50.99..50.99 rows=1799 width=26)
->  Seq Scan on vend_org v  (cost=0.00..50.99 rows=1799 
width=26)
->  Materialize  (cost=3.54..3.55 rows=1 width=4)
  ->  Seq Scan on acct a  (cost=0.00..3.54 rows=1 width=4)
Filter: ((code)::text = 'ap'::text)
  ->  Hash  (cost=38.04..38.04 rows=21 width=4)
->  Index Scan using proj_prog_pkey on proj_prog p  (cost=0.00..38.04 
rows=21 width=4)
  Index Cond: (proj_id = 4737)
---

I would like to be able to get the best performance like in the first query but 
without having to enumerate the projects (i.e. using a single query).

The secret seems to be the bitmap index scans.

Any ideas about whether/how this can be done?

Thanks!

Kyle Bateman



---
BTW, The ledger view is built roughly as follows:

create view rp_v_api as
    select
    h.adate as adate,
    (i.price * i.quant)::numeric(14,2)  as amount,
    substring(v.org_name from 1 for 40) as descr,
    i.proj  as proj,
    i.acct  as acct,
    1   as cr_proj,
    a.acct_id   as cr_acct
    from (
    apinv_hdr   h
    join apinv_items i  on i.vendid = h.vendid and i.invnum = h.invnum
    join vend_org    v  on v.org_id = h.vendid
    left join acct   a  on a.code   = 'ap'
    )
    where i.status in ('en','cl','pd');



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [SQL] Using bitmap index scans-more efficient

2006-08-14 Thread Kyle Bateman

Florian Weimer wrote:


* Kyle Bateman:

 


Any ideas about whether/how this can be done?
   



If the project tree is fairly consistent, it's convenient to encode it
using intervals instead of parent/child links.  IIRC, Celko's "SQL
for smarties" explains how to do this, and Kristian Koehntopp has
written some PHP code to implement it.
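For readers unfamiliar with the technique mentioned here: interval (nested-set) encoding gives each node a (lft, rgt) pair such that every descendant's interval nests strictly inside its ancestor's.  A minimal illustration, with hypothetical numbers matching the discussion below:

```sql
-- hypothetical nested-set table; lft/rgt are assigned by a tree walk
create table proj_int (
    proj_id int4 primary key,
    lft     int4 not null,
    rgt     int4 not null
);

-- all progeny of the project whose own interval is (1234, 2345):
select proj_id from proj_int where lft > 1234 and rgt < 2345;
```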

 

I agree that this produces a more efficient query for finding the 
projects that are the progeny of another project, but I'm trying to 
figure out how that helps me select the right project transactions from 
my ledger efficiently.


This query produces wonderful results (very fast):

select * from ledger where proj >= 4737 and proj <= 4740;

But I'm assuming that using an interval-encoded project tree, I would 
have to do something like the following to get a progeny group:


select * from ledger l, proj p where p.proj_id = l.proj and p.left > 
1234 and p.right < 2345;


The problem (at least according to my initial testing) is that this 
forces a join of the entire ledger and I get my slowest performance 
group (5 seconds).


What am I missing?

Kyle




Re: [SQL] Using bitmap index scans-more efficient

2006-08-15 Thread Kyle Bateman

Tom Lane wrote:


Kyle Bateman <[EMAIL PROTECTED]> writes:
 

But I'm assuming that using an interval-encoded project tree, I would 
have to do something like the following to get a progeny group:
   



 

select * from ledger l, proj p where p.proj_id = l.proj and p.left > 
1234 and p.right < 2345;
   



btree has no idea about the constraint (that I imagine exists) that left
<= right.  If you're just doing a simple index on (left, right) then the
above query requires scanning all index entries with left > 1234.  It
would probably help to say

select * from ledger l, proj p where p.proj_id = l.proj and
 p.left > 1234 and p.left < 2345 and p.right < 2345;

so that you can constrain the range of "left" values scanned.
 


Thanks for the replies, Tom and Florian.

My problem is not that it is difficult (or costly) to determine the 
progeny of a given project.  I can determine this in about 90 msec 
regardless of whether I use an adjacency model, interval-encoding, or 
materialized path (current implementation).  The problem is, when I try 
to extract the ledger entries belonging to that progeny from a set of a 
million records, it seems to need to process all million records rather 
than being able to index right into the ones I want.


I'm not very good at reading explain output, but I tried to set up the 
query Tom suggests by creating an interval-encoded project table 
(proj_int) and then joining it to my ledger like so:


select l.* from ledger l, proj_int i where l.proj = i.proj_id and i.lft 
>= 5283 and i.lft < 5300 and i.rgt <= 5300;


On my mini-test-ledger of 100,000 entries, this takes the longest time 
(5 seconds) with the following explain output:


Hash Join  (cost=19018.46..23411.52 rows=14 width=85)
  Hash Cond: ("outer".proj = "inner".proj_id)
  ->  Nested Loop Left Join  (cost=18994.38..23378.41 rows=1700 width=85)
        ->  Hash Join  (cost=18990.84..23340.87 rows=1700 width=81)
              Hash Cond: ("outer".vendid = "inner".org_id)
              ->  Merge Join  (cost=18935.35..23255.64 rows=1700 width=63)
                    Merge Cond: (("outer".vendid = "inner".vendid) AND (("outer".invnum)::text = "inner"."?column10?"))
                    ->  Index Scan using apinv_hdr_pkey on apinv_hdr h  (cost=0.00..3148.16 rows=51016 width=21)
                    ->  Sort  (cost=18935.35..19235.45 rows=120041 width=55)
                          Sort Key: i.vendid, (i.invnum)::text
                          ->  Seq Scan on apinv_items i  (cost=0.00..4152.99 rows=120041 width=55)
                                Filter: ((status = 'en'::bpchar) OR (status = 'cl'::bpchar) OR (status = 'pd'::bpchar))
              ->  Hash  (cost=50.99..50.99 rows=1799 width=26)
                    ->  Seq Scan on vend_org v  (cost=0.00..50.99 rows=1799 width=26)
        ->  Materialize  (cost=3.54..3.55 rows=1 width=4)
              ->  Seq Scan on acct a  (cost=0.00..3.54 rows=1 width=4)
                    Filter: ((code)::text = 'ap'::text)
  ->  Hash  (cost=24.06..24.06 rows=10 width=4)
        ->  Bitmap Heap Scan on proj_int i  (cost=2.26..24.06 rows=10 width=4)
              Recheck Cond: ((lft >= 5283) AND (lft < 5300) AND (rgt <= 5300))
              ->  Bitmap Index Scan on i_proj_int_lft_rgt  (cost=0.00..2.26 rows=10 width=0)
                    Index Cond: ((lft >= 5283) AND (lft < 5300) AND (rgt <= 5300))


That is roughly equivalent to my materialized path method:

select l.* from ledger l where l.proj in (select proj_id from proj_v
where 4737 = any(ppath));



And is quite slow compared to 150 msec when enumerating the progeny 
projects like so:


select l.* from ledger l where l.proj in 
(4737,4789,4892,4893,4894,4895,4933,4934,4935);


Nested Loop Left Join  (cost=19.73..4164.10 rows=7 width=85)
  ->  Nested Loop  (cost=19.73..4139.08 rows=7 width=81)
        ->  Nested Loop  (cost=19.73..4100.07 rows=7 width=63)
              ->  Bitmap Heap Scan on apinv_items i  (cost=19.73..1185.71 rows=487 width=55)
                    Recheck Cond: ((proj = 4737) OR (proj = 4789) OR (proj = 4892) OR (proj = 4893) OR (proj = 4894) OR (proj = 4895) OR (proj = 4933) OR (proj = 4934) OR (proj = 4935))
                    Filter: ((status = 'en'::bpchar) OR (status = 'cl'::bpchar) OR (status = 'pd'::bpchar))
                    ->  BitmapOr  (cost=19.73..19.73 rows=495 width=0)
                          ->  Bitmap Index Scan on i_apinv_items_proj  (cost=0.00..2.19 rows=55 width=0)
                                Index Cond: (proj = 4737)
                          ->  Bitmap Index Scan on i_apinv_items_proj  (cost=0.00..2.19 rows=55 width=0)

   

Re: [SQL] Using bitmap index scans-more efficient

2006-08-15 Thread Kyle Bateman

Tom Lane wrote:


Kyle Bateman <[EMAIL PROTECTED]> writes:
 

But I'm assuming that using an interval-encoded project tree, I would 
have to do something like the following to get a progeny group:
   

select * from ledger l, proj p where p.proj_id = l.proj and p.left > 
1234 and p.right < 2345;
   



Here's an interesting result:

I created a function proj_left(int4) that returns the left interval
number for a given project.  Then I created an index on the table
underlying the ledger view (which took forever to build) like so:


create index i_test on apinv_items (proj_left(proj));

Now my query:

select * from ledger where proj_left(dr_proj) >= 5283 and 
proj_left(dr_proj) < 5300;


is very speedy.  Problem is, I had to mark the function proj_left() as 
immutable, which it can not be since the left and right values for a 
given project will change any time a project is added, removed, or moved 
around the hierarchy :(
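One hedged workaround for the mutability problem is to stop computing the left value at index time altogether: store it in a real column and let a trigger refresh it when the tree is renumbered.  A sketch, reusing the proj_left() function above:

```sql
-- assumption: trade index-time computation for a maintained column
alter table apinv_items add column proj_lft int4;
update apinv_items set proj_lft = proj_left(proj);
create index i_apinv_items_proj_lft on apinv_items (proj_lft);

-- a trigger on the project table (not shown) must re-run the update
-- whenever lft/rgt values are reassigned; that is the maintenance cost
```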


So is there any good way to tell the planner to do several individual
index scans for the projects involved in the desired progeny, OR the
results together, and return them?  This is what it seems to be
choosing in the case of the query:


 select * from ledger where proj in 
(4737,4789,4892,4893,4894,4895,4933,4934,4935);






Re: [SQL] Using bitmap index scans-more efficient

2006-08-16 Thread Kyle Bateman

Tom Lane wrote:


Kyle Bateman <[EMAIL PROTECTED]> writes:
 

I'm wondering if this might expose a weakness in the optimizer having to 
do with left joins.
   



Before 8.2 the optimizer has no ability to rearrange the order of outer
joins.  Do you have time to try your test case against CVS HEAD?
 



OK, I figured it out--grabbed the latest snapshot (hope that is what you 
need).


My results are similar:

select l.* from ledg_v1 l, proj p where l.proj = p.proj_id and 5 = 
p.par; (24 msec)

Nested Loop  (cost=0.00..1991.93 rows=480 width=23)
  ->  Nested Loop  (cost=0.00..4.68 rows=6 width=8)
        ->  Seq Scan on acct a  (cost=0.00..1.12 rows=1 width=4)
              Filter: ((code)::text = 'ap'::text)
        ->  Index Scan using i_proj_par on proj p  (cost=0.00..3.49 rows=6 width=4)
              Index Cond: (5 = par)
  ->  Index Scan using i_ledg_proj on ledg l  (cost=0.00..330.17 rows=83 width=19)
        Index Cond: (l.proj = "outer".proj_id)

select l.* from ledg_v2 l, proj p where l.proj = p.proj_id and 5 = 
p.par; (1.25 sec)

Hash Join  (cost=4.63..16768.43 rows=480 width=23)
  Hash Cond: ("outer".proj = "inner".proj_id)
  ->  Nested Loop Left Join  (cost=1.13..14760.13 rows=40 width=23)
        ->  Seq Scan on ledg l  (cost=0.00..6759.00 rows=40 width=19)
        ->  Materialize  (cost=1.13..1.14 rows=1 width=4)
              ->  Seq Scan on acct a  (cost=0.00..1.12 rows=1 width=4)
                    Filter: ((code)::text = 'ap'::text)
  ->  Hash  (cost=3.49..3.49 rows=6 width=4)
        ->  Index Scan using i_proj_par on proj p  (cost=0.00..3.49 rows=6 width=4)
              Index Cond: (5 = par)




[SQL] Finding context for error log

2006-09-26 Thread Kyle Bateman

I get the following message in my postgres log:

ERROR:  42601: syntax error at or near "%" at character 269
LOCATION:  yyerror, scan.l:761

I have a fairly complex schema (lots of database objects) and many users 
hitting the db with lots of connections, so I'm not really sure what bit 
of code is causing the problem.  Is there an easy way to get postgres to 
spit out the SQL statement it was parsing when it generated the error?


I've experimented with log_error_verbosity and log_min_messages but 
still haven't seen any SQL in the log.
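One setting worth checking (present in this era of PostgreSQL) is log_min_error_statement, which logs the offending statement text alongside any message at or above the given severity:

```
# postgresql.conf -- log the statement that caused any ERROR or worse
log_min_error_statement = error
```

In older releases it defaults to panic, which is why nothing appears even when errors are being logged.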


Kyle




[SQL] PG 8.2beta reordering working for this case?

2006-10-07 Thread Kyle Bateman

Kyle Bateman wrote:


Tom Lane wrote:



Before 8.2 the optimizer has no ability to rearrange the order of outer
joins.  Do you have time to try your test case against CVS HEAD?
 

I've done some more refinement on my accounting ledger system that has 
clarified some of the problems I was having with performance joining to 
a union.  Here's an interesting test case.  I just tried it with PG 
8.2beta1.  I don't claim to understand the new feature of reordering 
queries very well, but it seems like this is related to that feature.  
Since it's still a performance problem in 8.2, I thought this might be a
helpful test case during beta.


To see this demo, run create.sql on a clean database.  It will create 
all the needed tables, views and test data.


Then run q1 and q2 which are very efficient.  Then q3 is the slow one.  
The reason is, it does the union, producing 300,000 records before 
trying to select by project.  It seems like the optimizer should 
internally rewrite the query to look more like what is in q4 (which is 
very fast).


Is there a way to make the optimizer do this?

Kyle Bateman

-- Make test schema for demonstrating how the postgres optimizer might improve
-- performance on joins with unions

-- Contains a record for each project (job-costing) code
-- Projects are arranged in a hierarchical structure (parent/child)
-- --
create table proj (
proj_id		int4 primary key,
title		varchar,
par			int4 references proj on update cascade
);
create index i_proj_par on proj (par);

-- Contains a record for every 2 combinations of projects which are related
-- to each other in the hierarchical project tree 
-- (parent/child, ancestor/progenitor, etc.)
-- --
create table proj_rel (
anst_id		int4 references proj on update cascade on delete cascade,	-- ancestor project number
prog_id		int4 references proj on update cascade on delete cascade,	-- progenitor project number
rel			int4,-- 0=self, 1=child, 2=grandchild, etc.
primary key (anst_id, prog_id)
);

-- Contains a record for each account number and an optional alpha code to identify a sub-ledger
-- --
create table acct (
acct_id		int4 primary key,	-- account number
title		varchar,		-- name of the account
code		varchar			-- alpha code for the account
);
create index i_acct_code on acct (code);

-- Each sub-ledger contains transactions unique to a certain part of the business
-- In addition to the standard fields they all share in common, each sub-ledger
-- contains additional fields that are unique to it (so they can not all be
-- stored in a single table).  In our actual implementation, these sub-ledgers
-- are actually implemented as views joining even lower level tables.
-- --
create table subledg_A (
rid			int4 primary key,		-- record ID
amount		numeric(14,2),
proj		int4 references proj on update cascade on delete cascade,
unique_A		varchar-- some other data
);
create index i_subledg_A_proj on subledg_A (proj);
-- --
create table subledg_B (
rid			int4 primary key,		-- record ID
amount		numeric(14,2),
proj		int4 references proj on update cascade on delete cascade,
unique_B		varchar-- some other data
);
create index i_subledg_B_proj on subledg_B (proj);
-- --
create table subledg_C (
rid			int4 primary key,		-- record ID
amount		numeric(14,2),
proj		int4 references proj on update cascade on delete cascade,
unique_C		varchar-- some other data
);
create index i_subledg_C_proj on subledg_C (proj);

-- These views allow a standard account code to presented in the appropriate ledgers
-- --
create view subview_A as select
   'AP ' || rid as trans_id,
l.amount, l.proj,
a.acct_id as acct
from	subledg_A	l
join	acct		a on a.code = 'ap';
-- --
create view subview_B as select
   'AR ' || rid as trans_id,
l.amount, l.proj,
a.acct_id as acct
from	subledg_B	l
join	acct		a on a.code = 'ar';
-- --
create view subview_C as select
   'PR ' || rid as trans_id,
l.amount, l.proj,
a.acct_id as acct
from	subledg_C	l
join	acct		a on a.code = 'pr';

-- General ledger - this should contain all transactions from all subledgers
-- --
create view gen_ledg as
select trans_id, amount, proj, acct from subview_A
union 
select trans_id, amount, proj, acct from subview_B
union
select trans_id, amount, proj, acct from subview_C;

Re: [SQL] PG 8.2beta reordering working for this case?

2006-10-08 Thread Kyle Bateman

Tom Lane wrote:


Kyle Bateman <[EMAIL PROTECTED]> writes:
 


Is there a way to make the optimizer do this?
   



Sorry, that's not happening for 8.2.  Consider using a union all (not
union) across the subledg_N tables directly and then joining to that.
That boils down to being a partitioning case and I think probably will
be covered by the 8.2 improvements.

Yup, union all is much more efficient.  The difference between union and
union all hadn't really occurred to me, but it makes sense that union all
eliminates the need for a unique sort.  The q3 query went from 10 seconds
to 1 second with just the addition of union all in the general ledger.
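The distinction can be seen with a pair of one-row selects: union must deduplicate the combined result (a sort or hash over everything), while union all just appends:

```sql
select 1 union     select 1;   -- one row: duplicates folded
select 1 union all select 1;   -- two rows: plain concatenation
```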


BTW, explain analyze still says 10 seconds of run time (and takes 10 
seconds to run), but when I remove the explain analyze, the query runs 
in about a second.  What's that all about?


Also, I came up with the view shown in the attachment.  It is still much 
faster than joining to the union-all ledger (40 ms).  I'm not sure why 
because I'm not sure if explain analyze is telling me the real story (I 
see a sequential scan of the ledgers in there when it runs 10 seconds).  
I'm not sure what it's doing when it runs in 1 second.


Kyle

-- This view is a possible workaround for the problem
drop view gen_ledg_pr;

--explain analyze
create view gen_ledg_pr as
  select lg.*, pr.anst_id
from 	subview_A	lg
join	proj_rel	pr	on pr.prog_id = lg.proj

  union all select lg.*, pr.anst_id
from 	subview_B	lg
join	proj_rel	pr	on pr.prog_id = lg.proj

  union all select lg.*, pr.anst_id
from 	subview_C	lg
join	proj_rel	pr	on pr.prog_id = lg.proj
;



Re: [SQL] olympics ranking query

2007-03-29 Thread Kyle Bateman
Several years ago someone posted a question about how to achieve a 
running total of columns in sql.  I have been trying to find a solution 
to a similar problem for some time and just came up with something that 
works great for me so I thought I'd post it for the benefit of the list.


The problem is to produce a query like the following:

select date,amount,run_sum(amount) from ledger order by date;

DateAmount Sum
--- -- -
2007-Jan-01  10.00 10.00
2007-Jan-02  20.00 30.00
2007-Jan-05   5.00 35.00
2007-Jan-10  -3.00 32.00
.
.
.

Using pl/tcl, I made the following function:

#Keep a running sum on a column where tag and trans are used to keep the
#results distinct from other calls to the same function
#Usage: run_sum(value,tag,trans)
#---
function run_sum(numeric,text,text) {} {
 returns numeric language pltcl called on null input as $$
   if {![info exists GD(sum.$2.$3)]} {
   set GD(sum.$2.$3) 0.00
   }
   if {[argisnull 1]} {
   return $GD(sum.$2.$3)
   } else {
   return [set GD(sum.$2.$3) [expr $GD(sum.$2.$3) + $1]]
   }
$$;}

Then I added a wrapper function to automatically produce a separate 
namespace for each statement instance that uses the query:


#Keep a running sum on a column where tag is used to keep the results distinct

#Automatically keeps results distinct for each separate statement
#Usage: run_sum(value,tag)
#---
function run_sum(numeric,text) {run_sum(numeric,text,text)} {
 returns numeric language sql as $$
   select run_sum($1,$2,statement_timestamp()::text);
$$;}

Now you can do:

select date,amount,run_sum(amount,'amount') from ledger;

to get an initial result.  The only problem is now ordering the data.  
If you just put an 'order by' clause on the end, you don't get what you 
might expect because the ordering happens after the function has 
produced its result.  So I do the following to order and sum it correctly:


select date,amount,run_sum(amount,'amount') from (select * from ledger 
order by date) ss;


The use of the "tag" parameter allows you to use this on multiple 
columns such as:


select date,debit,credit,
   run_sum(debit,'debit')::numeric(14,2) as debits,
   run_sum(credit,'credit')::numeric(14,2) as credits
   from (select * from ledger order by date) ss;
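For what it's worth, PostgreSQL 8.4 and later make the whole exercise unnecessary: a windowed aggregate computes the running sum, with ordering handled by the window itself:

```sql
-- 8.4+ equivalent of run_sum(), no custom function required
select date, amount,
       sum(amount) over (order by date) as sum
  from ledger;
```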

Enjoy,

Kyle




[SQL] Clarified Question

2000-05-24 Thread Kyle Bateman



 
How can I create a function that will take in two variables and return
an integer, when one of the variables is the tablename ?!
I have tried :
   
create function tst_func(text, varchar(16)) as
'BEGIN
    result = select max(history_id)+1 from $1
        where client_id = $2;
    return result;
END;'
 
This kind of thing does work in pltcl, but AFAIK you can't specify a table
name with a parameter in plpgsql.  This may give you the idea (spi_exec
makes the selected column available as a Tcl variable):

create function myfunc(text,text) returns text as '
    spi_exec "select f1 from $1 where f2 = ''$2''"
    return $f1
' language 'pltcl';
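In later plpgsql versions this became possible directly: EXECUTE takes a dynamically built string, and (from 8.1) EXECUTE ... INTO captures the result.  A sketch, assuming those releases, with quote_ident/quote_literal guarding the interpolated values:

```sql
create function tst_func(text, text) returns int4 as '
declare
    result int4;
begin
    execute ''select max(history_id) + 1 from '' || quote_ident($1)
         || '' where client_id = '' || quote_literal($2)
       into result;
    return result;
end;' language 'plpgsql';
```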
 




[SQL] create constraint trigger

2000-05-29 Thread Kyle Bateman


Hi Jan:
In response to your suggestion about possibly being able to use
"create constraint trigger," I have tried the following:
I have the following trigger function (probably not all that important
what it does...):
create function prd_parm_tf_iu ()
    returns opaque as '
    declare
        trec record;
        baseholder varchar;
    begin
        select into trec min(pos) as n, max(pos) as m, count(*) as c
            from prd_parm where new.base = base;
        if trec.n != 1 or trec.m != trec.c then
            raise exception \'Parameters for base % not in order\', new.base;
        end if;
        return new;
    end;' language 'plpgsql';
This seems to work pretty well:
create trigger prd_part_tr_iu after insert or update on prd_parm
    for each row execute procedure prd_parm_tf_iu();
But when I create it with "create constraint trigger" as shown next,
the trigger doesn't seem to be invoked.  It says it is created, but
it allows data to pass that would not be allowed with the "create trigger." 
So I'm assuming I'm doing something wrong.
create constraint trigger prd_part_tr_iu after insert or update
on prd_parm
   deferrable initially deferred
   for each row execute procedure prd_parm_tf_iu();
Any ideas?
Also, it's not clear to me how to drop a
trigger created with "create constraint trigger."
It would be very helpful if I could figure
out how to make my triggers deferred.  There are things stored in
one table that have to be a certain way if other things are a certain way
in a related table.  It is more than what foreign key relationships
can handle.  It would really be great if I could figure out how to
have the backend monitor the consistency of these details.
Without deferral, I can check the consistency
in one direction.  But if I try to constrain it in both directions,
I get into a problem where I can't update either table without breaking
the constraints.  I'm guessing that if I can get deferral to work,
I should be able to update both tables in a transaction and be OK.
Kyle
 
 




[SQL] Re: create constraint trigger

2000-05-31 Thread Kyle Bateman


Kyle Bateman wrote:
Hi Jan:
But when I create it with "create constraint trigger" as shown next,
the trigger doesn't seem to be invoked.  It says it is created, but
it allows data to pass that would not be allowed with the "create trigger." 
So I'm assuming I'm doing something wrong.
create constraint trigger prd_part_tr_iu after insert or update
on prd_parm
   deferrable initially deferred
   for each row execute procedure prd_parm_tf_iu();
OK, here's a reply to my own post.  I've got the constraint trigger
working now.  The problem was that it was never getting dropped. 
I still don't know how to drop the constraint.  I thought by dropping
the trigger function it would get the idea.  When I ran the create
constraint again, it said it was created (no errors) so I figured everything
was OK, but it was still running the previously defined constraint (apparently).
I dropped the table and started from scratch and it seems to work fine
now.
So is there a way to "drop constraint trigger" without having to drop
the table?
Kyle
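For the record, a constraint trigger is removed with the ordinary DROP TRIGGER command, naming the table it is attached to:

```sql
drop trigger prd_part_tr_iu on prd_parm;
```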
 




[SQL] Deferral of primary key constraint

2000-06-02 Thread Kyle Bateman


Is it possible to defer the check on a primary key constraint (or a check
constraint, for that matter).  Here is an example that shows why it
would be nice to be able to do so.  We have a real-life scenario that
is similar (but much more complex).
drop table btab;
create table btab (
    base    varchar,
    pos     int4,
    cmt     varchar,
    primary key (base, pos)
);
insert into btab (base,pos,cmt) values ('aa',1,'The');
insert into btab (base,pos,cmt) values ('aa',2,'quick');
insert into btab (base,pos,cmt) values ('aa',3,'grey');
insert into btab (base,pos,cmt) values ('aa',4,'fox');
insert into btab (base,pos,cmt) values ('bb',3,'dog');
insert into btab (base,pos,cmt) values ('bb',2,'brown');
insert into btab (base,pos,cmt) values ('bb',1,'The');
select * from btab order by base,pos;
begin;
delete from btab where base = 'aa' and pos = 2;
update btab set pos = pos - 1 where pos > 2 and base = 'aa';
commit;
select * from btab order by base,pos;
begin;
update btab set pos = pos + 1 where pos >= 2 and base = 'bb';
insert into btab (base,pos,cmt) values ('bb',2,'slow');
commit;
select * from btab order by base,pos;
The last transaction fails (on my box, anyway) because of the primary
key index.
We would like to be able to do inserts/deletes to a list of records
and still ensure that they are in contiguous sequential order so we want
to renumber higher records if a new record is inserted in the middle. 
The sequence is part of the primary key and we want to ensure uniqueness. 
Some renumbering will work (by chance) if the records happen to get adjusted
in the right order.  But if one of the updates tries to renumber to
an already-existing key, it fails.
How hard is it to take the deferral mechanism you have for foreign key
references and apply it to the primary key too?  It would also be
handy to be able to defer a check constraint.
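This eventually happened: PostgreSQL 9.0 added deferrable unique and primary key constraints.  With that syntax the failing transaction above commits cleanly:

```sql
-- PostgreSQL 9.0+ sketch: the uniqueness check waits until commit
create table btab (
    base    varchar,
    pos     int4,
    cmt     varchar,
    primary key (base, pos) deferrable initially deferred
);
```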
 




[SQL] References and privileges

2000-06-07 Thread Kyle Bateman

[EMAIL PROTECTED] (Jan Wieck) writes:
>> If one does:
>>
>> [...]
>> grant select on master to a_user;
>> grant select, insert, update, delete on detail to a_user;
>>
>> then if login as "a_user" and does:
>>
>> insert into detail (id, master_id) values (1, 10);
>>
>> this will result in: "ERROR:   master: Permission denied".
>>
>> This seems a bug to me ? Isn't it ?

> Outch,

> yes,  we  missed  something  here.  Peter,  you  said  you'll
> probably work  on  the  ACL  stuff  after  7.0.  We  need  to
> coordinate that work with the function manager redesign to go
> for SETUID triggers and functions.

Hey Jan:

This one just bit me hard.  We're trying to implement RI so a foreign
key (employee ID number) is verified to exist in the employee table.
Problem is, we have to grant everyone and their dog access to the
employee table (select,update, no less) which contains all kinds of
sensitive data about employees.

We even tried making a limited view of the employee table and
referencing that.  No luck, the RI triggers try to find an OID column in
the referenced table (view).  If we could name a column "oid" in a view,
we might have a chance with this approach.

How hard would it be to get the "grant references" thing going?  Is
there anything I could do to help?

Grant permissions on columns would also solve this problem.  Is that one
huge or not so huge?

Kyle
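As a historical note, later releases added exactly this: REFERENCES is a grantable privilege of its own, so RI checks no longer require SELECT on the referenced table, and 8.4 added column-level grants (empl_id below is a hypothetical column name):

```sql
grant references on employee to a_user;
-- or, on 8.4+, limited to the key column only:
grant references (empl_id) on employee to a_user;
```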






[SQL] finding (and recycling) holes in sequences

2000-06-26 Thread Kyle Bateman



 
If one has a unique-id generating sequence that sometimes is bound
to
have holes in it (ie: it could happen that a nextval(seq) happens
without a corresponding INSERT in the table), then how could one
efficiently scan for these holes to recycle them in subsequent INSERTs?
I'm just looking for a "standard" way of doing this if such a thing
exists.
Louis-David Mitterrand - [EMAIL PROTECTED] - http://www.apartia.fr
 
If you want to create a sequence of numbers with no holes, you might consider
NOT using sequences at all (they are really meant for something different).
Rather, each time you wish to do an insert, query the existing data to
see what the maximum value is existing in the database.  Add 1 to
that and use this as your new value.  This is guaranteed to not have
any holes in the sequence.  To make inserts easier, you can define
an sql function that will return the max value.  Here's an example:
create function maxid () returns int4 as '
    declare
        mv int4;
    begin
        select max(id) into mv from mytab;
        if mv is null then return 0; end if;
        return mv;
    end;
' language 'plpgsql';
create table mytab (
    id int4 primary key default (maxid() + 1),
    data text
);
insert into mytab (data) values ('Hi there');
insert into mytab (data) values ('Howdy');
insert into mytab (data) values ('Guten Tag');
insert into mytab (data) values ('Terve');
select * from mytab;
 
 
BTW hackers, this is a common enough request that I wonder if there should
be a built-in feature making it easier for people to put this into the
default value in the table definition, i.e.:
create table mytab (
    keycolumn int4 primary key default coalesce(max(keycolumn),0)+1,
    data text
);
Wouldn't this eliminate the need for rollback-able sequences?
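One caveat worth adding to the max()+1 approach: it is not safe under concurrent inserts, since two sessions can read the same maximum before either commits.  A defensive sketch serializes the insert path with a self-conflicting table lock:

```sql
-- SHARE ROW EXCLUSIVE conflicts with itself, so only one inserter
-- at a time computes maxid(); plain reads are not blocked
begin;
lock table mytab in share row exclusive mode;
insert into mytab (data) values ('Hello');
commit;
-- the primary key still catches any collision that slips through
```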
 
