[GENERAL] Work Scheduling DB Design

2009-08-28 Thread Karl Nack

Hello,

I'm trying to develop a database schema to schedule and record completion 
of maintenance. I feel like I'm getting a bit wrapped around the axle on 
this one, so I was hoping someone might be able to offer some suggestions.


Here are the basic tables I've come up with:

CREATE TABLE task (
    task_id    SERIAL PRIMARY KEY,
    task_name  VARCHAR NOT NULL REFERENCES task_type,
    -- other fields omitted
);

CREATE TABLE schedule (
    schedule_id  SERIAL PRIMARY KEY,
    task_id      INT NOT NULL REFERENCES task,
    start_date   DATE NOT NULL,
    end_date     DATE NOT NULL,
    periodicity  INTERVAL  -- task is only done once if NULL
);

CREATE TABLE work_effort (
    work_effort_id   SERIAL PRIMARY KEY,
    task_id          INT NOT NULL REFERENCES task,
    completion_date  DATE NOT NULL
);


Here's some sample data for the schedule table:

 schedule_id | task_id | start_date | end_date   | periodicity
-------------+---------+------------+------------+-------------
           1 |       1 | 05/01/2009 | 05/30/2009 | null
           2 |       2 | 06/01/2009 | 07/31/2009 | 2 weeks


Which allows me to generate the following schedule:

 task_id | start_date | end_date
---------+------------+------------
       1 | 05/01/2009 | 05/30/2009
       2 | 06/01/2009 | 06/14/2009
       2 | 06/15/2009 | 06/28/2009
       2 | 06/29/2009 | 07/12/2009
       2 | 07/13/2009 | 07/26/2009
       2 | 07/27/2009 | 07/31/2009
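
(For reference, I expand the schedule into periods with something along 
these lines -- a sketch using generate_series against the schema above, 
where the final period is clipped to the schedule's end_date:)

SELECT task_id,
       period_start AS start_date,
       LEAST(period_start + step - INTERVAL '1 day',
             sched_end::timestamp)::date AS end_date
FROM (SELECT s.task_id,
             s.end_date AS sched_end,
             COALESCE(s.periodicity,
                      (s.end_date - s.start_date + 1) * INTERVAL '1 day') AS step,
             generate_series(s.start_date::timestamp,
                             s.end_date::timestamp,
                             COALESCE(s.periodicity,
                                      (s.end_date - s.start_date + 1) * INTERVAL '1 day')
                            )::date AS period_start
      FROM schedule s) AS periods
ORDER BY task_id, start_date;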


One of my objectives is to allow some flexibility in being able to change 
or update a schedule. For example: "beginning on 7/1/09, task 2 needs to 
be done monthly." I don't know whether this is best handled by updating an 
existing schedule or by superseding old schedules with new ones. 
Either way, it seems like things could get a little hairy in terms of 
re-calculating a schedule's periods and maintaining relationships to 
completed work.
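
As a rough sketch, superseding might look something like this (the new end 
date here is just illustrative):

BEGIN;

-- close out the existing schedule as of 6/30/09
UPDATE schedule
SET end_date = DATE '2009-06-30'
WHERE schedule_id = 2;

-- task 2 becomes monthly beginning 7/1/09
INSERT INTO schedule (task_id, start_date, end_date, periodicity)
VALUES (2, DATE '2009-07-01', DATE '2009-12-31', INTERVAL '1 month');

COMMIT;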


Which brings me to my next problem: how to properly relate work efforts to 
a schedule. Ideally, I'd like to accomplish the following:


1. Given a work effort, determine which period of the schedule it applies 
to.


2. Given a schedule and some work efforts, determine if/which periods of 
the schedule have not had work done.


3. Enforce that work is completed within the timeframe of a schedule, or 
more specifically, within a specific period of the schedule.


4. Enforce that work is done in order -- i.e., work needs to be done such 
that the periods of a schedule are completed sequentially.
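
For objectives 1 and 2, something like this seems workable once the periods 
are expanded as above (a sketch only -- schedule_period here is a 
hypothetical view wrapping the period-expansion query):

-- 1: find the period a given work effort falls into
SELECT p.task_id, p.start_date, p.end_date
FROM work_effort w
JOIN schedule_period p
  ON p.task_id = w.task_id
 AND w.completion_date BETWEEN p.start_date AND p.end_date
WHERE w.work_effort_id = 1;

-- 2: find periods that have had no work done
SELECT p.task_id, p.start_date, p.end_date
FROM schedule_period p
LEFT JOIN work_effort w
  ON w.task_id = p.task_id
 AND w.completion_date BETWEEN p.start_date AND p.end_date
WHERE w.work_effort_id IS NULL;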


I'm hesitant to pre-generate work efforts based off a schedule, since 
changing a schedule means I'll have to regenerate the work efforts, not to 
mention that scheduled work might require multiple work efforts to 
complete. So I'd like to be able to enter work as it's completed, and 
then enforce/validate that it's the _right_ work being done.


In my (very limited) experience, I've found that a properly designed 
database makes the application side of things almost mind-numbingly easy 
to implement. But everything I've come up with seems like it would 
require some hideous triggers and/or lots of application code to 
implement. I don't really see a nice clean solution here, which makes me 
think I'm missing something.


If anyone has suggestions or some experience they could offer with this, 
I'd greatly appreciate it.


Thanks!

Karl Nack

Futurity, Inc.
773-506-2007



[GENERAL] Implementing "thick"/"fat" databases

2011-07-22 Thread Karl Nack
I've been following a few blogs
(http://database-programmer.blogspot.com/,
http://thehelsinkideclaration.blogspot.com/) that make a very compelling
argument, in my opinion, to move as much business/transactional logic as
possible into the database, so that client applications become little
more than moving data into and out of the database using a well-defined
API, most commonly (but not necessarily) through the use of stored
procedures. Although there seems to be a lot of discussion out there of
the reasons why one might want to do this, I'm really at a loss for
finding good, concrete examples of how to do it. Consequently, I'm
hoping that somebody can share their experience(s), or point me to some
examples, of doing this with PostgreSQL. I'd consider myself fairly
well-versed in using the various features of PostgreSQL to enforce data
integrity, but much less so for implementing transactional logic.

To focus on a more concrete example, let's consider adding a financial
transaction to the database. The "traditional" way to do this, with the
business logic in the application layer, leaves us with two steps:
insert the transaction "header", then insert the line items:

BEGIN;

INSERT INTO transaction (id, date, description)
VALUES (1, CURRENT_DATE, 'Transaction 1');

INSERT INTO line_item (transaction_id, account_id, amount)
VALUES (1, 1, 50), (1, 2, -50);

END;


Now if we start moving this logic to the database, we'd have something
like:

BEGIN;
SELECT create_transaction(1, current_date, 'Transaction 1');
SELECT create_line_item(1, 1, 50);
SELECT create_line_item(1, 2, -50);
END;


But we've actually taken a step back, since we're making a round-trip to
the database for each line item. That could be resolved by doing:

BEGIN;

SELECT create_transaction(1, current_date, 'Transaction 1');

SELECT create_line_item(transaction_id, account_id, amount)
FROM (VALUES (1, 1, 50), (1, 2, -50))
AS line_item (transaction_id, account_id, amount);

END;


Better, but still not good, since we're invoking the function for each
individual line item, which ultimately means separate INSERTs for each
one. What we'd want is something like:

BEGIN;
SELECT create_transaction(1, current_date, 'Transaction 1');
SELECT create_line_items(((1, 1, 50), (1, 2, -50)));
END;


But this still falls short, since we're still basically managing the
transaction in the application layer. The holy grail, so to speak, would
be:

SELECT create_transaction(1, current_date, 'Transaction 1', ((1, 50),
(2, -50)));


Perhaps I just need to spend more time digging through the
documentation, but I really have no idea how to do something like this,
or if it's even possible. I'm really hoping someone can provide an
example, point me to some resources, or even just share their real-world
experience of doing something like this. It would be very much
appreciated.

Thanks.


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007



Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-23 Thread Karl Nack
> > Now, if you are doing double-entry bookkeeping this doesn't provide
> > enough consistency, IMO.  You can't check inside the function to
> > ensure that the transaction is balanced.
> 
> One option to consider is restricting final tables but making staging
> tables available.

I would implement this using triggers.

First, add a boolean "is_balanced" column to the transaction table (called
txn below), along with the following trigger:

create or replace function check_txn_balance()
returns trigger
language plpgsql
as $$
declare
    _amt numeric;
begin
    -- updates that leave the transaction marked balanced (like the one
    -- issued below) need no re-check
    if 'UPDATE' = TG_OP and new.is_balanced then
        return null;
    end if;

    select sum(amt)
    into _amt
    from line_item
    where txn_id = new.id;

    if _amt <> 0 then
        raise exception 'unbalanced transaction';
    end if;

    update txn
    set is_balanced = true
    where id = new.id;

    return null;
end;
$$;

create constraint trigger check_txn_balance
after insert or update on txn
deferrable initially deferred
for each row execute procedure check_txn_balance();


Then, whenever we add, remove, or update a line item, unbalance the
parent transaction, which triggers the balance check:

create or replace function unbalance_txn()
returns trigger
language plpgsql
as $$
begin
    -- an update that leaves the parent and amount unchanged can't
    -- unbalance anything
    if 'UPDATE' = TG_OP then
        if (new.txn_id, new.amt) = (old.txn_id, old.amt) then
            return null;
        end if;
    end if;

    if TG_OP in ('INSERT', 'UPDATE') then
        update txn
        set is_balanced = false
        where (id, is_balanced) = (new.txn_id, true);
    end if;

    if TG_OP in ('DELETE', 'UPDATE') then
        update txn
        set is_balanced = false
        where (id, is_balanced) = (old.txn_id, true);
    end if;

    return null;
end;
$$;

create trigger unbalance_txn
after insert or delete or update on line_item
for each row execute procedure unbalance_txn();
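
Usage would then look something like this (an untested sketch, assuming txn 
and line_item tables with the columns referenced above). Since the 
constraint trigger is deferred, the balance is only checked at commit, so 
the line items can be entered after the transaction row without tripping it:

BEGIN;

INSERT INTO txn (id, date, description)
VALUES (1, CURRENT_DATE, 'Transaction 1');

INSERT INTO line_item (txn_id, account_id, amt)
VALUES (1, 1, 50),
       (1, 2, -50);

COMMIT;  -- the deferred check_txn_balance trigger fires here and raises
         -- 'unbalanced transaction' if the amounts don't sum to zero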


At least, this seems to me like a fairly efficient and foolproof way to
do it.


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007



Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-23 Thread Karl Nack
>  For example, you could use XML to describe the
> data (I'm no fan of XML, but it does suit a need here and allows
> validation of the provided data), especially as Postgres has XML parsing
> functionality.

Would you go so far as to suggest making the entire parameter an XML
statement, something like:
CREATE FUNCTION create_transaction (txn xml) ...

I don't know; it seems like by converting everything to and from XML we
really haven't made things any better, just added the complexity of an
intermediate technology. Also, I'm not a fan of XML either, and for the
most part have managed to steer relatively clear of it so far. Getting
pulled into that black hole is something I'd rather avoid, if possible.


> Or you could use a more sophisticated procedural language (plpython or
> plphp, for example) that's capable of marshalling and unmarshalling data
> structures to strings and vice versa (eg. '{foo:1,bar:2}').

I haven't looked into the other procedural languages besides PL/pgSQL,
although I am familiar with Python in general. Perhaps that may be the
way to go ...


> You would still have a SELECT statement that INSERTs data, which is
> semantically a bad thing to do IMHO.

True, although others have suggested always returning something useful
from the function. For example, it's more likely the transaction id
would be assigned using a sequence inside the database. The function
could then return that id after successfully creating the transaction.
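
As a sketch (assuming the id column of the txn table is backed by a 
sequence; names are illustrative):

CREATE OR REPLACE FUNCTION create_transaction(_date date, _description text)
RETURNS int
LANGUAGE plpgsql
AS $$
DECLARE
    _id int;
BEGIN
    INSERT INTO txn (date, description)
    VALUES (_date, _description)
    RETURNING id INTO _id;

    RETURN _id;  -- the caller gets the sequence-assigned id back
END;
$$;

SELECT create_transaction(current_date, 'Transaction 1');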


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007



Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-23 Thread Karl Nack
> In LedgerSMB, we take this a step further by making the procedures
> into discoverable interfaces, so the application logic itself is a
> sort of thin glue between a UI layer and the database procedure layer.
>  One thing I would suggest is to try to keep API calls as atomic as
> possible.  You want to enforce consistency and so you need to have all
> relevant inputs passed to the function.  See below for a suggested
> change to your API.

Would you say LedgerSMB follows the thick/fat database principle? If so,
I shall have to spend some time with your source code. I'd be very
curious to see how you handle this.

> > BEGIN;
> > SELECT create_transaction(1, current_date, 'Transaction 1');
> > SELECT create_line_item(1, 1, 50);
> > SELECT create_line_item(1, 2, -50);
> > END;
> 
> Now, if you are doing double-entry bookkeeping this doesn't provide
> enough consistency, IMO.  You can't check inside the function to
> ensure that the transaction is balanced.  

Yes, but I'd implement the constraint "all transactions must balance" as
a trigger that fires when the transaction is complete. This would
enforce data integrity regardless of whether or not the database API is
used, which I think is also important.


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007



Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-23 Thread Karl Nack
> > SELECT create_transaction(1, current_date, 'Transaction 1', ((1,
> > 50), (2, -50)));
>
> Well, not quite, because specifying the number "50" twice would be
> ridiculous for such a non-generic function; you can calculate the "-
> 50" from it in the function.

Not if there were more than two line-items per transaction. A paycheck
is a good example, where generally one or more income accounts are
credited and multiple tax accounts as well as one or more asset accounts
are debited. Ensuring that all the line-items add up to 0 would be one
of the data integrity rules implemented in the database (though most
likely checked in the application layer as well).

> A general rule of thumb, however you would design a routine in a
> normal programming language, try to do it that way in PL/PgSQL,
> assuming that PL/PgSQL is a competent language, and then tweak to
> match what you actually can do.

In the language I'm most familiar with, PHP, I could do this with an
associative array:

$transaction = array(
    'id'          => 1,
    'date'        => date('Y-m-d'),
    'description' => 'Transaction 1',
    'line_items'  => array(
        array('account_id' => 1, 'amount' => 50),
        array('account_id' => 2, 'amount' => -50),
    ),
);

From which I can easily build the appropriate SQL statements. This would
be very similar in Python. I wonder if this would be achievable in
PL/pgSQL, maybe through the use of composite types and/or domains?
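
Something like this, perhaps (completely untested; the type definition and 
call syntax are just my guess at how it might look):

CREATE TYPE line_item_input AS (
    account_id  int,
    amount      numeric
);

-- the whole transaction could then go over in one call:
SELECT create_transaction(
    1,
    current_date,
    'Transaction 1',
    ARRAY[ROW(1, 50)::line_item_input,
          ROW(2, -50)::line_item_input]
);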


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007



Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Karl Nack
> The current svn trunk (to be 1.3) does.

So how far do you take this? I've been playing around with plpgsql a bit
and am pretty encouraged by what I'm discovering; now I'm at a point
where I'm thinking, "how far do/can I go with this?"

Probably the best example is input validation. Constraints and triggers
on the database will (or at least should) prevent bad data from being
added to the database, but the UI generally needs to provide more
informative messages than errors thrown by the database, and provide
error messages for every invalid field, whereas the database will
fail/stop on the first error. Consequently, I find that much of the data
logic ends up being duplicated outside of the database to enhance the
user experience. Might there be a way to move these validation routines
into the database as well, and unify all the data logic into one place?

> > Yes, but I'd implement the constraint "all transactions must balance" as
> > a trigger that fires when the transaction is complete. This would
> > enforce data integrity regardless of whether or not the database API is
> > used, which I think is also important.
> 
> That's problematic to do in PostgreSQL because statement-level
> triggers don't have access to statement args, and I don't believe they
> can be deferred.

In another email I did provide a simple example of how I might implement
this; I don't know if you saw it. Of course, that assumes your database
isn't designed by  :-)


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007




Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Karl Nack
> I wonder which other languages have first class support for these areas
> of Pg?

It'd be nice if PHP could get there. :p Maybe it's time to look at some
of these other languages. Or finally learn C and try hacking on the
extension myself.


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007




Re: [GENERAL] Implementing "thick"/"fat" databases

2011-07-27 Thread Karl Nack
> The best option is to use exceptions to communicate to the application
> what went wrong and then allow the application to handle those
> exceptions in many cases.  In other cases, the application may need to
> know which inputs are mandatory.

So other than sanitizing input and making sure it's an appropriate data
type, are you relying solely on the database for all your input
validation?

I guess a good generic example might be setting up a new user account,
which might require a username, password, real name, and birth date. We
might also put a reasonable constraint that, at the very least, the
birth date cannot be greater than the current date. Now if the user
submitted a blank form, ideally the page would come back with four
errors, one for each field that was left blank. If you submitted this
basically as-is to the database, it'd arbitrarily fail on the first
column that didn't meet its NOT NULL constraint, and that would be the
only error sent back to the client. So yes, this would work, but in
theory it could take four or five submissions before every error was
identified and the user notified.
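
One thought I've had is to have a function in the database collect all of 
the failures rather than stopping at the first one -- just a sketch, with 
made-up names:

CREATE OR REPLACE FUNCTION validate_new_user(
    _username   text,
    _password   text,
    _real_name  text,
    _birth_date date
)
RETURNS text[]
LANGUAGE plpgsql
AS $$
DECLARE
    _errors text[] := '{}';
BEGIN
    IF _username IS NULL OR _username = '' THEN
        _errors := array_append(_errors, 'username is required');
    END IF;
    IF _password IS NULL OR _password = '' THEN
        _errors := array_append(_errors, 'password is required');
    END IF;
    IF _real_name IS NULL OR _real_name = '' THEN
        _errors := array_append(_errors, 'real name is required');
    END IF;
    IF _birth_date IS NULL THEN
        _errors := array_append(_errors, 'birth date is required');
    ELSIF _birth_date > current_date THEN
        _errors := array_append(_errors, 'birth date cannot be in the future');
    END IF;

    RETURN _errors;  -- an empty array means the input passed
END;
$$;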


> > In another email I did provide a simple example of how I might implement
> > this; I don't know if you saw it. Of course, that assumes your database
> > isn't designed by  :-)
> 
> I didn't see it even looking back (I saw the initial attempt and the
> PHP sample).

Here's a link to the archived message:
http://archives.postgresql.org/pgsql-general/2011-07/msg00631.php

Feel free to comment/praise/criticize! :-)

> SELECT create_transaction(1, current_date, 'Transaction 1', '{(1, 50),
> (2, -30), (3, -20)}'); as well since we are now talking about arrays of
> records.
> 
> But this is a pain to code to/from SQL in a robust way.  Good db
> drivers sometimes handle this automatically though.

Yes, I've coded exactly this with a plpgsql function and have been
mostly pleased by how easy it is. Unfortunately, at least with PHP, it's
not so straightforward to format user input into an SQL statement that
uses arrays and composite types. It's even worse going the other way --
just Google how to convert SQL arrays into PHP arrays. :-(
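
Roughly along these lines, simplified (this assumes a composite type like 
the line_item_input I mentioned before, and the txn/line_item tables from 
the trigger example -- a sketch, not the actual code):

CREATE OR REPLACE FUNCTION create_transaction(
    _id          int,
    _date        date,
    _description text,
    _line_items  line_item_input[]
)
RETURNS int
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO txn (id, date, description)
    VALUES (_id, _date, _description);

    -- one set-based insert covers all of the line items
    INSERT INTO line_item (txn_id, account_id, amt)
    SELECT _id, account_id, amount
    FROM unnest(_line_items);

    RETURN _id;  -- the deferred balance check still runs at commit
END;
$$;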


Karl Nack

Futurity, Inc
5121 N Ravenswood Ave
Chicago, IL 60640
773-506-2007




[GENERAL] How best to implement a multi-table constraint?

2008-10-20 Thread Karl Nack
Hello all,

I'm a bit of a newb designing a database to hold landcover information for
properties in a city. Here's some simple sample data:

property:
property_name*, property_area
------------------------------
sample house, 2500


property_landcover:
property_name*, landcover_name*, landcover_area
------------------------------------------------
sample house, building, 1000
sample house, grass, 1000
sample house, concrete, 500


Now, I need to check that the sum of landcover_area for a property matches
the property_area.

It seems like I have three obvious options:

1. A constraint trigger that sums up landcover area and compares it to the
property area (roughly sketched after option 3 below).

Downside: The trigger will run for every row that's updated in these two
tables, although it only needs to run once for each property.


2. A statement-level trigger that does the same thing as #1.

Downside: Since I don't have access to the updated rows, I'll have to
check the entire property table against the entire property_landcover
table. It seems like this could get expensive if either of these tables
gets very large.


3. Use a 3rd table to hold the total landcover area for each property. Use
row-level triggers to keep this 3rd table updated. Use a statement-level
trigger (or table constraint) to ensure the total landcover area matches
the property area.

Downside: Although I avoid redundant checks, my understanding is that
UPDATE is a fairly expensive operation, so it might not actually perform
any better.
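
For option 1, here's roughly what I have in mind -- an untested sketch 
against the tables above, assuming text and numeric column types:

CREATE OR REPLACE FUNCTION check_landcover_area()
RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE
    _property  text;
    _total     numeric;
    _expected  numeric;
BEGIN
    _property := CASE WHEN TG_OP = 'DELETE' THEN old.property_name
                      ELSE new.property_name END;

    SELECT property_area INTO _expected
    FROM property
    WHERE property_name = _property;

    SELECT sum(landcover_area) INTO _total
    FROM property_landcover
    WHERE property_name = _property;

    IF COALESCE(_total, 0) <> COALESCE(_expected, 0) THEN
        RAISE EXCEPTION 'landcover area (%) does not match property area (%) for %',
            _total, _expected, _property;
    END IF;

    RETURN NULL;
END;
$$;

-- a similar trigger on property would be needed if property_area can change
CREATE CONSTRAINT TRIGGER check_landcover_area
AFTER INSERT OR UPDATE OR DELETE ON property_landcover
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW EXECUTE PROCEDURE check_landcover_area();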


Although my tables are small right now, they may potentially have to hold
an entire city's worth of properties, so I'm interested in finding a
solution that scales.

Can anyone offer some feedback or suggestions on which of these options to
use? Or perhaps even another solution that hasn't occurred to me?

Thanks!

-Karl



Re: [GENERAL] How best to implement a multi-table constraint?

2008-10-21 Thread Karl Nack
I agree, but it seems to me that property_area isn't a strictly derived
value.

It's possible to change the makeup of a property's landcover -- for
example, remove some concrete and plant more grass, or add an extension to
the building -- but the overall property area should remain constant. I
feel like I should probably include some kind of constraint to enforce
this.

Am I needlessly over-complicating this?

-Karl


- Original Message -
From: [EMAIL PROTECTED]
Sent: Tue, October 21, 2008 7:31
Subject: Re: [GENERAL] How best to implement a multi-table constraint?


Why do you need to store the total area at all (property_area)? This
value can easily be calculated with a group by query.

On Mon, Oct 20, 2008 at 10:56 PM, Karl Nack <[EMAIL PROTECTED]> wrote:
> Hello all,
>
> I'm a bit of a newb designing a database to hold landcover information for
> properties in a city. Here's some simple sample data:
>
> property:
> property_name*, property_area
> -
> sample house, 2500
>
>
> property_landcover:
> property_name*, landcover_name*, landcover_area
> ---
> sample house, building, 1000
> sample house, grass, 1000
> sample house, concrete, 500
>
>
> Now, I need to check that the sum of landcover_area for a property matches
> the property_area.
>
> It seems like I have three obvious options:
>
> 1. A constraint trigger that sums up landcover area and compares it to the
> property area.
>
> Downside: The trigger will run for every row that's updated in these two
> tables, although it only needs to run once for each property.
>
>
> 2. A statement-level trigger that does the same thing as #1.
>
> Downside: Since I don't have access to the updated rows, I'll have to
> check the entire property table against the entire property_landcover
> table. It seems like this could get expensive if either of these tables
> gets very large.
>
>
> 3. Use a 3rd table to hold the total landcover area for each property. Use
> row-level triggers to keep this 3rd table updated. Use a statement-level
> trigger (or table constraint) to ensure the total landcover area matches
> the property area.
>
> Downside: Although I avoid redundant checks, my understanding is that
> UPDATE is a fairly expensive operation, so it might not actually perform
> any better.
>
>
> Although my tables are small right now, they may potentially have to hold
> an entire city's worth of properties, so I'm interested in finding a
> solution that scales.
>
> Can anyone offer some feedback or suggestions on which of these options to
> use? Or perhaps even another solution that hasn't occurred to me?
>
> Thanks!
>
> -Karl
>


- End of original message -





[GENERAL] After each row trigger NOT seeing data changes?

2009-05-21 Thread Karl Nack
I'm attempting to create an inventory of trees. Here's a simplified, 
sample table:


CREATE TABLE tree (
    tree_id            SERIAL PRIMARY KEY,
    tree_species_id    INT NOT NULL REFERENCES tree_species,
    tree_location      POINT NOT NULL,
    tree_install_date  DATE NOT NULL,
    tree_removal_date  DATE,
    CHECK (tree_removal_date > tree_install_date)
);


I need to ensure that no two trees are located in the same place at the 
same time:


CREATE OR REPLACE FUNCTION check_unique_tree()
RETURNS trigger
AS $$
DECLARE
    num_trees INT;
BEGIN
    -- just to see what's going on
    SELECT COUNT(tree_id) INTO num_trees FROM tree;
    RAISE NOTICE '% % of new tree %, there are % trees.',
        TG_WHEN, TG_OP, NEW, num_trees;

    PERFORM tree_id
    FROM tree
    WHERE
        -- first condition prevents updated tree from matching with itself
        NEW.tree_id <> tree_id
        AND NEW.tree_location ~= tree_location
        AND NEW.tree_install_date <
            COALESCE(tree_removal_date, timestamp 'infinity')
        AND COALESCE(NEW.tree_removal_date, timestamp 'infinity') >
            tree_install_date;

    IF FOUND THEN
        RAISE EXCEPTION 'Conflicting trees';
    END IF;

    RETURN NULL;
END;
$$ LANGUAGE plpgsql STABLE;


CREATE TRIGGER check_unique_tree
AFTER INSERT OR UPDATE ON tree
FOR EACH ROW EXECUTE PROCEDURE check_unique_tree();


And yet, I'm able to do this:

=> INSERT INTO tree (tree_species_id, tree_location, tree_install_date)
-> VALUES
-> (1, '(1,1)', 'today'),
-> (1, '(1,1)', 'today');
NOTICE:  AFTER INSERT of new tree (20,1,"(1,1)",2009-05-21,), there are 0 
trees.
NOTICE:  AFTER INSERT of new tree (21,1,"(1,1)",2009-05-21,), there are 0 
trees.

INSERT 0 2


As a sanity check (on a fresh, truncated table):

=> INSERT INTO tree (tree_species_id, tree_location, tree_install_date)
-> VALUES (1, '(1,1)', 'today');
NOTICE:  AFTER INSERT of new tree (22,1,"(1,1)",2009-05-21,), there are 0 
trees.

INSERT 0 1

=> INSERT INTO tree (tree_species_id, tree_location, tree_install_date)
-> VALUES (1, '(1,1)', 'today');
NOTICE:  AFTER INSERT of new tree (23,1,"(1,1)",2009-05-21,), there are 1 
trees.

ERROR:  Conflicting trees


I notice the row count does not reflect the newly-inserted row, which 
suggests that the trigger is not seeing changes made to the table. This 
seems to be exactly the opposite of what's in the manual:

http://www.postgresql.org/docs/8.3/interactive/trigger-datachanges.html
http://www.postgresql.org/docs/8.3/interactive/trigger-example.html

Am I doing something wrong here? Have I misunderstood the manual? Have I 
found a bug? Any help is greatly appreciated, as this check is pretty key 
to what I'm trying to do.


Thanks.

Karl Nack

Futurity, Inc.
773-506-2007



Re: [GENERAL] After each row trigger NOT seeing data changes?

2009-05-21 Thread Karl Nack

> The reason is that you've declared the function STABLE


Yes, that did it!


> I'm not sure if the cited portions of the manual ought to contain notes
> about this or not.  It seems a bit off-topic for them, but if other
> people have been bit by this, then maybe ... comments anyone?


Perhaps this should be documented on the page describing CREATE FUNCTION 
(which is what I referenced when I wrote the function)?


In particular, the wording describing IMMUTABLE, STABLE, and VOLATILE doesn't 
seem entirely accurate:


"STABLE ... is the appropriate selection for functions whose results 
depend on database lookups, parameter variables (such as the current time 
zone), etc."


Apparently that's not the case for AFTER triggers that need to see rows 
just modified in the table they fire on.


Regardless, thank you very much for the help!


Karl Nack

Futurity, Inc.
773-506-2007


