Re: [GENERAL] How to intelligently work with views that depend on other views

2015-08-09 Thread Berend Tober

Melvin Davidson wrote:

The best solution, IMHO, is don't create views that depend on other
views. ...

Much better to just make each view a stand alone.


Seconding Mr. Davidson's advice.

But, given that you are in the situation, here is a workable alternative:




On Thu, Aug 6, 2015 at 4:44 PM, W. Matthew Wilson <m...@tplus1.com> wrote:

I have a bunch of views, and some views use data from other views.

...
Several times now, as I got further into the project, I've changed how
I make some views and I've had to redefine not just that view, but all
the ones that depend on it.




1. Dump the data base:


  pg_dump mydatabase -U postgres -Fc > mydatabase.dump


2. Create a list of all data base objects from the dump file


  pg_restore -l mydatabase.dump > mydatabase.list


3. Edit the list file and delete all rows except those for your views A, 
B, and C. Make sure you leave the lines corresponding to those views in 
the order in which they appear in the list file, as the following step 
will then have commands in the right dependency order.



4. Generate a SQL command file, based on the dump and the edited list file:


   pg_restore -C -L mydatabase.list mydatabase.dump > sql


5. Edit your view definitions in the SQL command file.


6. Run the SQL command file:


  psql mydatabase -U postgres -f sql








Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-25 Thread Berend Tober

Melvin Davidson wrote:

I would not consider the general use of natural primary keys to be
best practice.
Let's assume your account_id field is used as a foreign key in a
dozen other tables.
1) What happens if someone mis-types the account-id?
   To correct that, you also need to correct the FK field in the
   other dozen tables.
2) What happens when your company starts a new project (or buys a ...



... ON UPDATE CASCADE ?
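For what it's worth, a minimal sketch (hypothetical tables, not from the original post) of how
ON UPDATE CASCADE handles exactly that correction scenario:

  CREATE TABLE account (
      account_id text PRIMARY KEY
  );

  CREATE TABLE invoice (
      invoice_id serial PRIMARY KEY,
      account_id text NOT NULL
          REFERENCES account (account_id)
          ON UPDATE CASCADE
  );

  -- Correcting the mis-typed key in one place propagates to every
  -- referencing table automatically:
  UPDATE account SET account_id = 'ACME-001' WHERE account_id = 'AMCE-001';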





Re: [GENERAL] Using a VIEW as a temporary mechanism for renaming a table

2016-06-08 Thread Berend Tober

Ben Buckman wrote:

Hello,
I would like to rename a table with ~35k rows (on pgsql 9.4), let's say
from `oldthings` to `newthings`.
Our application is actively reading from and writing to this table, and
the code will break if the table name suddenly changes at runtime. So I
can't simply run an `ALTER TABLE oldthings RENAME TO newthings`, unless
we take downtime, which we'd prefer not to do. (I'd also prefer to avoid
a data migration from one table to another, which would require
dual-writes or some other way to handle data written during the transition.)

It seems that a reasonable approach to do this without downtime, would
be to use a temporary VIEW. We can `CREATE VIEW newthings AS SELECT *
FROM oldthings;`. Views in pg9.4 that are backed by a single table
support writes. So my plan is like this:

1. Create the view, essentially as an alias to the table.
2. In the code, change all references from the old name to the new name.
The code would "think" it's using a renamed table, but would really be
using a view.
   (At this point, I expect that all basic CRUD operations on the view
should behave as if they were on the table, and that the added
performance impact would be negligible.)
3. In a transaction, drop the view and rename the table, so `newthings`
is now the original table and `oldthings` no longer exists. (In my
testing, this operation took <10ms.)
   (When this is done, the view will have only existed and been used by
the application for a few minutes.)

What are people's thoughts on this approach? Is there a flaw or
potential danger that I should be aware of? Is there a simpler approach
I should consider instead?


I would totally do it this way ... and after creating the view, I'd 
probably leave it as the normal interface. In fact, I've adopted a 
practice of utilizing views as the user interface generally and not 
exposing the actual tables at all.


As you may already realize, but I'll point out for completeness: in more 
complicated situations (i.e., when the view is not just presenting a single 
table as in your current case, but represents a multi-table join), you can 
use triggers to intercept DML on the view and implement logic to interact 
with the multiple underlying tables for inserts and updates.


Additionally, if you don't want to modify the application, consider 
creating the view with the same original table name but in a separate 
schema, and setting the search_path so that the view is found before the 
table. Then you can rename the table, simultaneously redefining the view 
to point to the new table.
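A rough sketch of the sequence discussed above, using the names from the original post (untested here):

  -- Step 1: create the alias view.
  CREATE VIEW newthings AS SELECT * FROM oldthings;

  -- Step 2: repoint the application at "newthings".

  -- Step 3: swap the view for the real table in one short transaction.
  BEGIN;
  DROP VIEW newthings;
  ALTER TABLE oldthings RENAME TO newthings;
  COMMIT;

And, for the alternative in the last paragraph, the schema/search_path variant might look like
this (schema and database names here are made up):

  CREATE SCHEMA api;
  CREATE VIEW api.oldthings AS SELECT * FROM public.oldthings;
  ALTER DATABASE mydb SET search_path = api, public;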



-- B







Re: [GENERAL] Running query without trigger?

2016-07-09 Thread Berend Tober

haman...@t-online.de wrote:


Hi,

a table is associated with a trigger for normal use.
An admin (someone with ALTER privilege) can disable the trigger, run some bulk
update, and then re-enable it. This means, however, that normal user activity
has to be locked out.

There are two possible scenarios: the bulk update would not cause trigger
activity at all, because of the values and columns involved; or the bulk
update is followed by another bulk transaction that is equivalent to trigger
invocations per row.
At least in the first case, running this particular query without firing
the trigger, but with normal activity still going on, would be really great.
Is there a way to achieve this?



For UPDATE events, it is possible to specify a list of columns using this 
syntax:

UPDATE OF column_name1 [, column_name2 ... ]

The trigger will only fire if at least one of the listed columns is mentioned as a target of the 
UPDATE command.



So, if you use that optional syntax then, as you described it, the columns in your bulk update 
statement would not be included in the "OF" list and hence would not fire this update trigger. 
Meanwhile, normal updates, which presumably do involve columns in the "OF" list, would fire the 
trigger as required.
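A small sketch of that column-list syntax (hypothetical table, columns, and trigger, not from the
original question): the trigger fires for updates that target "price", but a bulk update touching
only "notes" bypasses it.

  CREATE TABLE item (
      item_id serial PRIMARY KEY,
      price   numeric,
      notes   text
  );

  CREATE FUNCTION item_price_audit() RETURNS trigger AS $$
  BEGIN
      RAISE NOTICE 'price changed for item %', NEW.item_id;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER item_price_audit
      AFTER UPDATE OF price ON item
      FOR EACH ROW
      EXECUTE PROCEDURE item_price_audit();

  UPDATE item SET notes = 'bulk edit';   -- does not fire the trigger
  UPDATE item SET price = price * 1.1;   -- fires the trigger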









Re: [GENERAL] MediaWiki + PostgreSQL is not ready for production?

2016-07-19 Thread Berend Tober

John R Pierce wrote:

On 7/18/2016 9:14 PM, Tatsuo Ishii wrote:

I found following comment for using PostgreSQL with MediaWiki:

https://www.mediawiki.org/wiki/Compatibility#Database

"Anything other than MySQL or MariaDB is not recommended for
production use at this point."

This is a sad and disappointed statement for us. Should we help
MediaWiki community to enhance this?


the classic problem with any of these sorts of open source projects, while you 
can convert the core
system to postgres, there's a huge community of contributed plugins, and many 
of these authors have
zero interest in anything but their default database, mysql/mariadb.   I ran 
into this with Drupal,
Wordpress, a couple different forum projects. Drupal even tried to offer a 
database API so plugin
developers wouldn't touch SQL directly, but too many ignored it.






The quoted sentence is really more a statement about the Mediawiki product, NOT about Postgresql. The 
preceding half of the sentence makes that clear: "Support for any other database software ranges 
from dubious to stable..." It is not saying Postgresql is not ready for production, but that 
Mediawiki is not ready for production on Postgresql.


It's a wiki page. Could you edit the page and make the second sentence in that paragraph more 
explicit about what, exactly, it is that is not ready for production? Clearly, there is plenty of 
evidence that Postgresql is way ready for production use.


In fact, now that page is fixed: "Running MediaWiki on anything other than MySQL or MariaDB is not 
recommended for production use at this point."





Re: [GENERAL] Invoice Table Design

2016-11-29 Thread Berend Tober

rob wrote:

Hi Rich,

thanks for the response -- going from Mongo to Postgres does require the
kind of approach you suggest.

I suppose my question was a little bit more along the lines if anyone has
experience with designing payment / invoicing systems and any caveats they
may have encountered along the way.

I have stumbled upon table inheritance, available in 9.6, which I think might be helpful.



The closest thing I have done to what you are describing is to create invoices that had multiple 
component inputs, i.e., a section for time and labor, and a section for other direct expenses such 
as materials and travel. This system employed a model something like the three-table outline you 
suggested earlier, where there is a top-level table that defines invoices and supplies a primary key, 
and then the different types of component elements, like the labor, materials, and expense charges, 
each appear in different tables specific to the nature of those charges, with a foreign key reference 
to the invoice table. The invoice table primary key then provides a means to combine all the 
accumulated charges together for final invoice presentation. I did not use table inheritance.
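A bare-bones sketch of that kind of layout (table and column names are made up here, and most
columns are omitted): a header table supplies the key, and each kind of charge lives in its own
detail table referencing it.

  CREATE TABLE invoice (
      invoice_id   serial PRIMARY KEY,
      invoice_date date NOT NULL DEFAULT current_date
  );

  CREATE TABLE labor_charge (
      labor_charge_id serial PRIMARY KEY,
      invoice_id      integer NOT NULL REFERENCES invoice,
      hours           numeric,
      rate            numeric
  );

  CREATE TABLE expense_charge (
      expense_charge_id serial PRIMARY KEY,
      invoice_id        integer NOT NULL REFERENCES invoice,
      description       text,
      amount            numeric
  );

  -- Final presentation pulls the pieces together by invoice_id, e.g.:
  -- SELECT ... FROM invoice LEFT JOIN labor_charge USING (invoice_id) ...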


I looked at table inheritance years ago and ended-up running into some show-stopping problem related 
to indexes. Review this thread for context:



https://www.postgresql.org/message-id/flat/60243.216.238.112.88.1102080278.squirrel%40216.238.112.88#60243.216.238.112.88.1102080278.squirrel@216.238.112.88


I have not followed whether table inheritance has changed, however, so there might be something new 
others can help with. Me personally, I stick with the earlier sound advice of Melvin Davidson and 
Rich Shepard and normalize.


Another caveat relates to thinking about the response to invoices. Normally (or maybe naively), 
one would expect to issue an invoice and then receive a timely payment. In reality, one may or may 
not receive a timely payment. There might end up being multiple invoices outstanding. You may 
have to re-issue an invoice. And if you get a payment, it might be a partial payment, or a payment 
that applies to more than one invoice. So you might want to think about a way to track and apply 
payments and partial payments to one or more invoices.


-- B









Re: [GENERAL] Means to emulate global temporary table

2017-01-12 Thread Berend Tober

Karsten Hilbert wrote:

On Wed, Jan 11, 2017 at 05:54:11PM -0700, David G. Johnston wrote:


I don't see where "call a setup function immediately after connecting"


Sounds like a "login trigger", more generally an ON CONNECT
event trigger, which we don't have at the moment as far as I
know.

One of the main arguments against it was that a failing
trigger function might prevent all access while the suggested
solution to that (I think by Tom Lane) was to auto-disable ON
CONNECT triggers when starting up as --single.



I'll take some credit for a partial solution suggestion:


https://www.postgresql.org/message-id/47D7DACD.9000304%40ct.metrocast.net


and pass most of the credit back to you for a better one:


https://www.postgresql.org/message-id/20080312140850.GA3882%40merkur.hilbert.loc



While the notion of an ON CONNECT trigger seems interesting, the rest of that discussion thread had 
a lot of good points about challenges to successfully implementing this idea.


-- B






[GENERAL] Deletion Challenge

2015-12-08 Thread Berend Tober

/*

Deletion Challenge

I want to delete all but the most recent transaction, per person, from a
table that records a transaction history because at some point the
transaction history grows large enough to adversely affect performance,
and also becomes less relevant for retention.

I have devised a way to accomplish this, but it is a 'two-stage'
approach: that is, it requires two delete statements. I would like to
know if there is a way to do it in a single statement.

Bonus challenge: Same question, except preserving the most recent N, for
N > 1, rows for each person so that a short history is retained after
the deletion.

I have included below an annotated test case and my current solution for
the N = 1 case.

*/

DROP TABLE IF EXISTS cash_journal;


CREATE TABLE cash_journal (
click bigint NOT NULL,
cash_journal_id bigint NOT NULL,
fairian_id bigint NOT NULL,
debit double precision,
credit double precision,
balance real DEFAULT 0,
description text
);

COMMENT ON COLUMN cash_journal.click            IS 'Time of transaction.';
COMMENT ON COLUMN cash_journal.cash_journal_id  IS 'Sequence of transaction within current click.';
COMMENT ON COLUMN cash_journal.fairian_id       IS 'Fairian account effected.';
COMMENT ON COLUMN cash_journal.debit            IS 'Account balance increase amount.';
COMMENT ON COLUMN cash_journal.credit           IS 'Account balance decrease amount.';
COMMENT ON COLUMN cash_journal.balance          IS 'Account balance, per Fairian running total.';
COMMENT ON COLUMN cash_journal.description      IS 'Transaction description.';

/*

Below is some sample data, listed in the click/sequence order that the
data would actually be entered. That is, the 'click' column represents
advancing time, and within each click, transactions are sequenced by the
'cash_journal_id' column. Note there are some missing cash_journal_id
sequence numbers. This is an artifact of having presented here only
an illustrative sample. Generally, within each click, the sequence
would start at one and increment uniformly by one for each new row
in the same click, and then reset to one for the next click. The
missing increments in the sample data should not make any difference
in the solution.

The 'balance' column is a per-player running total, which is a
deliberate denormalization. It is calculated in a before insert trigger
by starting with the per-player previous balance, and then adding
the new row debit, if any, and subtracting the new row credit, if any.

Note, not all Fairians will have a transaction in every click, but any
number of Fairians may have multiple transactions in any click.

*/

copy cash_journal (click,cash_journal_id,fairian_id,debit,credit,balance,description) from stdin;
36	3	7	0	0	0	Initial cash balance
36	4	8	0	0	0	Initial cash balance
36	5	9	0	0	0	Initial cash balance
36	14	18	0	0	0	initial cash balance
37	5	7	9	\N	9	Ratified contract fa35e192121eab
37	7	8	8	\N	8	Ratified contract f1abd670358e03
37	9	9	7	\N	7	Ratified contract 1574bddb75c78a
411	1	25	0	0	0	Initial cash balance
411	2	25	1000	\N	1000	Issued bond 7719a1c782a1ba
412	1	7	5	\N	14	Sold food quantity 7 units.
412	2	25	\N	5	995	Bought food quantity 7 units.
413	1	25	\N	995	0	Redeemed bond 7719a1c782a1ba
\.


SELECT * FROM cash_journal order by fairian_id, click, cash_journal_id;

/*

The sample starting data is shown here in order by Fairian so that it is
perhaps easier to see what is happening for each player. Note that the
result of the deletion should be the last row for each player.

 click | cash_journal_id | fairian_id | debit | credit | balance |           description
-------+-----------------+------------+-------+--------+---------+----------------------------------
    36 |               3 |          7 |     0 |      0 |       0 | Initial cash balance
    37 |               5 |          7 |     9 |        |       9 | Ratified contract fa35e192121eab
   412 |               1 |          7 |     5 |        |      14 | Sold food quantity 7 units.
    36 |               4 |          8 |     0 |      0 |       0 | Initial cash balance
    37 |               7 |          8 |     8 |        |       8 | Ratified contract f1abd670358e03
    36 |               5 |          9 |     0 |      0 |       0 | Initial cash balance
    37 |               9 |          9 |     7 |        |       7 | Ratified contract 1574bddb75c78a
    36 |              14 |         18 |     0 |      0 |       0 | initial cash balance
   411 |               1 |         25 |     0 |      0 |       0 | Initial cash balance
   411 |

Re: [GENERAL] Deletion Challenge

2015-12-09 Thread Berend Tober

Steve Crawford wrote:

If I understand correctly, the value of "click" always advances and within a "click" the
"cash_journal_id" always advances - not necessarily by single steps - so within a fairian_id,
ordering by "click" plus "cash_journal_id" would return the records in order, from which you
want the most recent 5 for each fairian_id.

Typing without testing and ignoring performance optimizations, something along the lines of the
following should work and covers the "last 5" issue as well.

with stuff_to_delete as (
  select fairian_id, click, cash_journal_id,
         rank() over (partition by fairian_id
                      order by click desc, cash_journal_id desc) as howold
  from cash_journal)
delete from cash_journal
using stuff_to_delete
where
  cash_journal.fairian_id = stuff_to_delete.fairian_id
  and cash_journal.click = stuff_to_delete.click
  and cash_journal.cash_journal_id = stuff_to_delete.cash_journal_id
  and stuff_to_delete.howold > 5;



Assessing without testing, I like that. Thanks!

Although the above is not the exactly the form I was using, an earlier iteration of a related 
problem employed window functions. But as the data set grew performance suffered, so if deletes were 
not done on a regular, continuing basis in order to keep the historical data set approximately 
"small", the process execution time using a windowing scheme eventually exceeded the extent of my 
patience.


That "non-scalable" situation is actually what motivated the deliberate de-normalization (of 
retaining the "running balance" in a separate column) and the desire to delete old data. The 
original implementation calculated the running balance on-the-fly, employing windowing per 
fairian_id, and those tallies of the net balance entailed increasingly lengthy execution times as 
the number of rows increased, hence I was motivated to retain only a relatively constant-sized 
per-farian history, and I dismissed the use of windowing for the delete problem since it was so 
problematic for the running-balance-without-delete problem.


Thanks for knocking some sense into me!






Re: [GENERAL] Deletion Challenge

2015-12-09 Thread Berend Tober

Adrian Klaver wrote:

On 12/05/2015 08:08 AM, Berend Tober wrote:

/*

Deletion Challenge

I want to delete all but the most recent transaction, per person, from a
table that records a transaction history because at some point the
transaction history grows large enough to adversely affect performance,
and also becomes less relevant for retention.

...



test=> delete from cash_journal where ARRAY[click, cash_journal_id] NOT in 
(select max(ARRAY[click,cash_journal_id]) from cash_journal group by fairian_id);
DELETE 7

test=> SELECT * FROM cash_journal order by fairian_id, click, cash_journal_id;
 click | cash_journal_id | fairian_id | debit | credit | balance |           description
-------+-----------------+------------+-------+--------+---------+----------------------------------
   412 |               1 |          7 |     5 |        |      14 | Sold food quantity 7 units.
    37 |               7 |          8 |     8 |        |       8 | Ratified contract f1abd670358e03
    37 |               9 |          9 |     7 |        |       7 | Ratified contract 1574bddb75c78a
    36 |              14 |         18 |     0 |      0 |       0 | initial cash balance
   413 |               1 |         25 |       |    995 |       0 | Redeemed bond 7719a1c782a1ba
(5 rows)



Nice.

The idea of a NOT IN query had occurred to me briefly, but I failed to pursue it because at some 
point in the distant past I had gained the impression that NOT IN queries were not computationally 
efficient. During one round of testing I had like a million rows. I'll have to run some EXPLAIN 
query testing with a larger data sample for comparison. Thanks!
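For comparison when running that EXPLAIN testing, the same "keep only the most recent row per
fairian_id" delete can also be written as an EXISTS test for a newer row, which planners often
handle well (just a sketch, untested here):

  DELETE FROM cash_journal cj
  WHERE EXISTS (
      SELECT 1
        FROM cash_journal newer
       WHERE newer.fairian_id = cj.fairian_id
         AND (newer.click, newer.cash_journal_id) > (cj.click, cj.cash_journal_id)
  );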






Re: [GENERAL] Deletion Challenge

2015-12-10 Thread Berend Tober

Steve Crawford wrote:

The two general solutions are the "keep the last one" proposed by Adrian
and the "keep the last N" that I sent.

But it might be worth stepping back a bit. You said you are having
performance problems that you feel would be improved by removing only a
million rows which doesn't sound like that much to me. It's less than
half of what I *add* to just one of my tables every week and my database
is dwarfed by those of many of the participants on this list.

This suggests that there may be other issues such as tuning, indexing or
query optimization at play. Depending on your requirements, partitioning
might be useful. It wouldn't be last N but could easily be done to
partition by date-ranges which makes archiving and purging a low-cost
operation.

You might want to expand a bit on the core issue you are trying to solve.




I really appreciate the deep-dive.

I'm quite sure the performance issue is mostly a hardware limitation at 
this point. The application is in a developmental phase and running for 
test purposes on grossly inadequate hardware for production purposes ... 
think home laptop computer in one case.


The issue is that I'd like the application (that is, the data base and 
its stored procedures) to be robust enough to be a "long-running" 
application, i.e. one that doesn't suffer gradual performance 
degradation as time and the accumulated data increase. For the cash and 
similar journals, I'd like to retain some amount of history so that 
players can review "recent" transactions so as to understand and verify 
how the current balance was arrived at, but at some point let old 
transactions age-off and be deleted.


This question was sort of addressed at the "query tuning" aspect, and 
I'm confident that partitioning would help. But since this is just a 
game, retention of a full and auditable history is not really required: 
I have a lot of flexibility to determine what to keep and in fact am not 
exactly sure how much to keep. I know I need at least one row in order 
to retain the current balance, but I'm thinking something on the order 
of scores or hundreds, maybe a thousand transactions per player in each 
of several similar journals retained at any point in time would be 
sufficient.


This project is a game, btw, described at

https://github.com/bmtober/fairwinds

for those interested in the backstory.

I am eager to get some real-world experience with multiple players 
actually using the application and providing feedback, which is 
gradually happening by means of presentations I have made, and am scheduled 
to make, at local user groups. Eventually I want to raise some money to rent 
some virtual private server space and host it on a publicly-available 
site when obvious scalability issues like this are mitigated.







Re: [GENERAL] Deletion Challenge

2015-12-15 Thread Berend Tober

Jim Nasby wrote:

On 12/9/15 7:59 PM, Berend Tober wrote:

This project is a game, btw, described at


You might be interested in https://schemaverse.com/



Schemaverse looks somewhat interesting. Seems like it and Fairwinds 
share in common Postgresql as a foundation, but they are very different 
games: while they both have an aspect of conquest, one by battle, one by 
economic dominance, Fairwinds involves a different kind of strategy that 
entails a balance of cooperation and competition. Also, whereas in 
Schemaverse you get "free money" to build your space ship just by 
joining the game, a fundamental feature of Fairwinds is wealth creation 
... you have to make your own success out of nothing.





Re: [GENERAL] Code of Conduct

2016-01-11 Thread Berend Tober

Geoff Winkless wrote:

On 11 January 2016 at 20:13, Regina Obe  wrote:

While this is funny to some, I don't think it adds value to this conversation.  
I would consider it a derailment and not very helpful.

If I had a Coc to point at, I would point at the section  I feel you are 
violating.


If there were a CoC that would explicitly disallow occasional
lighthearted humour... I would most definitely remove myself

Your post, on the other hand, is clearly intended to censure a fellow
contributor ...


I think you missed the sarcasm. I thought your comment was great when I 
read it, and then I thought her retort was even better!!







Re: [GENERAL] WIP: CoC V5, etc., etc., etc., etc., ....

2016-01-19 Thread Berend Tober

Bill Moran wrote:

On Tue, 12 Jan 2016 22:10:43 -0500
Tom Lane  wrote:


Kevin Grittner  writes:

I'm not the greatest word-smith, but I'll attempt to rework Josh's
draft to something that seems more "natural" to me.


Minor (or not?) comment:



Whether or not it is a foregone conclusion that this community will 
adopt a CoC, it seems like a mailing list is not the place to do 
revision control. Can you people start a github project or something to 
develop your ideas and come back when you have something solid, please? 
This thread is creating a lot of spam.








Re: [GENERAL] WIP: CoC V5, etc., etc., etc., etc., ....

2016-01-19 Thread Berend Tober

Joshua D. Drake wrote:

On 01/13/2016 06:00 PM, Berend Tober wrote:


Whether or not it is a foregone conclusion that this community will
adopt a CoC, it seems like a mailing list is not the place to do
revision control. Can you people start a github project or something to
develop your ideas and come back when you have something solid, please?
This thread is creating a lot of spam.


The community discussion around this is not spam.


Yes, roger that. Bad choice of words. Surprising I didn't get CoC'd more 
severely.






Re: [GENERAL] A motion

2016-01-27 Thread Berend Tober

Adrian Klaver wrote:

Motion:

The Coc  discussion be moved to its own list where those who care can
argue to their hearts content and leave the rest of us to deal with
technical questions. Upon a decision on said list the result be posted
to the Postgres web site for consideration.


Been suggested already, and rejected:

http://www.postgresql.org/message-id/56970135.6060...@computer.org








Re: [GENERAL] Multi-Table Insert/Update Strategy - Use Functions/Procedures?

2016-01-27 Thread Berend Tober

Don Parris wrote:

I have several tables...
and want db users to be able to add or update ...
... in one step, and get all the information
into the correct tables.

I think I am ok with setting the privileges on the tables and columns as
appropriate to allow each group to select, insert and update the
appropriate data, and I can create appropriate views for them ...

Ideally, the db user just says "I want to enter Joe Public, and Joe is
affiliated with the Widget Corp entity, and has the phone numbers..."

Am I on the right track, or is there some better way to set this up?  My
understanding is that views really aren't meant for insert/update
operations, and I have seen on the web that using views to insert/update
is a bit tricky - and still requires a procedure with a rule on the view.


The way I do it for the insert case is to define an INSTEAD OF INSERT 
trigger on the view:


CREATE OR REPLACE VIEW protected.bond_ask AS
 SELECT ...
   FROM private.bond_ask
   JOIN private.order_book ON ...
 ;

CREATE OR REPLACE FUNCTION protected.bond_ask_iit()
  RETURNS trigger AS
$BODY$
BEGIN
...

  INSERT INTO private.order_book (...)
VALUES (...) RETURNING order_book_id INTO new.order_book_id;

  INSERT INTO private.bond_ask (...)
VALUES (...)
RETURNING bond_id into new.bond_id;
  RETURN NEW;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE SECURITY DEFINER;

CREATE TRIGGER bond_ask_iit
  INSTEAD OF INSERT
  ON protected.bond_ask
  FOR EACH ROW
  EXECUTE PROCEDURE protected.bond_ask_iit();

And then grant insert privilege on the view.

You can probably do something similar for updates.
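To make the elided sketch above concrete, here is a self-contained toy version of the same
pattern, with made-up person/phone tables rather than the private/protected schemas above:

  CREATE TABLE person (
      person_id serial PRIMARY KEY,
      full_name text NOT NULL
  );

  CREATE TABLE phone (
      phone_id  serial PRIMARY KEY,
      person_id integer NOT NULL REFERENCES person,
      number    text
  );

  CREATE VIEW person_phone AS
      SELECT p.person_id, p.full_name, ph.number
        FROM person p
        JOIN phone ph USING (person_id);

  CREATE FUNCTION person_phone_iit() RETURNS trigger AS $$
  BEGIN
      -- insert the parent row first and capture its generated key
      INSERT INTO person (full_name)
          VALUES (NEW.full_name)
          RETURNING person_id INTO NEW.person_id;

      -- then the dependent row, using that key
      INSERT INTO phone (person_id, number)
          VALUES (NEW.person_id, NEW.number);

      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER person_phone_iit
      INSTEAD OF INSERT ON person_phone
      FOR EACH ROW
      EXECUTE PROCEDURE person_phone_iit();

  INSERT INTO person_phone (full_name, number) VALUES ('Joe Public', '555-0100');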

--B






Re: [GENERAL] Logger into table and/or to cli

2016-03-08 Thread Berend Tober

Thiemo Kellner, NHC Barhufpflege wrote:


I would like to log from within pgsql into table and/or to command line, 
something like
https://sourceforge.net/projects/plnsqllogger/ . Does anybody know of an open 
source solution
available?


I like "18.8.4. Using CSV-Format Log Output", although it does not log directly to a data base 
table. Rather, it produces comma-separated-value text files that you can easily copy into a data 
base table for analysis or other purposes.



See the manual at

http://www.postgresql.org/docs/9.5/static/runtime-config-logging.html
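A minimal sketch of turning that on (these are the two settings involved per the page above;
logging_collector requires a server restart to take effect):

  ALTER SYSTEM SET logging_collector = on;
  ALTER SYSTEM SET log_destination = 'csvlog';

That same manual section provides a postgres_log table definition whose columns match the CSV
layout; once it is created, a log file can be pulled in with something like:

  COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;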







Re: [GENERAL] what database schema version management system to use?

2016-04-07 Thread Berend Tober

John R Pierce wrote:

On 4/6/2016 3:55 AM, Alexey Bashtanov wrote:

I am searching for a proper database schema version management system.



At my $job we're old school. our schemas are versioned. there's a
settings table with (setting TEXT, value TEXT) fields, a row in that is
('version', '1.0')  or whatever.

each new release of the schema is released as a .SQL file which builds
the full schema from scratch, and a .SQL file which updates the previous
version to the new version. the full build and update .sql files are
kept in our source code control along with the rest of our software.
we're quite careful about how we modify our schema so it can be done
online, update the schema on the live database, then update and restart
the application/middleware.
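(A minimal sketch of the scheme described above, with made-up table, column, and file names:)

  -- the settings table that records the schema version
  CREATE TABLE settings (setting text PRIMARY KEY, value text);
  INSERT INTO settings VALUES ('version', '1.0');

  -- an update script, e.g. upgrade_1.0_to_1.1.sql, applies its changes and
  -- bumps the recorded version in one transaction:
  BEGIN;
  ALTER TABLE customer ADD COLUMN nickname text;   -- example change only
  UPDATE settings SET value = '1.1' WHERE setting = 'version';
  COMMIT;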




I would be interested in knowing specifically how the ".SQL file which 
updates the previous version to the new version" is generated. Is there 
a tool that does that based on the difference between new and old? Or is 
that update script coded by hand?








Re: [GENERAL] "Keyed" sequence?

2016-04-28 Thread Berend Tober

Adrian Klaver wrote:

On 04/28/2016 11:48 AM, Israel Brewster wrote:


On Apr 28, 2016, at 10:39 AM, Vik Fearing  wrote:

On 04/28/2016 08:30 PM, Israel Brewster wrote:

This is probably crazy talk, but in Postgresql is there any way to have
a "keyed" sequence? That is, I have a table with a list of departments.
While *relatively* static, technically speaking this is a dynamic
list -
someone certainly could decide to add a department, or combine two
departments into one new one, whatever. In any case, I would ideally
like to have a sequence per department, but since the list is dynamic
it's not as simple as creating 33 independent sequences and then having
a lookup table somewhere, although I guess I could implement something
of the sort with triggers.


What would be the point of this?  Why not just one sequence for all
departments?


continuity and appearance, not to mention simple logical progression.
In this case, the sequence is being used to generate a PO number.
Company style indicates that a PO number is a department code followed
by a unique number. With one sequence for all departments, you could
(will) end up with discontinuous PO numbers in any given department.
It would be nice if, after issuing PO number 15-1, the next PO in
department 15 was 2, if for no other reason than the accounting
department could easily see that they aren't missing any. With one
sequence, there will quite likely not be a PO number 2 for any given
department, so that department has no easy way to keep track of their
PO's based on PO number.


Here is a similar idea:

http://www.postgresql.org/message-id/44e376f6.7010...@seaworthysys.com





BAM!! You beat me to it!!

I have repeated that pattern multiple times and it is the exact use case 
the OP has.
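For readers who don't chase the link: the gist of that pattern is a per-key counter kept in a
table and bumped from a BEFORE INSERT trigger, so numbering restarts per department and has no
gaps. A rough sketch along those lines (names made up here, not copied from the linked post):

  CREATE TABLE department (
      dept_code text PRIMARY KEY,
      last_po   integer NOT NULL DEFAULT 0
  );

  CREATE TABLE purchase_order (
      dept_code text NOT NULL REFERENCES department,
      po_seq    integer,
      PRIMARY KEY (dept_code, po_seq)
  );

  CREATE FUNCTION purchase_order_bii() RETURNS trigger AS $$
  BEGIN
      -- the UPDATE both increments and row-locks the per-department counter,
      -- serializing concurrent inserts for that department
      UPDATE department
         SET last_po = last_po + 1
       WHERE dept_code = NEW.dept_code
       RETURNING last_po INTO NEW.po_seq;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER purchase_order_bii
      BEFORE INSERT ON purchase_order
      FOR EACH ROW
      EXECUTE PROCEDURE purchase_order_bii();

  INSERT INTO department (dept_code) VALUES ('15'), ('22');
  -- yields PO numbers 15-1, 15-2, 22-1, ...
  INSERT INTO purchase_order (dept_code) VALUES ('15'), ('15'), ('22');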







Re: [GENERAL] Function PostgreSQL 9.2

2016-05-04 Thread Berend Tober

On Tuesday, May 3, 2016, drum.lu...@gmail.com wrote:

  * This is what I did...

-- Creating the table
CREATE TABLE public.company_seqs
(company_id BIGINT NOT NULL,
 last_seq   BIGINT NOT NULL DEFAULT 1000,
 CONSTRAINT company_seqs_pk PRIMARY KEY (company_id)
);


-- Creating the function

CREATE OR REPLACE FUNCTION users_code_seq()
RETURNS "trigger" AS
'
BEGIN
    UPDATE public.company_seqs
    SET last_seq = (last_seq + 1)
    WHERE company_id = NEW.company_id;
    SELECT INTO NEW.code last_seq
    FROM public.company_seqs WHERE company_id = NEW.company_id;
    END IF;
    RETURN new;
END
'
LANGUAGE 'plpgsql' VOLATILE;

-- Creating the trigger
CREATE TRIGGER tf_users_code_seq
BEFORE INSERT
ON public.users
FOR EACH ROW
EXECUTE PROCEDURE users_code_seq();




1) I just may be over-sensitive to this, but after Adrian Klaver referred you to a ten-year-old 
post that the above looks an awful lot similar to, it sure would be nice to see some attribution, 
rather than claiming it as your own with "...what *I* did..."



When inserting data:

INSERT INTO public.users (id,email,encrypted_password,sign_in_count,created_at,updated_at,code,company_id)
  VALUES (672,'te...@test.com','bucefalo','0','2016-05-03 00:01:01','2016-05-03 00:01:01',default,'1');

INSERT INTO public.users (id,email,encrypted_password,sign_in_count,created_at,updated_at,code,company_id)
  VALUES (672,'te...@test.com','bucefalo','0','2016-05-03 00:01:01','2016-05-03 00:01:01','inserting my own data code column','1');

  * On the first query, nothing happens on the users.code column. The column is null.

  * On the second query, I can see the "inserting my own data code column" inserted into the
    code column. This means my Trigger function is not working.. I don't know why.




2) Does the public.company_seqs have any rows in it?


3) Not sure you need a separate company_seq table. Since there is a one-to-one relation between 
company and company_seqs, put the last_seq column in the company table.



-- Berend








Re: [GENERAL] Function PostgreSQL 9.2

2016-05-04 Thread Berend Tober

David G. Johnston wrote:

On Wed, May 4, 2016 at 2:57 PM, drum.lu...@gmail.com
...

I would expect a minimum of respect from the members of this list,
but seems you got none. If someone would need my help, I'd never
insult him/her like you guys are doing.

If my questions are too "child" for you, please, do not answer them.
Ignore the emails... Isn't that simple?

Talking to me like you guys are talking, is what I call "a ten-years
old post".


I'm not sure if this was meant to be a play on words, but the original
use of the phrase meant "a post written 10 years ago" - not that you are
acting like a 10 year old.



My apologies for causing that misunderstanding. The phrase was 
definitely not meant to imply a post with the intelligence of a 10-year-old 
child (although I suppose that would have been impressive) ... I 
was considerably older than that when I posted that example of compound 
sequences ... 10 years ago.





FWIW I didn't read any kind of "stealing of credit" in what you wrote -
nor likely did most people.  That particular observation came out of
left field.



As I said, I might be over-sensitive on this ... being the original 
author of the copied example implementation.









Re: [GENERAL] Function PostgreSQL 9.2

2016-05-05 Thread Berend Tober

drum.lu...@gmail.com wrote:

I'm just having some problem when doing:

INSERT INTO public.users (id,email,encrypted_password,sign_in_count,created_at,updated_at,company_id)
  VALUES (66,'tes...@test.com','password','0','2016-05-03 00:01:01','2016-05-03 00:01:01','15');


- see that I'm not providing the "code" column value? If I run the query above, I get the following
error:

ERROR:  query returned no rows
CONTEXT:  PL/pgSQL function users_code_seq() line 7 at SQL statement


- If I include the code column with a default value:

INSERT INTO public.users (id,email,encrypted_password,sign_in_count,created_at,updated_at,code,company_id)
  VALUES (4,'te...@test.com','password','0','2016-05-03 00:01:01','2016-05-03 00:01:01',default,'2');

I get the same error

- Please, if anyone can help with that.. I'd appreciate it.

*The final function code is:*

CREATE OR REPLACE FUNCTION users_code_seq()
RETURNS "trigger" AS $$
DECLARE code character varying;
BEGIN
    IF NEW.code IS NULL THEN
        SELECT client_code_increment INTO STRICT NEW.code
        FROM public.companies WHERE id = NEW.id
        ORDER BY client_code_increment DESC;




I am pretty sure the above line is wrong. NEW.id refers to users.id, not the companies.id. Also, the 
implementation presents a potential race condition, e.g., if two different sessions attempt an 
insert almost simultaneously.
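One way to address both points (just a sketch, not tested against the OP's schema): do the
increment and the read in a single UPDATE keyed by the company, so the row lock on the companies
row serializes concurrent inserts for that company:

  CREATE OR REPLACE FUNCTION users_code_seq()
  RETURNS trigger AS $$
  BEGIN
      IF NEW.code IS NULL THEN
          -- one statement increments the counter and reads the new value;
          -- note the key is the user's company_id, not the user's id
          UPDATE public.companies
             SET client_code_increment = client_code_increment + 1
           WHERE id = NEW.company_id
           RETURNING client_code_increment INTO NEW.code;
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;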





    END IF;
    IF (TG_OP = 'INSERT') THEN
        UPDATE public.companies
        SET client_code_increment = (client_code_increment + 1)
        WHERE id = NEW.id;



Ditto w.r.t. NEW.id.



 END IF;
 RETURN NEW;
END;
$$ LANGUAGE plpgsql;







Re: [GENERAL] Function PostgreSQL 9.2

2016-05-05 Thread Berend Tober

David G. Johnston wrote:


​Berend already identified the problem for you.



Thank you.





Re: [GENERAL] Function PostgreSQL 9.2

2016-05-06 Thread Berend Tober

drum.lu...@gmail.com wrote:

It's working now...

Final code:

ALTER TABLE public.companies ADD COLUMN client_code_increment integer;
ALTER TABLE public.companies ALTER COLUMN client_code_increment SET NOT 
NULL;
ALTER TABLE public.companies ALTER COLUMN client_code_increment SET DEFAULT 
1000;
COMMIT TRANSACTION;

BEGIN;
-- Creating the function
CREATE OR REPLACE FUNCTION users_code_seq()
RETURNS "trigger" AS $$
DECLARE code character varying;
BEGIN
-- if it's an insert, then we update the client_code_increment column value 
to +1
 IF (TG_OP = 'INSERT') THEN  ...



Think about it: will (TG_OP = 'INSERT') ever be untrue?








Re: [GENERAL] How do I aggregate data from multiple rows into a delimited list?

2007-07-03 Thread Berend Tober

D. Dante Lorenso wrote:
I want to select several rows of data and have them returned in a 
single record with the rows joined by a delimiter. 



Review the User Comments at

http://www.postgresql.org/docs/8.0/interactive/sql-createaggregate.html

for some ideas.
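A note for later readers: from PostgreSQL 9.0 on, the built-in string_agg() aggregate does this
directly, so a custom aggregate is no longer needed. A sketch against a made-up table t(grp, val):

  SELECT grp, string_agg(val, ', ' ORDER BY val) AS vals
    FROM t
   GROUP BY grp;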




[GENERAL] How to individually list the DDL for all individual data base objects

2014-11-24 Thread Berend Tober
Is there a good way to individually list the DDL for all individual data 
base objects?


Running a data base dump like:

 pg_dump mydatabase  > mydatabase-database.sql

produces one big file with all the DDL and DML to restore the data base, 
which is very convenient for most cases.


Using that I have sometimes cut-and-pasted out of it the DDL for 
individual items that I want to work on, and then fed that back in with


psql mydatabase < newstuff.sql


What I would like, though, is a set of output files, one for each 
DDL/DML item currently represented in the one big file.


I could maybe attempt some convoluted automated parsing of the big file 
with, for example, Perl, but that seems, well, convoluted, error-prone, 
and probably fragile.


The directory dump output option for pg_dump is similar to, but not 
really, what I want (as far as I can see, the individual files it produces 
are only the ones for reloading data, so correct me if I am wrong ... 
and please show me how to do it right!)


I have played around with the custom format dump followed by pg_restore 
and various options, but did not get what I wanted, at least not as 
elegantly as I wanted.


What I have come up with is fairly simple in appearance, but the way it 
works, by reading one line-at-a-time from the list file associated with 
the dump file, and then running pg_restore with just that one line, 
rinse and repeat for each piece of DDL/DML, also seems convoluted and 
potentially fragile.



Something along the lines of (... if anyone thinks this a good idea, or 
good starting point ...):



grep -v '^;' listfile | while read a b c n
do
  a=${a/;}                      # keep just the dump id (strip the trailing semicolon)
  echo $a > f                   # write a one-line list file naming just this item
  pg_restore -L f -f outputdir/$a dumpfile
done

This, as it is, creates a set of files named according to the id number 
that pg_dump uses to identify each element. Ideally, I would like the 
files named after the schema+object each represents.


Thanks for your help!







Re: [GENERAL] How to individually list the DDL for all individual data base objects

2014-11-24 Thread Berend Tober

François Beausoleil wrote:


On 2014-11-24, at 10:14, Berend Tober wrote:


Is there a good way to individually list the DDL for all individual data base 
objects?






Were you aware that pg_restore can restore to STDOUT, and output DDL for only a 
single named object from a custom dump file?


Yes sir, thank you. That is what the script does.

But that little script seems fragilely dependent upon the format of the 
list file, and runs (starts a new process for) pg_restore for every 
individual line in the list file ... which seems like poor practise, 
generally. Additionally, I'd like stability in the output file names, 
rather than the (likely changing) internal, automatically 
numerically-named items.


And, btw, I want not just a single table.

The dependence issue is potentially a problem, but (as I believe) it 
looks like the output of pg_restore in a list file is in the correct 
order to process dependencies, so I could walk backwards or forwards 
through that if changes to one object were dependent on, or caused 
cascading effects in, other objects.






$ pg_restore --help
…

   -f, --file=FILENAME  output file name
…
   -t, --table=NAME restore named table
…

Such that you could run:

$ pg_restore -f public.mytable.sql -t public.mytable whatever.pgdump

Unfortunately, this does not respect dependencies and you may have issues. The 
--disable-triggers option can alleviate some of these problems. YMMV.

Hope that helps!
François









Re: [GENERAL] How to individually list the DDL for all individual data base objects

2014-11-24 Thread Berend Tober

Adrian Klaver wrote:

On 11/24/2014 08:12 AM, Berend Tober wrote:

François Beausoleil wrote:


Le 2014-11-24 à 10:14, Berend Tober  a écrit :


Is there a good way to individually list the DDL for all individual
data base objects?






Were you aware that pg_restore can restore to STDOUT, and output DDL
for only a single named object from a custom dump file?


 

The dependence issue is potentially a problem, but (as I believe) it
looks like the output of pg_restore in a list file is in the correct
order to process dependencies, so I could walk backwards or forwards
through that if changes to one object were dependent on, or caused
cascading effects in, other objects.


At this point I have to ask:

What is the problem you are trying to solve?

Your last paragraph seems to be reinventing the -Fc TOC, so I am not
sure where you going with this?



On the small scale, I wanted to have an individual file listing the DDL 
for each data base object, in the case that I need to modify the object 
... I could start with the script that created it as a basis for 
modifications, like for views and functions, etc. On the larger scale, I 
was thinking I would like to check all of the individual modules in to 
revision control, retaining fine-grained control, rather than 
submitting the one big comprehensive file.










Re: [GENERAL] How to individually list the DDL for all individual data base objects

2014-11-24 Thread Berend Tober

Melvin Davidson wrote:

You should probably look at the pg_extractor utility.

https://github.com/omniti-labs/pg_extractor

With it, you can dump individual or selected objects to separate
directories.



That looks like what I'm looking for. (Note: I did Google searching, but 
apparently did not use the right set of search terms, because this tool 
really is described as being what it is that I am looking to do!)


Thanks!








Re: [GENERAL] How to insert into 2 tables from a view?

2014-12-23 Thread Berend Tober

Chris Hoover wrote:

> Correct sql:
> BEGIN;
>
> CREATE TABLE table1 (
>table1_id SERIAL PRIMARY KEY,
>table1_field1 TEXT
> );
>
> CREATE TABLE table2 (
>table1_id INTEGER NOT NULL PRIMARY KEY REFERENCES table1(table1_id)
> ON DELETE CASCADE,
>table2_field1 TEXT
> );
>
> CREATE VIEW orig_table AS
>  SELECT table1_id, table1_field1, table2_field1
>FROM table1
>JOIN table2 USING (table1_id);
>
> CREATE FUNCTION orig_table_insert(in_table1_id integer, in_table1_field1
> text, in_table2_field1 text)
>  RETURNS SETOF orig_table
>  LANGUAGE plpgsql
>  AS
>  $BODY$
>  DECLARE
>  v_table1_id table1.table1_id%TYPE;
>  BEGIN
>  INSERT INTO table1 (
>  table1_id, table1_field1
>  ) VALUES (
>  COALESCE(in_table1_id, NEXTVAL('table1_table1_id_seq')),
> in_table1_field1
>  )
>  RETURNING table1_id
>  INTO v_table1_id;
>
>  INSERT INTO table2 (
>  table1_id, table2_field1
>  ) VALUES (
>  v_table1_id, in_table2_field1
>  );
>
>  RETURN QUERY SELECT table1_id, table1_field1, table2_field1
> FROM orig_table
>WHERE table1_id = v_table1_id;
>
>  END;
>  $BODY$;
>
> CREATE RULE orig_table_insert_rule AS
>  ON INSERT
>  TO orig_table
>  DO INSTEAD
> SELECT orig_table_insert(NEW.table1_id, NEW.table1_field1,
> NEW.table2_field1);
>
> COMMIT;
>
> Problem query:
> insert into orig_table (table1_field1, table2_field1) values ('field1',
> 'field2') returning table1_id;
>
>
> On Tue, Dec 23, 2014 at 1:46 PM, Chris Hoover wrote:
>
> Hi,
>
> I am having a problem trying to figure out.
>
> I have two tables behind a view and am trying to figure out how to
> create the correct insert rule so that inserting into the view is
> redirected to the two tables.  I thought I had is solved using a
> stored procedure, but doing an insert into view ... returning id
> causes the insert to fail with this error:
>


CREATE TABLE table1 (
  table1_id SERIAL PRIMARY KEY,
  table1_field1 TEXT
);

CREATE TABLE table2 (
  table1_id INTEGER NOT NULL PRIMARY KEY REFERENCES table1(table1_id) ON DELETE CASCADE,
  table2_field1 TEXT
);

CREATE VIEW orig_table AS
SELECT table1_id, table1_field1, table2_field1
  FROM table1
  JOIN table2 USING (table1_id);


CREATE RULE orig_table_insert_rule AS
ON INSERT
TO orig_table
DO INSTEAD
(
INSERT INTO table1 (table1_field1) VALUES (NEW.table1_field1);
INSERT INTO table2 (table1_id, table2_field1) VALUES (CURRVAL('table1_table1_id_seq'), new.table2_field1);
);

COMMIT;


INSERT INTO orig_table (table1_field1, table2_field1) VALUES ('The value for table 1, field 1', 'The value for table 2, field1');

SELECT * FROM table1;
SELECT * FROM table2;
SELECT * FROM orig_table;





Re: [GENERAL] Simple Atomic Relationship Insert

2015-01-13 Thread Berend Tober

John McKown wrote:

On Tue, Jan 13, 2015 at 10:42 AM, Robert DiFalco <robert.difa...@gmail.com> wrote:

Let's say I have two tables like this (I'm leaving stuff out for
simplicity):

CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE hometowns (
   id INTEGER DEFAULT nextval('HOMETOWN_SEQ_GEN'),
   name VARCHAR,
   PRIMARY KEY (id),
   UNIQUE(name)
);

CREATE SEQUENCE USER_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE users (
   id  BIGINT DEFAULT nextval('USER_SEQ_GEN'),
   hometown_id INTEGER,
   nameVARCHAR NOT NULL,
   PRIMARY KEY (id),
   FOREIGN KEY (hometown_id) REFERENCES hometowns(id)
);

The hometowns table is populated as users are created. For example,
a client may submit {"name":"Robert", "hometown":"Portland"}.

The hometowns table will never be updated, only either queried or
inserted into.

So given this I need to INSERT a row into "users" and either SELECT
the hometowns.id that matches "Portland" or, if it doesn't exist,
INSERT it returning the hometowns.id.

Normally I would do this by first doing a SELECT on hometown. If I don't
get anything I do an INSERT into hometown RETURNING the id. If THAT
throws an error then I do the SELECT again. Now I'm finally ready to
INSERT into users using the hometowns.id from the above steps.

But wow, that seems like a lot of code for a simple "Add if doesn't
exist" foreign key relationship -- but this is how I've always done.

So my question. Is there a simpler, more performant, or thread-safe
way to do this?

Thanks!


​What occurs to me is to simply do an INSERT into the "hometowns" table
and just ignore the "already exists" return indication. Then do a SELECT
to get the hometowns​ id which now exists, then INSERT the users. but I
could easily be overlooking some reason why this wouldn't work properly.


And one more approach:

CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE hometowns (
  id INTEGER DEFAULT nextval('HOMETOWN_SEQ_GEN'),
  name VARCHAR,
  PRIMARY KEY (id),
  UNIQUE(name)
);

CREATE SEQUENCE USER_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE users (
  id  BIGINT DEFAULT nextval('USER_SEQ_GEN'),
  hometown_id INTEGER,
  nameVARCHAR NOT NULL,
  PRIMARY KEY (id),
  FOREIGN KEY (hometown_id) REFERENCES hometowns(id)
);

create or replace view user_town as
  select
users.name as username,
hometowns.name as hometownname
  from users
  join hometowns
on hometowns.id = users.hometown_id;

create rule user_town_exists as on insert to user_town
  where exists(select id from hometowns where (hometowns.name = new.hometownname))
  do
    insert into users (name, hometown_id)
      values (new.username, (select id from hometowns where (hometowns.name = new.hometownname)));

create rule user_town_not_exists as on insert to user_town
  where not exists(select id from hometowns where (hometowns.name = new.hometownname))
  do (
    insert into hometowns (name) values (new.hometownname);
    insert into users (name, hometown_id)
      values (new.username, (select id from hometowns where (hometowns.name = new.hometownname)));
  );

create rule user_town_nothing as on insert to user_town
  do instead nothing;
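A side note for readers on 9.5 or later (not available when the rules above were written):
INSERT ... ON CONFLICT makes the "insert the hometown if missing, then use its id" sequence much
shorter, along the lines John suggested. A sketch against the same tables:

  INSERT INTO hometowns (name) VALUES ('Portland')
      ON CONFLICT (name) DO NOTHING;

  INSERT INTO users (name, hometown_id)
      VALUES ('Robert', (SELECT id FROM hometowns WHERE name = 'Portland'));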







[GENERAL] Can pg_restore produce create or replace commands

2015-01-17 Thread Berend Tober
I often work with the output of pg_restore from a custom format dump 
file. For example a file produced by running


pg_restore -s -1 -L listfile dumpfile

where listfile has been edited to comment out most of the rows to leave 
only the data base objects I'm currently interested in.


Most often, I'm refactoring functions and so don't really want to drop 
the function but rather want to do a "create or replace function" 
operation to implement the changes. Consequently I have to frequently do 
a global search and replace along the lines of



sed -ie 's/CREATE FUNCTION/CREATE OR REPLACE FUNCTION/'


I am not seeing in the documentation an option to generate the script 
with anything but straight "create function" commands.


Is there a way for me to access this functionality (i.e., to generate 
"create or replace function" scripts) from the command line?


I suppose I could pipe the pg_restore output through the sed command 
just as a matter of standard operating procedure, but the capability 
must exist because that is the way the scripts appear in pgadmin. I 
generally do not use the GUI tool and so would like it to happen 
automatically when using the command line tools.







Re: [GENERAL] Can pg_restore produce create or replace commands

2015-01-17 Thread Berend Tober

Adrian Klaver wrote:

On 01/17/2015 10:05 AM, Berend Tober wrote:

I often work with the output of pg_restore from a custom format dump
file...

Most often, I'm refactoring functions and so don't really want to drop
the function but rather want to do a "create or replace function"...

I am not seeing in the documentation an option to generate the script
with anything but straight "create function" commands.

Is there a way for me to access this functionality (i.e., to generate
"create or replace function" scripts) from the command line?


Not that I know of. Though it should be noted that what you can do with
CREATE OR REPLACE depends a good deal on what constitutes refactoring.
...
"To replace the current definition of an existing function, use CREATE
OR REPLACE FUNCTION. It is not possible to change the name or argument
types...



Indeed. I have run into that occasionally. But currently, and most often, 
it has not been much of a problem, as the refactoring is generally internal to 
the function behavior ... in fact most of them are trigger functions, 
and since I have adopted a consistent naming convention there are 
practically never function interface changes.




My solution to this is using scripts for objects and keeping them under
version control. Lately I have been using Sqitch(sqitch.org/). There is
a learning curve, but I am finding it useful.


Oh sqitch is (looks to be) awesome and I wish so much to employ it, but 
I have not had success in getting it installed. I do use Wheeler's 
companion tool pgtap, and THAT is totally awesome as well and is making 
life SO much better. I cannot envision ever again doing data base 
development without it!


I just wonder how PgAdmin creates the scripts, then. Is that a feature 
specific to the PgAdmin application rather than the underlying 
system and tools?






Re: [GENERAL] Can pg_restore produce create or replace commands

2015-01-17 Thread Berend Tober

Adrian Klaver wrote:

On 01/17/2015 10:05 AM, Berend Tober wrote:

I often work with the output of pg_restore from a custom format dump
file. ...

Most often, I'm refactoring functions and so don't really want to drop
the function but rather want to do a "create or replace function"...


Not sure how pgAdmin does it. Just remembered something though,
pg_get_functiondef(), available in 8.4+:...

test=# SELECT pg_get_functiondef('ean_substr'::regproc);
...



That has potential. However, in some instances the object I'm 
refactoring will end up having dependencies; for instance, in the case 
of views, oftentimes I will have to do the drop/create for it and all 
dependent objects. In those situations, generating scripts from the 
pg_restore output is very convenient, since it tells me all the 
dependencies and I can adjust the listfile contents to get them all and 
in the correct order.


At this point I'm thinking to wrap the pg_restore invocation in a script 
that includes piping through sed to transform the create statements.


I would have thought that the functionality in PgAdmin and the command 
line utilities would rely on the same underlying source code and so 
support the same functionality. Alas.







Re: [GENERAL] Can pg_restore produce create or replace commands

2015-01-17 Thread Berend Tober

Thomas Kellerer wrote:

Berend Tober wrote on 17.01.2015 19:05:

I often work with the output of pg_restore from a custom format dump
file. ...

Most often, I'm refactoring functions and so don't really want to
drop the function but rather want to do a "create or replace
function" ...



To me this sounds as if you are doing it the wrong way round.



Possibly. But if the revision control system and the production data 
base disagree, then which one do you believe?





To manage (refactor) your functions, you should have the current code
stored
in a version control system, update the code there and then apply it to the
target database.

Extracting the code from the database in order to do refactoring is
like disassembling a program each time you want to apply a bugfix.

The code in the vcs would then contain the necessary "create or replace"
(btw you still need to drop the function if you change the parameters)










Re: [GENERAL] How to hide stored procedure's bodies from specific user

2015-02-14 Thread Berend Tober

Saimon Lim wrote:

Thanks for your help

I want to restrict some postgres users as much as possible and allow
them to execute a few of my own stored procedures only.


Create the function that you want to restrict access to in a separate 
'private' schema to which usage is not granted.


Create the functions you wish to allow access to in a schema to which 
the role is granted access.


Your original question was different, i.e., you were asking about hiding 
your clever algorithms from inquisitive inspection. For that, similarly 
use a 'private' schema where you keep your super-secret stuff, and then 
provide a sanitized interface in the 'public' schema:



CREATE OR REPLACE FUNCTION private.average(a float, b float)
 RETURNS float
 LANGUAGE sql
AS $$
SELECT ($1 + $2)/2.;
$$;


CREATE OR REPLACE FUNCTION public.average(a float, b float)
RETURNS float
 LANGUAGE sql
as $$
select private.average(a,b)
$$
security definer;
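
For completeness, here is a sketch of the privilege setup this relies on. The 
role name app_user is just a placeholder (not something from the original 
question); since functions are executable by PUBLIC by default, the private 
one also needs an explicit REVOKE:

CREATE SCHEMA private;
REVOKE ALL ON SCHEMA private FROM PUBLIC;   -- no USAGE, so app_user cannot call private.* directly
GRANT USAGE ON SCHEMA public TO app_user;

REVOKE EXECUTE ON FUNCTION private.average(float, float) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION public.average(float, float) TO app_user;

Because public.average is SECURITY DEFINER, it runs as its owner, who retains 
access to the private schema, so app_user can still get results without being 
able to inspect or call the private function directly.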






Re: [GENERAL] How to hide stored procedure's bodies from specific user

2015-02-14 Thread Berend Tober

Guillaume Lelarge wrote:

2015-02-14 14:07 GMT+01:00 Berend Tober <bto...@broadstripe.net>:

Saimon Lim wrote:

Thanks for your help

I want to restrict some postgres users as much as possible and allow
them to execute a few of my own stored procedures only.


Create the function that you want to restrict access to in a separate
'private' schema to which usage is not granted.

Create the functions you wish to allow access to in a schema to
which the role is granted access.  ...

Unless I misunderstood something, this doesn't protect at all the
function source code. You can still get it by reading pg_proc.



Agreed, but he already knows about that and how to mitigate.







Re: [GENERAL] Server SSL key with passphrase

2017-02-09 Thread Berend Tober

dhanuj hippie wrote:


I have a postgres-9.6 server running with SSL enabled, and I have setup the 
certificates as per
documentation. But currently the key file is not protected by passphrase. Does 
postgres provide a
way to use passphrase protected keys ?



If by "per documentation" you refer to "18.9.3. Creating a Self-signed Certificate", that process 
creates a password-protected key initially, and then there is a specific step in that process for 
removing the password. If you omit that password removal step, then you would have a 
password-protected key. Note, though, as the documentation further points out, someone will have to be 
standing by at the keyboard whenever the server is started so as to be able to respond to the 
password prompt. You may indeed want that, but it is not recommended practice.



-- B






Re: [GENERAL] Question regarding pgsql-general mailing list.

2017-06-25 Thread Berend Tober

FYI, Postgresql caps for sale on ebay at http://www.ebay.com/itm/162564660418






Re: [GENERAL] How to drop column from interrelated views

2017-07-08 Thread Berend Tober

Guyren Howe wrote:

I’ve a set of interrelated views. I want to drop a column from a table and from 
all the views that
cascade from it.

I’ve gone to the leaf dependencies and removed the field from them. But I can’t 
remove the field
from the intermediate views because Postgres doesn’t appear to be clever enough 
to see that the
leafs no longer depend on the column. Or did I just miss one?

In general, this seems like a major weakness expressing a model in Postgres (I 
get that any such
weakness derives from SQL; that doesn’t stop me wanting a solution).

Thoughts? Comments?


This usually involves a pg_dump in the custom format, editing the list file, creating a script with 
pg_restore.


I described a way I have had success with it at one point at


https://www.postgresql.org/message-id/55C3F0B4.5010600%40computer.org



-- B






Re: [GENERAL] How to drop column from interrelated views

2017-07-09 Thread Berend Tober

Guyren Howe wrote:

On Jul 8, 2017, at 16:11, Berend Tober <bto...@computer.org> wrote:


Guyren Howe wrote:

I’ve a set of interrelated views. I want to drop a column from a table and from 
all the views
that cascade from it.

I’ve gone to the leaf dependencies and removed the field from them. But I can’t 
remove the
field from the intermediate views because Postgres doesn’t appear to be clever 
enough to see
that the leafs no longer depend on the column. Or did I just miss one?

In general, this seems like a major weakness expressing a model in Postgres (I 
get that any
such weakness derives from SQL; that doesn’t stop me wanting a solution).

Thoughts? Comments?


This usually involves a pg_dump in the custom format, editing the list file, 
creating a script
with pg_restore.

I described a way I have had success with it at one point at


https://www.postgresql.org/message-id/55C3F0B4.5010600%40computer.org


I was wondering if I changed up all the things that interrelate in a 
transaction, whether that
would bundle them up so they’re all correct afterward. I was hoping so.



Well, nothing beats empirical evidence ... set up a test case and try it!

You definitely want to do it in a transaction anyway, so that if you get it wrong the first few 
times and have to iterate, the data base rolls back to where you started.


Note the method suggested in the earlier link appears to have an error. Step 4 should be


  pg_restore -c -1  -L mydatabase.list mydatabase.dump > sql


The lower case "c" flag will include DROP statements for the views. The "1" will wrap in a 
transaction, like you want.



BTW, please generally use the "reply-all" functionality of your email client when interacting with 
this list ... the server puts the list alias in the CC line, so you have to reply all to keep the 
conversation publicly available for others to learn from.


-- B






Re: [GENERAL] What is exactly a schema?

2017-07-14 Thread Berend Tober

marcelo wrote:

The question is not trivial. Could I maintain two or three separate/distinct 
"versions" of the same database, using one schema for each of them?
Could some tables (in the public schema) be shared among all the schemas?




Yes and yes. In the Postgresql world, the word "schema" is maybe unfortunately overloaded, but 
whenever you read it think "namespace". In fact, in the systems catalog there are columns named 
"namespace" that store data referring to named schemas.


-- B
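
A minimal sketch of that arrangement (all names here are invented for 
illustration):

CREATE SCHEMA v1;
CREATE SCHEMA v2;

-- shared by every "version"
CREATE TABLE public.customer (id int PRIMARY KEY, name text);

-- per-version tables with the same name in different schemas
CREATE TABLE v1.orders (id int PRIMARY KEY,
                        customer_id int REFERENCES public.customer);
CREATE TABLE v2.orders (id int PRIMARY KEY,
                        customer_id int REFERENCES public.customer,
                        status text);

-- pick which version an application sees
SET search_path = v1, public;
SELECT * FROM orders;   -- resolves to v1.orders; customer still comes from public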






Re: [GENERAL] Selecting a daily puzzle record - which type of column to add?

2017-09-18 Thread Berend Tober

Alexander Farber wrote:

Good evening,

In a word game I store all player moves in the table:

CREATE TYPE words_action AS ENUM (
'play',
'skip',
'swap',
'resign',
'ban',
'expire'
);

CREATE TABLE words_moves (
mid BIGSERIAL PRIMARY KEY,
action  words_action NOT NULL,
gid integer NOT NULL REFERENCES words_games ON DELETE CASCADE,
uid integer NOT NULL REFERENCES words_users ON DELETE CASCADE,
played  timestamptz NOT NULL,
tiles   jsonb,
score   integer CHECK(score >= 0)
);

I could run a cron job on all moves played each day and select the 
"spectacular" ones by it,
i.e. when a very big score has been achieved in the move or all 7 tiles have 
been played...

Then I (as admin of the game) would manually review the daily mails sent by 
that cronjob and
select the few I have found interesting - for later publishing them as "daily 
puzzle" in my day.

However I don't want to do the reviewing every day as that would be tedious, 
but more like once
per week and then select several such moves at once (for the future dates).

My question (and thanks for reading my mail sofar) is: which column would you 
add to the
words_moves table for such a purpose?

If I add a boolean (is a daily puzzle move: true or false) - then it is 
difficult to keep the
order of the daily puzzles, I think.

If I add a timestamptz, then to which date to set it, when I do my manual 
review once a week?

I need to add a useful column, so that it would be easy to me to create a web 
script which would
display today's and all past "daily puzzle" records - and wouldn't change the 
already published
puzzles...

If you have a good idea here, please share with me. If not, sorry for the maybe 
offtopic
question.


I like the idea of a new column in words_moves that allows nulls and is filled in subsequently 
with the review date, but here's another idea to consider:


If you have another place to store control information, you could store the mid value of the 
last-reviewed  words_moves table row. That serial column also keeps track of the order, btw.


Or maybe you define another table capturing more detail, if you need it, such as

CREATE TABLE spectacular_moves (
    mid bigint REFERENCES words_moves,
    review_date timestamptz NOT NULL,
    publication_date timestamptz /* NULL allowed ... date to be filled in subsequently */
);

Or those last two columns could both be appended to the words_moves table, again, allowing NULL, but 
then filled in as the events occur.
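
To illustrate the separate-table idea, the weekly review step and the web page 
query might then look roughly like this (the mid value and the dates are of 
course just made up):

-- during the weekly review, queue a move for a future date
INSERT INTO spectacular_moves (mid, review_date, publication_date)
VALUES (12345, now(), DATE '2017-09-25');

-- today's puzzle and all previously published ones, newest first
SELECT m.mid, m.gid, m.uid, m.tiles, m.score, s.publication_date
  FROM spectacular_moves s
  JOIN words_moves m USING (mid)
 WHERE s.publication_date <= CURRENT_DATE
 ORDER BY s.publication_date DESC;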






Re: [GENERAL] Best way to allow column to initially be null?

2017-09-30 Thread Berend Tober

Glen Huang wrote:
> I’m trying to make a column have these properties:
>
> 1. When a row is inserted, this column is allowed to be null.
> 2. When the row is updated, no null can be assigned to this column.
>
> I initially thought I could drop the not null constraint before insertion
> and turn it back on after that, but after reading the docs it seems that
> turning on a not null constraint requires that no column values be null,
> so it looks like it won’t work.
>
> My current approach is to not set the not null constraint in the table and
> use a before update trigger to manually raise an exception when the column
> is null. But it doesn’t seem as elegant.
>
> Is there a better way?
>

Sounds to me like a BEFORE UPDATE trigger is exactly the way to handle this. Rejecting invalid data 
input values is an ideal use case for such a facility.
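
A minimal sketch of what that trigger could look like (table and column names 
are placeholders):

CREATE TABLE t (id int PRIMARY KEY, val text);   -- val may be NULL at insert time

CREATE FUNCTION t_val_not_null() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.val IS NULL THEN
        RAISE EXCEPTION 'val may not be set to NULL on update';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER t_val_not_null_trg
    BEFORE UPDATE ON t
    FOR EACH ROW EXECUTE PROCEDURE t_val_not_null();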





Re: [GENERAL] Client Authentication methods

2017-11-10 Thread Berend Tober

chiru r wrote:

Hi All,

I am trying to understand the Authentication method in pg_hba.conf file (password 
& md5) in
PostgreSQL database server.

I am assuming that the user provides the connection string 
host/user/password, then the client will go and contact the DB server's 
pg_hba.conf file in memory without carrying the password over the network 
initially, and then it confirms the authentication method from pg_hba.conf, 
and then it decides whether it sends a clear text or md5 encrypted password 
from client to server to make a session?

Is my assumption correct? Or what exactly is the difference for the client 
if I use md5/password in the pg_hba.conf file on the DB server?




Your assumptions sound consistent with documentation appearing at


https://www.postgresql.org/docs/10/static/protocol-flow.html

https://www.postgresql.org/docs/10/static/auth-methods.html

-- B






Re: [GENERAL] Trigger to run @ connection time?

2008-03-12 Thread Berend Tober

Alban Hertroys wrote:

On Mar 11, 2008, at 10:28 PM, Tom Lane wrote:


An ON CONNECT trigger enforced by the database seems a bit scary to me.
If it's broken, how you gonna get into the DB to fix it?

regards, tom lane


If creating the trigger wouldn't be possible from within the database 
that it's defined for (which would be strange anyway as far as I'm 
concerned, since you are already connected at that point and thus missed 
an opportunity to fire that trigger) this shouldn't be a problem.


To put that into an SQL statement, something like:
#template1 $ CREATE TRIGGER my_database_connect_trg BEFORE CONNECT ON 
my_database EXECUTE PROCEDURE my_database_setup()


Although of course that begs the question where that procedure would be 
stored; Rather not in template1, I suppose! This points to another 
problem with ON CONNECT triggers, you'll likely need to be connected to 
reach the stored procedure that the trigger calls! A nice chicken and 
egg problem, with some scope issues on the horizon...


I envision this not so much as a BEFORE connect trigger, but 
rather as an event that happens after the point of the user being 
successfully authenticated, but before executing any user 
application commands -- in fact before even starting to listen 
for any incoming application commands.


A particular implementation I see this useful for, to give some 
context to thinking about this, is to update a user password 
expiration date (to, say, CURRENT_DATE + 30) at each login. This 
would then allow the creation of a system that lets unused 
accounts expire but automatically maintains the validity of 
actively used accounts, for example. I can think of other uses, too.
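
There is no such hook today, but the maintenance step I have in mind would 
amount to something like the following sketch. It would have to run with 
sufficient privilege (e.g. as a SECURITY DEFINER function owned by a 
superuser), since ordinary roles cannot alter their own expiration date:

CREATE FUNCTION extend_password_expiration() RETURNS void
LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
    -- session_user (not current_user) so the definer context does not mask
    -- the connecting role; push its expiration out 30 days from today
    EXECUTE 'ALTER ROLE ' || quote_ident(session_user)
         || ' VALID UNTIL ' || quote_literal((CURRENT_DATE + 30)::text);
END;
$$;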


I currently achieve this functionality with an event triggered in 
an end-user application, but I'd really like it to happen in the 
data base so that every application that accesses this data base 
doesn't have to recreate that particular functionality -- and 
also so as to eliminate the problem of the functionality not being 
implemented by other applications outside our control that access 
the data base.







Re: [GENERAL] Primary Key with serial

2008-03-29 Thread Berend Tober

x asasaxax wrote:

   I have the following table:

create table product(cod serial, user_cod bigint,
  constraint product_fk Foreign Key(user_cod) references user(cod),
  constraint product_pk Primary Key(cod, user_cod));

What I want to happen is that:

user_cod   cod
1          1
1          2
1          3
2          1
3          1
3          2


> Can serial do that? ...

No.

> ...what can  i do to make this happen?


http://archives.postgresql.org/pgsql-general/2006-08/msg00744.php





Re: [GENERAL] Call for design: PostgreSQL mugs

2013-09-06 Thread Berend Tober

Peter Geoghegan wrote:

On Thu, Sep 5, 2013 at 8:22 AM, Merlin Moncure  wrote:

I'm still partial to this guy:

http://www.commandprompt.com/images/mammoth_versus_dolphin_500.jpg



I dislike that image, and always have. ...


I agree with Mr. Geoghegan.

That image should be eradicated from the portfolio of PostgreSQL 
professionals.








Re: [GENERAL] Whole record returned in stead of field

2013-10-03 Thread Berend Tober

Johann Spies wrote:

SELECT  A.article_id, A publication_year


You are missing a "." in what you think is the second 
column (after the "A").





Re: [GENERAL] create a script which imports csv data

2012-06-28 Thread Berend Tober

Robert Buckley wrote:

I have to create a script which imports csv data into postgresql.
The csv data is automatically created from an external database
so I have no influence over which columns etc are downloaded.

How can I best create a table for the import?



This is what I do:

1) I have a table defined for import which reflects the CSV 
structure of the data to be imported, i.e., since I know what the 
fields are on each line of the CSV, this template table has 
columns defined to accommodate each known field. This table never 
actually gets data written to it.


2) To import data, my script creates a temporary import table 
LIKE the import template table.


3) The script then transfers and transforms the data from the 
temporary import table to another permanent table that has the 
structure, including a primary key, that is more useful for my 
purposes. (It omits some of the columns which I do not really 
need from the CSV, uses a different name for one column, and adds 
some reference information. You could do calculations here as well.)


4) The temporary import table is deleted at the end of the import 
session.


Here is a sanitized (names changed to protect the innocent) 
version of the script (the script parameter '$1' is the name of 
the CSV file):


#!/bin/bash

# This script imports a CSV file of transactions from Discover.

psql mydb <<-_END-OF-SCRIPT_
CREATE LOCAL TEMPORARY TABLE i (LIKE 
my_financial_schema.import_discover_card);

COPY i
(transaction_date, post_date, description, amount, category, 
share, net, type, paid_date)

FROM '$1'
WITH (FORMAT CSV, DELIMITER ',', QUOTE '"');
INSERT INTO my_financial_schema.my_permanent_record_table(
transaction_date,
paid_date,
reference,
category,
amount,
description
)
SELECT
  transaction_date,
  paid_date,
  'Discover Card',
  type,
  net,
  description
  FROM i;
DROP TABLE i;
_END-OF-SCRIPT_




Re: [GENERAL] Sequence moves forward when failover is triggerred

2012-07-11 Thread Berend Tober

Craig Ringer wrote:

On 07/11/2012 07:23 AM, Andy Chambers wrote:


I think I made a poor decision by having our application
generate checkbook numbers on demand using sequences.


Sure did. Sequences are exempt from most transactional rules;
that's why they're fast and lock-free.



This may be another case for the "gap-less" sequence (I hate that 
term ... please let's call it a "uniformly-increasing sequence").


http://archives.postgresql.org/pgsql-general/2006-08/msg00744.php







[GENERAL] How to ;ist all table foreign key dependency relationships

2012-07-22 Thread Berend Tober
I am interested in listing all the pairwise foreign key dependencies 
between tables by name, i.e., if I have table A with some primary key 
column A.id, and table B has a foreign key referencing A.id, then table 
B depends on A.


I found some things by Google searching, but most of these were not 
straightforward SQL. I found one approach that utilized a Ruby script, 
for instance, and that approach seemed too complicated.


I envision defining a view that will produce the results ... so I am 
aiming for pure SQL. I played around with how I might do this a little 
bit (see below), but figured maybe someone already has it, or there is 
some information_schema or pg_catalog entity that I did not notice. (I 
did see a some tables/views in pg_catalog and information_schema that 
look like they might provide the basic information but I was not sure 
that exactly what I want is there ... if so, please point it out!)


My preliminary attempt was to capture the output of the following 
command to get the SQL statements used to list dependencies for a 
particular table "public.city" in a data base named "home":


psql -E -c'\d+ public.city' home


Messing around with that output, I came up with the following query to 
list all the dependencies defined by foreign keys:


SELECT
n1.nspname AS primary_key_ns,
c1.relname AS primary_key_table,
n2.nspname AS foreign_key_ns,
c2.relname AS foreign_key_table
FROM pg_catalog.pg_constraint c
JOIN ONLY pg_catalog.pg_class c1 ON c1.oid = c.confrelid
JOIN ONLY pg_catalog.pg_class c2 ON c2.oid = c.conrelid
JOIN ONLY pg_catalog.pg_namespace n1 ON n1.oid = c1.relnamespace
JOIN ONLY pg_catalog.pg_namespace n2 ON n2.oid = c2.relnamespace
WHERE c1.relkind = 'r' AND c.contype = 'f'
ORDER BY 1,2,3,4;

I was hoping mailing list participants could review this to make sure it 
is giving me what I expect (as described initially) ... and suggest 
improvements.





Re: [GENERAL] obtain the difference between successive rows

2012-10-20 Thread Berend Tober

Thalis Kalfigkopoulos wrote:

On Sat, Oct 20, 2012 at 8:02 AM, Raymond O'Donnell  wrote:

On 20/10/2012 11:54, ochaussavoine wrote:

I have a table 'tmvt' with a field 'created' in the row, and would like to
compute the difference between successive rows. The solution I found is:


I think you can do it with a window function.



In particular you're looking probably for the lag() window function.


What about if there is more than one column you want the 
difference for (... coincidentally I am writing an article on this 
topic right now! ...), say a table which is used to record a 
metered quantity at not-quite regular intervals:


CREATE TABLE electricity
(
  current_reading_date date,
  current_meter_reading integer
);


with sample data:


'2012-09-07',2158
'2012-10-05',3018



and I want an output such as:


Meter Read on October 5

Current  Previous  kWh
Reading  Reading  Used
---
3018   -2158   =860

Number service days = 28


I am working on a write-up of a neat solution using CTE's, but 
would be interested in other's views.






Re: [GENERAL] obtain the difference between successive rows

2012-10-20 Thread Berend Tober

Thalis Kalfigkopoulos wrote:

SELECT current_meter_reading - lag(current_meter_reading) OVER(ORDER
BY current_reading_date) AS kWh_diff, extract('days' FROM
current_reading_date - lag(current_reading_date) OVER(ORDER BY
current_reading_date)) as num_service_days FROM mytable;


How would you get the previous reading (and perhaps the previous 
read date) to also appear on the same output row? The sample 
table with the subtraction I showed for illustration is literally 
what is printed on the bill ... they are not just presenting the 
quantity used and the number of days, but actually the dates and 
meter readings used to do the arithmetic.







Re: [GENERAL] obtain the difference between successive rows

2012-10-20 Thread Berend Tober

Raymond O'Donnell wrote:

On 20/10/2012 17:02, Berend Tober wrote:

Thalis Kalfigkopoulos wrote:
How would you get the previous reading (and perhaps the previous read
date) to also appear ...


Just include them in the SELECT:




Well, that is surprisingly easy!

How about this then: the table includes data for more than one 
meter. I moved (I've been keeping this data for two decades ... 
yes, I know...) to a new house, and in the new house, the utility 
company has replaced the meter (one of those "smart" meters). So 
the table has a foreign key reference to the primary key 
identifying the meter:


CREATE TABLE electric
(
  electric_meter_pk integer,
  reading_date date,
  meter_reading integer
);

with sample data:

 2 | 1997-04-14   |0
 2 | 1997-05-08   |  573
 2 | 1997-06-12   | 1709
 ...
 2 | 2009-09-14   |152941
 3 | 2009-06-26   | 68502
 3 | 2009-08-13   | 69738
...
 3 | 2012-07-06   |118953
 3 | 2012-07-18   |119185
 4 | 2012-07-18   | 0
 4 | 2012-08-06   |   887
 4 | 2012-09-07   |  2158
 4 | 2012-10-05   |  3018


Your suggestion almost worked as is for this, except that you 
have to note that readings for meter #2 and meter #3 overlap (I 
briefly owned two houses), and that seemed to confuse the lag() 
function:


SELECT
  electric_meter_pk,
  lag(reading_date)
 OVER(ORDER BY reading_date) as prev_date,
  reading_date,
  lag(meter_reading)
 OVER(ORDER BY reading_date) AS prev_reading,
  meter_reading,
  meter_reading - lag(meter_reading)
 OVER(ORDER BY reading_date) AS kWh_diff,
  reading_date - lag(reading_date)
 OVER(ORDER BY reading_date) as num_service_days
FROM electric
order by 1,3;

 2 | 2009-04-09 | 2009-05-11 | 145595 | 146774 |  1179 |32
 2 | 2009-05-11 | 2009-06-10 | 146774 | 148139 |  1365 |30
 2 | 2009-06-26 | 2009-07-14 |  68502 | 149808 | 81306 |18
 2 | 2009-07-14 | 2009-08-12 | 149808 | 151584 |  1776 |29
 2 | 2009-09-12 | 2009-09-14 |  70934 | 152941 | 82007 | 2
 3 | 2009-06-10 | 2009-06-26 | 148139 |  68502 |-79637 |16
 3 | 2009-08-12 | 2009-08-13 | 151584 |  69738 |-81846 | 1
 3 | 2009-08-13 | 2009-09-12 |  69738 |  70934 |  1196 |30
 3 | 2009-09-14 | 2009-10-14 | 152941 |  71918 |-81023 |30
 3 | 2009-10-14 | 2009-11-11 |  71918 |  72952 |  1034 |28






Re: [GENERAL] obtain the difference between successive rows

2012-10-20 Thread Berend Tober

Berend Tober wrote:

Raymond O'Donnell wrote:

On 20/10/2012 17:02, Berend Tober wrote:

Thalis Kalfigkopoulos wrote:
How would you get the previous reading (and perhaps the
previous read
date) to also appear ...


Just include them in the SELECT:


Well, that is surprisingly easy!

How about this then: the table includes data for more than one
meter



Almost answering my own question. Adding the meter key to the lag:

SELECT
  electric_meter_pk,
  lag(reading_date)
 OVER(ORDER BY electric_meter_pk,reading_date)
 as prev_date,
  reading_date,
  lag(meter_reading)
 OVER(ORDER BY electric_meter_pk,reading_date)
 AS prev_reading,
  meter_reading,
  meter_reading - lag(meter_reading)
 OVER(ORDER BY electric_meter_pk,reading_date)
 AS kWh_diff,
  reading_date - lag(reading_date)
 OVER(ORDER BY electric_meter_pk,reading_date)
 as num_service_days
FROM home.electric
order by 1,3;

That works as far as lining up dates goes, except it does not 
cross the new-meter boundary gracefully:


 2 | 2009-07-14 | 2009-08-12 | 149808 | 151584 |1776 | 29
 2 | 2009-08-12 | 2009-09-14 | 151584 | 152941 |1357 | 33
*3 | 2009-09-14 | 2009-06-26 | 152941 |  68502 |  -84439 |-80
 3 | 2009-06-26 | 2009-08-13 |  68502 |  69738 |1236 | 48
 3 | 2009-08-13 | 2009-09-12 |  69738 |  70934 |1196 | 30
 ...
 3 | 2012-05-04 | 2012-06-07 | 116091 | 117469 |1378 | 34
 3 | 2012-06-07 | 2012-07-06 | 117469 | 118953 |1484 | 29
 3 | 2012-07-06 | 2012-07-18 | 118953 | 119185 | 232 | 12
*4 | 2012-07-18 | 2012-07-18 | 119185 |  0 | -119185 |  0
 4 | 2012-07-18 | 2012-08-06 |  0 |887 | 887 | 19
 4 | 2012-08-06 | 2012-09-07 |887 |   2158 |1271 | 32
 4 | 2012-09-07 | 2012-10-05 |   2158 |   3018 | 860 | 28


The first-row-initialization problem is what led me to consider 
a recursive CTE. I have something that works and does not use 
window functions, but I think it requires more detailed 
explanation than I have prepared at this time.





Re: [GENERAL] obtain the difference between successive rows

2012-10-20 Thread Berend Tober

Виктор Егоров wrote:

2012/10/20 Berend Tober :

Your suggestion almost worked as is for this, except that you have to note
that reading for meter #2 and meter #3 overlap ...

You can do
  … OVER(PARTITION BY electric_meter_pk ORDER BY reading_date)
to split you data by meter.




That looks like it works great! Much simpler-looking SQL than what I was 
working on.


Thanks!
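
For the archive, the query from my earlier message with that change applied 
looks roughly like this (same table and columns as before):

SELECT
  electric_meter_pk,
  lag(reading_date)
     OVER (PARTITION BY electric_meter_pk ORDER BY reading_date) AS prev_date,
  reading_date,
  lag(meter_reading)
     OVER (PARTITION BY electric_meter_pk ORDER BY reading_date) AS prev_reading,
  meter_reading,
  meter_reading - lag(meter_reading)
     OVER (PARTITION BY electric_meter_pk ORDER BY reading_date) AS kWh_diff,
  reading_date - lag(reading_date)
     OVER (PARTITION BY electric_meter_pk ORDER BY reading_date) AS num_service_days
FROM home.electric
ORDER BY 1, 3;
-- lag() now returns NULL on the first row of each meter instead of
-- reaching back into the previous meter's readings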





Re: [GENERAL] New Zealand Postgis DBA job vacancy

2012-12-26 Thread Berend Tober

pcr...@pcreso.com wrote:

Mat be of interest to someone here...

http://careers.eroad.co.nz/vacancies/showVacancy/11

Brent Wood




I attended a technical conference in the US recently, and someone 
posted an Auckland job flyer (not this same organization, 
though). He included a statement regarding New Zealand expatriate 
work visas and something about government bias against more 
experienced workers. He did not phrase it like that literally, of 
course, but rather emphasized that it was notably easier for 
foreigners under age 30 to get permission to work. Can anyone 
with direct experience comment on this government-sanctioned 
discrimination?







Re: [GENERAL] New Zealand Postgis DBA job vacancy

2012-12-26 Thread Berend Tober

Gavin Flower wrote:

On 27/12/12 07:54, Berend Tober wrote:

...regarding New
Zealand expatriate work visas and ... that it
was notably easier for foreigners under age 30 to get
permission to work. Can anyone with direct experience comment
on this government-sanctioned discrimination?


I think it might be that they want healthy workers who likely
will pay lots of tax before requiring too much medical treatment -
most people use the public health care system in New Zealand, so
it is assumed you will not be relying on private medical insurance.


That would be an obvious-enough inference to draw.

I guess I was hoping for more practical, direct insight, such as 
"Despite how skilled and a productive worker you might be, don't 
bother applying if you are anywhere near middle-age, or if you 
are likely to become middle-aged." There was this interesting 
1976 movie called "Logan's Run" about a dystopian future that 
similarly devalued the experienced.


The current job posting by Brent Wood does not specifically 
mention age requirements/restrictions, and does not specify any 
particular amount of experience, but they do in fact require 
"experience". Knowing that the visa quota system is biased 
against age, one might reasonably conclude that no one with more 
than 5 to 10 years experience would be able to qualify because of 
the government restrictions.


It might have been polite of the advertising organization to make 
that clear, as did the guy that posted the flyer at the technical 
conference.


I wonder, if an applicant were within the government-approved age 
range, but then worked long enough so as to exceed the limits, 
would their work visa suddenly be withdrawn, having contributed 
to the tax base and maybe established a family there, then be 
forced to vacate the premise simply because they aged out?








Re: [GENERAL] creating "a perfect sequence" column

2008-07-06 Thread Berend Tober

Jack Brown wrote:

Dear list,

I need some tips and/or pointers to relevant documentation implementing (what I chose to 
call) "a perfect sequence" i.e. a sequence that has no missing numbers in the 
sequence. I'd like it to auto increment on insert, and auto decrement everything bigger 
than its value on delete. There are many mechanisms (rules, triggers, sequences, locks 
etc.) but I'm not sure which combination would result in the most elegant implementation.

Oh, and if you know the right term for what I just described, I'd be more than 
pleased to hear it! :-)



This question comes up a lot. A term used in prior discussions is 
"gapless sequence".


What would be really more interesting for discussion on this 
community forum is a detailed description of your actual use case 
and requirements.






Re: [GENERAL] Advice on implementing counters in postgreSQL

2008-08-02 Thread Berend Tober

Marco Bizzarri wrote:
> Hi all.
>
> I need to keep a number of counters in my application; my counters are
> currently stored in a table:
>
> name | next_value | year
>
> The counters must be progressive numbers with no holes in between
> them, and they must restart from 1 every year. What I've done so far
> is to access them while in SERIALIZABLE ISOLATION LEVEL, with the
> following:
>
> SELECT next_value FROM counters WHERE name = 'name' for update;
> UPDATE counters SET next_value = next_value + 1 WHERE name = 'name';
> ...
> 2) while this works, it has the unfortunate behaviour to cause
> conflict between concurrent transactions; so, one of them has to be
> restarted and redone from scratch. Is there a way to avoid this
> behaviour? maybe with lock to tables?


The way I understand the documentation at

http://www.postgresql.org/docs/8.3/static/transaction-iso.html

and

http://www.postgresql.org/docs/current/static/explicit-locking.html

you should not have to use the serializable isolation level.

I would define the counter table so as to hold the last-used 
value, rather than the "next" value, and then do the UPDATE first.


As a consequence, assuming all this happens within a transaction 
of course, the SELECT FOR UPDATE syntax is not required either, 
because the UPDATE will grab a lock on the row and block other 
updates until the transaction is finished. That is, concurrency 
is protected and you don't have to restart any transactions, 
because subsequent transactions will just wait until the first 
one finishes due to the nature of the lock automatically acquired by 
the initial UPDATE statement.
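
A minimal sketch of that ordering, with the counter table holding the 
last-used value (the counter name and year are placeholders):

CREATE TABLE counters (
    name       text,
    year       integer,
    last_value integer NOT NULL DEFAULT 0,
    PRIMARY KEY (name, year)
);

BEGIN;
-- the UPDATE takes the row lock first; concurrent transactions queue here
UPDATE counters
   SET last_value = last_value + 1
 WHERE name = 'protocol' AND year = 2008;

-- within the same transaction this sees the incremented value
SELECT last_value
  FROM counters
 WHERE name = 'protocol' AND year = 2008;

-- ... use the value, then COMMIT; a ROLLBACK would release the number without a gap
COMMIT;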









Re: [GENERAL] Schema Upgrade Howto

2008-10-30 Thread Berend Tober

Thomas Guettler wrote:

Hi,

is there a schema upgrade howto? I could not find much with google.

There is a running DB and a development DB. The development DB
has some tables, columns and indexes added. What is the preferred way
to upgrade?

I see these solutions:
 - pg_dump production DB. Install schema only from dev DB, restore data
only from dump.
 - Use alter table.
 - Use a tool like apgdiff (never tried it).

I guess all ways will be possible. But what do you suggest?


Use three instances of the database: development, quality 
assurance testing, and production. DEV and QAT are occasionally 
refreshed from a pg_dump of PRD. Developers work against DEV for 
modify-compile-test-(doh, I broke it)-refix-compile-test cycles. 
All structural or development-related changes required to the 
data base are done with a SQL text file script. The script files 
are managed along with the source code in SVN. When developers 
are satisfied, the script is applied to QAT and then end-users 
test the modified application against QAT. When end-users sign 
off that they are satisfied, the same (*unmodified from as run 
against QAT*) script is run on PRD at the same time the same 
(*unmodified from as run against QAT*) application is deployed for 
production use.






Re: [GENERAL] serial data type usage

2008-11-06 Thread Berend Tober

EXT-Rothermel, Peter M wrote:

I have a table where I would like the primary key to be generated during
the insert.

Here is a simplified example:


CREATE TABLE employee_type
(
tname varchar(10) PRIMARY KEY,
id_prefix char(1),
...
);

tname     | id_prefix
----------+----------
worker    | W
manager   | M
executive | E

CREATE TABLE employee (
id varchar(10) PRIMARY KEY,
type varchar(10) REFERENCES employee_type,
...
);

When an employee of type 'worker' is inserted the id generated will have
a prefix "W" followed by a 6-digit number. W01, W02 ..
When the employee type is 'manager' the employee id is M01, M02
...
When the employee type is 'executive' the employee id is E01,
E02 ...

The sequences for each employee type are separate.

CREATE SEQUENCE worker_seqnum MINVALUE 1 MAXVALUE 99 ;
CREATE SEQUENCE manger_seqnum MINVALUE 1 MAXVALUE 99 ;
CREATE SEQUENCE executive_seqnum MINVALUE 1 MAXVALUE 99 ;


I have thought about using the serial data type for the employee.id but
I also want to automate the prepending of the { W, M, E } prefix.

Any suggestions?


Do not put the worker type and worker sequence value in the same 
column ... you're violating normal form. Keep them in separate 
columns in the employee table: drop the type column from employee 
(it's redundant given the first character of your proposed 
primary key, which, as I said, should be a separate column 
anyway), define a compound primary key (id_prefix, id) on it, and 
use a view to present the concatenated format you desire 
along with the redundant (but more verbose) type value by way of 
a join, as sketched below. Secondly, you don't need the three 
separate sequences, but you do need to sort of roll your own based 
on my description appearing at


http://archives.postgresql.org/pgsql-general/2004-04/msg00940.php
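
A rough sketch of the table layout and presentation view I mean (the per-type 
counter maintenance from the link above is left out here, and the view name is 
just illustrative):

CREATE TABLE employee_type (
    tname     varchar(10) PRIMARY KEY,
    id_prefix char(1) NOT NULL UNIQUE     -- 'W', 'M', 'E'
);

CREATE TABLE employee (
    id_prefix char(1) NOT NULL REFERENCES employee_type (id_prefix),
    id        integer NOT NULL,
    PRIMARY KEY (id_prefix, id)
);

CREATE VIEW employee_presentation AS
SELECT e.id_prefix || to_char(e.id, 'FM000000') AS employee_id,
       t.tname AS employee_type
  FROM employee e
  JOIN employee_type t USING (id_prefix);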


-- BMT




Re: [GENERAL] Easy form of "insert if it isn't already there"?

2012-02-15 Thread Berend Tober

Chris Angelico wrote:

On Wed, Feb 15, 2012 at 5:26 PM, Bartosz Dmytrak  wrote:


e.g. You can use BEGIN... EXCEPTION END, good example of
such approach is
there: 
http://www.postgresql.org/docs/9.1/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE




I wonder why, in that example, you would not try the INSERT 
first, and if that fails, then do the update?
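
For reference, the insert-first variant of that documentation example would 
look something like this (a sketch only, reusing the same table and function 
names as the example):

CREATE TABLE db (a int PRIMARY KEY, b text);

CREATE OR REPLACE FUNCTION merge_db(key int, data text) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    BEGIN
        INSERT INTO db (a, b) VALUES (key, data);
    EXCEPTION WHEN unique_violation THEN
        -- the row was already there (or was inserted concurrently), so update it
        UPDATE db SET b = data WHERE a = key;
    END;
END;
$$;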






Re: [GENERAL] manage changes to views having depencies

2009-01-01 Thread Berend Tober

Eric Worden wrote:

Can anyone recommend a reasonably efficient system for changing a view
definition (say by adding a column) when it has a bunch of dependent
functions?

Right now I work with the output from pg_dump to recreate things after
doing "DROP VIEW ... CASCADE".  But the pg_dump schema output is only
approximately sorted by dependencies, and "create table..." is
sprinkled all through it.  That means I have to carefully comb through
and select the pieces I need.

Is there a way to just script the view definitions, then the
functions?  Or maybe I'm looking at it the wrong way?



I've run into a situation similar to yours but with views and 
foreign keys dependent on tables. The same process probably would work 
for you.


I use pgAdmin III to run my desired change script, which might 
include a DROP ... CASCADE or ALTER ... command(s), wrapped 
inside a BEGIN ... ROLLBACK block (since this may require a 
number of iterations). The messages in the pgAdmin output pane 
identify dependencies specifically related to the changes you 
intend to make.


Then for each dependency, I build out my script by inserting it 
in between the DROP ... CREATE commands identified for each 
dependency in each iteration of the process until all the 
dependencies are resolved.


This is not necessarily what I would think of as ideal, since it 
is not really very well automated to the extent I'd like to see, 
but it has worked, and it has proven "efficient enough" for a 
process that is not a routine, everyday task.





Re: [GENERAL] preserving data after updates

2005-03-04 Thread Berend Tober
> I use a modified form of option 3 with an ON UPDATE RULE; the update rule
> copies the row to an inherited table...

I just gotta say that THAT is one COOL use of PG inheritance! Do you find that
it works well and is robust and all the good stuff it seems like would be the
case?


-- Berend




Re: [GENERAL] Postgres mystery

2005-03-30 Thread Berend Tober
Michael Fuhr wrote:
On Wed, Mar 30, 2005 at 09:11:09AM +0200, Shaun Clements wrote:
 

Can anyone tell me what the problem is here:
I am inserting into a table via a stored procedure, to a few columns within
the table and postgres is throwing a 
CANNOT EXECUTE NULL QUERY.

> EXECUTE ''INSERT INTO table (column1, column2, column3,
> ''||quote_ident(column4)||'') values
> (''||quote_literal(RECORDNAME.column1)||'',''||quote_literal(RECORDNAME.column2)||'',
> stringvalue,''||quote_literal(RECORDNAME.column2)||'')'';
   

One of the operands to || is probably NULL, so the entire INSERT
string ends up being NULL.  Example:
SELECT 'abc' || 'def';
?column? 
--
abcdef
(1 row)

SELECT 'abc' || NULL;
?column? 
--

(1 row)
Looks like you need to check for NULL or use COALESCE to convert
NULL to something else.
 

That something else you ought to do appears in the documentation User 
Comments at

http://www.postgresql.org/docs/8.0/interactive/functions-conditional.html


[GENERAL] Preserving data after updates

2005-05-19 Thread Berend Tober
A few months ago, a question by Scott Frankel produced a suggestion from 
Greg Patnude which I found very exciting that had to do with using pg 
table inheritance to maintain an audit or row change history table. I've 
been testing Patnude's idea and ran into a problem, described below, and 
wanted to ask about work-around suggestions.

Testing had so far entailed occasionally dumping the production data 
base, restoring to DEV, and then modifying DEV to include several 
"history" tables, using a script similar to that which I documented on 
the PG web site. So today, I tried for the first time dumping DEV after 
making the history table additions and then testing the restore from the 
dump script so produced. The restore failed.

The problem is that one of my parent tables has table constraints:
CREATE TABLE person
(
 person_pk int4 NOT NULL DEFAULT nextval('"person_person_pk_seq"'::text),
 last_name varchar(24),
 first_name varchar(24),
 middle_name varchar(24),
 e_mail_address name,
 social_security_no varchar(11),
 CONSTRAINT person_e_mail_address CHECK 
(check_pattern((e_mail_address)::character varying, 'Internet E-Mail 
Address'::character varying)),
 CONSTRAINT person_name_check CHECK (((last_name IS NOT NULL) OR 
(first_name IS NOT NULL))),
 CONSTRAINT person_social_security_no CHECK 
(check_pattern(social_security_no, 'Social Security Number'::character 
varying))
)
WITHOUT OIDS;

I create the history table with
CREATE TABLE person_change_history(
   action VARCHAR(6),
   update_date TIMESTAMP NOT NULL DEFAULT NOW(),
   update_user NAME NOT NULL DEFAULT CURRENT_USER
   ) INHERITS (person) WITHOUT OIDS;
  
CREATE RULE person_ru AS ON UPDATE TO person
   DO INSERT INTO person_change_history
   SELECT *, 'UPDATE' FROM ONLY person WHERE person_pk = old.person_pk;

CREATE RULE person_rd AS ON DELETE TO person
   DO INSERT INTO person_change_history
   SELECT *, 'DELETE' FROM ONLY person WHERE person_pk = old.person_pk;
But after doing a dump of the modified data base, the script created by 
pg_dump wants to recreate the history table as

CREATE TABLE person_change_history
(
 person_pk int4 NOT NULL DEFAULT nextval('"person_person_pk_seq"'::text),
 last_name varchar(24),
 first_name varchar(24),
 middle_name varchar(24),
 e_mail_address name,
 social_security_no varchar(11),
 "action" varchar(6),
 update_date timestamp NOT NULL DEFAULT now(),
 update_user name NOT NULL DEFAULT "current_user"(),
 CONSTRAINT person_e_mail_address CHECK 
(check_pattern((e_mail_address)::character varying, 'Internet E-Mail 
Address'::character varying)),
 CONSTRAINT person_name_check CHECK (((last_name IS NOT NULL) OR 
(first_name IS NOT NULL))),
 CONSTRAINT person_social_security_no CHECK 
(check_pattern(social_security_no, 'Social Security Number'::character 
varying))
) INHERITS (person)
WITHOUT OIDS;

When I run the script to restore the dumped, modified, data base, psql 
raises an error when creating the history table because the table 
constraints already exist"

psql:paid-5434.sql:7678: ERROR:  constraint "person_e_mail_address" 
already exists for relation "person_change_history"

Any suggestion on how to get around this problem?
I don't want to have to manually modify the pg_dump output script so 
as to delete the constraint definitions from the history table 
definition, because that sort of manual intervention really gets in the 
way of good administrative procedures for disaster recovery if this 
scheme were to be implemented in the production data base.

-- BMT


Re: [GENERAL] Postgres in government

2005-05-19 Thread Berend Tober
Scott Marlowe wrote:
On Wed, 2005-05-18 at 21:24, Mark Steckel wrote:
 

...We are proposing that 
Postgres be used for the application database. Not too surprisingly we are 
being asked for additional information because Postgres is open source.
   

So is the implication that they think open source is a bad thing? I 
would think they would question a recommendation for using proprietary 
products!



Re: [GENERAL] Preserving data after updates

2005-05-19 Thread Berend Tober
Tom Lane wrote:
Berend Tober <[EMAIL PROTECTED]> writes:
 

But after doing a dump of  the modified data base, the script created by 
pg dump wants to recreate the history table as
...
 CONSTRAINT person_name_check CHECK (((last_name IS NOT NULL) OR 
(first_name IS NOT NULL))),
   

Hmm, it shouldn't do that ... and in a quick test here I couldn't
reproduce any such bug.  What version of pg_dump are you using?
Sorry I failed to specify. Production version is 7.3.1 (change is 
hard!), although I originally worked out the implementation on version 
8. I bet that is the problem.




Re: [GENERAL] preserving data after updates

2005-05-19 Thread Berend Tober
Greg Patnude wrote:
Yeah… this is where the inheritance model gets a little funky… What do 
you have “SQL_INHERITANCE” set to when you dump the database? I’ve 
never tested this so I don’t know if it makes a difference being on or 
off when you dump a table…. You might try it and compare the two 
versions of the DDL for your inherited tables…

I set SQL_INHERITANCE to OFF because I have lots of existing queries in 
an application that do not include the "ONLY" option. I did try setting 
it back to the default ON, and the problem remained.

Note: postgreSQL recommends leaving SQL_INHERITANCE at “ON” and using 
the keyword “ONLY”

I’ve seen that before… The problem is that pg_dump creates the 
person_history table as a standalone table (look at the DDL) with the 
keyword “INHERITS” – My gut feeling is that this is probably a bug in 
pg_dump – I don’t think pg_dump really knows how to dump just the 
additional fields specified in an inherited table so it dumps the 
actual definition it finds in the system catalogs…

If you poke around in pg_catalog, you’ll find that the catalog 
definition is a combination of pointers to the parent table and any 
additional fields, constraints, rules, etc you defined when you 
created the inherited table.

My work-around has been to drop and recreate the history tables using 
the “original” SQL I used to create the inherited table in the first 
place…




Re: [GENERAL] Preserving data after updates

2005-05-19 Thread Berend Tober
Tom Lane wrote:
The case I tested seems to work in 7.3 as well:
CREATE TABLE person (last_name varchar(24),
 first_name varchar(24),
CONSTRAINT person_name_check CHECK (((last_name IS NOT NULL) OR
(first_name IS NOT NULL))));

CREATE TABLE person_change_history(
   action VARCHAR(6),
   update_date TIMESTAMP NOT NULL DEFAULT NOW(),
   update_user NAME NOT NULL DEFAULT CURRENT_USER
   ) INHERITS (person);
pg_dump puts the CONSTRAINT only on person, as it should.  I'm testing
7.3.10 but I don't see any changes in the 7.3 CVS log that look related.
Can you put together a reproducible test case?
 

I tried a simpler example than my original, as you have, and the problem 
behavior didn't manifest, but it still happens in my dev copy of my 
production database. The immediately obvious difference between the 
simpler example, like yours, and the actual case in which the problem 
manifests is that in the problem case two of the table constraints call a 
user-defined function "check_pattern()" (which tests the column value 
against a regular expression), i.e.

CREATE OR REPLACE FUNCTION public.check_pattern("varchar", "varchar")
 RETURNS bool AS
'
DECLARE
 l_value ALIAS FOR $1;
 l_pattern ALIAS FOR $2;
 l_row RECORD;
BEGIN
 IF (l_value IS NOT NULL) AND (LENGTH(l_value) > 0) THEN
IF EXISTS(SELECT 1 FROM public.regular_expression WHERE 
UPPER(description) = UPPER(l_pattern)) THEN
  SELECT INTO l_row regular_expression, user_message FROM 
public.regular_expression WHERE UPPER(description) = UPPER(l_pattern);
  IF NOT (l_value ~ l_row.regular_expression) THEN
RAISE EXCEPTION \'Invalid %. %\', l_pattern, l_row.user_message;
  END IF;
END IF;
 END IF;
 RETURN TRUE;
END;'  LANGUAGE 'plpgsql' VOLATILE;

in the definition:
CREATE TABLE person
(
 person_pk int4 NOT NULL DEFAULT nextval('"person_person_pk_seq"'::text),
 last_name varchar(24),
 first_name varchar(24),
 middle_name varchar(24),
 e_mail_address name,
 social_security_no varchar(11),
 CONSTRAINT person_pkey PRIMARY KEY (person_pk),
 CONSTRAINT person_e_mail_address CHECK 
(check_pattern((e_mail_address)::character varying, 'Internet E-Mail 
Address'::character varying)),
 CONSTRAINT person_name_check CHECK (((last_name IS NOT NULL) OR 
(first_name IS NOT NULL))),
 CONSTRAINT person_social_security_no CHECK 
(check_pattern(social_security_no, 'Social Security Number'::character 
varying))
)
WITHOUT OIDS;




Re: [GENERAL] Preserving data after updates

2005-05-19 Thread Berend Tober
Tom Lane wrote:
What do you get from
select conname, consrc from pg_catalog.pg_constraint
where contype = 'c' and conrelid = 'person'::regclass;
 

 conname  | consrc  
---+-
person_e_mail_address | public.check_pattern((e_mail_address)::character varying, 'Internet E-Mail Address'::character varying)
person_name_check | ((last_name IS NOT NULL) OR (first_name IS NOT NULL))
person_social_security_no | public.check_pattern(social_security_no, 'Social Security Number'::character varying)
(3 rows)


select conname, consrc from pg_catalog.pg_constraint
where contype = 'c' and conrelid = 'person_change_history'::regclass;
 

 conname  |  consrc  
---+--
person_social_security_no | check_pattern(social_security_no, 'Social Security Number'::character varying)
person_name_check | ((last_name IS NOT NULL) OR (first_name IS NOT NULL))
person_e_mail_address | check_pattern((e_mail_address)::character varying, 'Internet E-Mail Address'::character varying)
(3 rows)


AFAICS from looking at the 7.3 pg_dump source, it should suppress any
constraint on person_change_history that looks identical to one of the
parent table's constraints in this query.
 

Interesting. The consrc column values differ in that the explicit schema 
qualification on the function calls is missing for the descendent table. 
So, you think maybe if I remove the explicit schema qualification from 
the function calls in the constraint declarations on the person table 
that that might fix it? Yup! That does it!

Thanks for your help.
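
For the archives, the fix amounted to re-declaring those two constraints 
without the explicit schema qualification, roughly like the following 
untested sketch (it reuses the person table above, and assumes public is 
in the search_path when the ADD CONSTRAINT runs):

ALTER TABLE person DROP CONSTRAINT person_e_mail_address;
ALTER TABLE person ADD CONSTRAINT person_e_mail_address
    CHECK (check_pattern(e_mail_address::character varying,
                         'Internet E-Mail Address'::character varying));

ALTER TABLE person DROP CONSTRAINT person_social_security_no;
ALTER TABLE person ADD CONSTRAINT person_social_security_no
    CHECK (check_pattern(social_security_no,
                         'Social Security Number'::character varying));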


Re: [GENERAL] Preserving data after updates

2005-05-19 Thread Berend Tober
Tom Lane wrote:
What do you get from
select conname, consrc from pg_catalog.pg_constraint
where contype = 'c' and conrelid = 'person'::regclass;
 

          conname          |                                                  consrc
---------------------------+----------------------------------------------------------------------------------------------------------
 person_e_mail_address     | public.check_pattern((e_mail_address)::character varying, 'Internet E-Mail Address'::character varying)
 person_name_check         | ((last_name IS NOT NULL) OR (first_name IS NOT NULL))
 person_social_security_no | public.check_pattern(social_security_no, 'Social Security Number'::character varying)
(3 rows)


select conname, consrc from pg_catalog.pg_constraint
where contype = 'c' and conrelid = 'person_change_history'::regclass;
 

          conname          |                                             consrc
---------------------------+-------------------------------------------------------------------------------------------------
 person_social_security_no | check_pattern(social_security_no, 'Social Security Number'::character varying)
 person_name_check         | ((last_name IS NOT NULL) OR (first_name IS NOT NULL))
 person_e_mail_address     | check_pattern((e_mail_address)::character varying, 'Internet E-Mail Address'::character varying)
(3 rows)


AFAICS from looking at the 7.3 pg_dump source, it should suppress any
constraint on person_change_history that looks identical to one of the
parent table's constraints in this query.
 

Interesting. The consrc column values differ in that the explicit schema 
qualification on the function calls is missing for the descendent table. 
So, you think maybe if I remove the explicit schema qualification from 
the function calls in the constraint declarations on the person table 
that that might fix it?

Yup! That does it! Thanks for your help!
But now, however, when restoring from the pg_dump output the script gets 
hung up over the fact that when the CREATE TABLE statements are executed 
the raw script can't find the check_pattern function, since it is 
declared in the public schema and these application-specific tables are 
(being tried to be) declared in a different schema. That is, the pg_dump 
output has lots of

SET search_path = public, pg_catalog;
and
SET search_path = paid, pg_catalog;
statements sprinkled throughout, and when a table is declared having the 
check_pattern function call constraint after the latter statement, then 
the function can't be found. I had to manually edit the pg_dump output 
script search path statements to read

SET search_path = paid, public, pg_catalog;
in order to make this all work right. Again, too much manual editing to 
tolerate for disaster recovery and for my frequent refresh of DEV and 
QAT from PRD for development and testing purposes.
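
If anyone wants to script that tweak rather than edit by hand, something 
along these lines with GNU sed's -i option would probably do (the dump 
file name and the schema names here are from my setup, adjust to taste):

sed -i 's/^SET search_path = paid, pg_catalog;$/SET search_path = paid, public, pg_catalog;/' mydatabase.sql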

Now what, oh most wise one?



Re: Inherited constraints and search paths (was Re: [GENERAL] Preserving

2005-05-20 Thread Berend Tober
Tom Lane wrote:
Berend Tober <[EMAIL PROTECTED]> writes:
 

Now what, oh most wise one?
  

OK, now I finally get the point: you are creating child tables in
different schemas than their parents live in.  This creates a problem
because reverse-listing of the constraints varies depending on what
the search path is.
 

Close but not exactly. In my case the child tables are in the same 
schema as the parent, but it is the function call referenced in the 
check constraint that lives in a different schema than the tables. 
However, as an alternative in developing this idea, I did consider the 
possibility of defining a separate schema where all the child tables 
would live so that the child tables could have the same name as the 
parent tables, since this particular implementation is such that the 
child tables represent change histories of the parent tables.

An example in CVS tip is:...
It's the same constraint, but the different reverse-listing fools
pg_dump into assuming that it's different.
At the moment I'm not seeing any really nice way to fix this.
 

If the pg_dump output produced "SET search_path" statements with the 
complete actual path required to find all objects in subsequent DDL 
statements, my world would be at peace. (But I have no idea how 
complicated it would be to implement that.)

It can be argued that we should actually prohibit dropping inherited
constraints, which'd eliminate that problem.  I seem to recall that this
has come up before and we explicitly decided against making such a
restriction ... but given that a dump/restore will cause the inherited
constraint to come back anyway, it can hardly be claimed that we really
support dropping them.
Comments anyone?
 

I like that argument to prohibit dropping inherited constraints.



Re: [HACKERS] Inherited constraints and search paths (was Re: [GENERAL]

2005-05-20 Thread Berend Tober
Simon Riggs wrote:
On Thu, 2005-05-19 at 23:27 -0400, Tom Lane wrote:
 

Berend Tober <[EMAIL PROTECTED]> writes:
   

Now what, oh most wise one?
 

OK, now I finally get the point: you are creating child tables in
different schemas than their parents live in.  
   

...
 

Comments anyone?
   

Best thing to do is to prevent people from creating child tables in
different schemas. Or at least advise against it.
Doing anything to restrict dropping of inherited constraints seems like
wasted effort and potentially annoying anyhow.
My partitioning efforts will eventually distinguish between inherited
and non-inherited constraints, since the former are fairly useless for
partition elimination. So I can't see a reason to care whether they are
there or not, if the user knows better.
 

The case in question was not one of the child table being in a different 
partition (do you mean schema?), although that arrangement was 
considered and rejected for other reasons during data base design. In 
this implementation, a function called for a table constraint was in a 
different schema. The function so called was defined in the public 
schema because it is a generic function that can be used by different 
applications, and some tables are relevant only to specific applications 
and so have their own, application-specific schema -- but they still can 
make use of shared definitions, i.e., this particular function, which 
is defined in the public schema.



Re: [GENERAL] [HACKERS] Inherited constraints and search paths (was

2005-05-20 Thread Berend Tober
Tom Lane wrote:
...
I just ran into another inheritance-related oddness. Well maybe it is 
not really an oddness -- you tell me.

The problem stems from the fact that I did not originally plan on using 
inhertiance and so did not include the ONLY keyword in the FROM clause 
of queries coded into my user interface application. To get around 
having to modify lots of queries in the application so as to include 
ONLY, I instead switched the configuration parameter SQL_INHERITANCE to 
OFF. This works fine for cases where I select from tables directly, 
i.e., the query correctly returns only the rows from the parent table.

However, when I do a select from a view, which itself does a select from 
a parent table, the query result does include the child table rows, 
i.e., the SQL_INHERITANCE  setting is ignored in this situation. Should 
the SQL_INHERITANCE  setting still rule?

TEST.SQL:
\set ON_ERROR_STOP ON
\connect - postgres
--DROP DATABASE test;
CREATE DATABASE test WITH TEMPLATE = template1;
\connect test postgres
SET search_path = public, pg_catalog;
CREATE TABLE person (
   person_pk serial NOT NULL,
   last_name character varying(24),
   first_name character varying(24),
   CONSTRAINT person_name_check CHECK (((last_name IS NOT NULL) OR (first_name 
IS NOT NULL)))
) WITHOUT OIDS;
CREATE TABLE person_change_history (
   "action" character varying(6),
   update_date timestamp without time zone DEFAULT now() NOT NULL,
   update_user name DEFAULT "current_user"() NOT NULL
)INHERITS (person) WITHOUT OIDS;
CREATE OR REPLACE RULE person_ru AS  ON UPDATE TO person DO 
   INSERT INTO person_change_history 
   SELECT *, 'UPDATE' FROM ONLY person WHERE (person.person_pk = old.person_pk ); 

/*
My views were originally created with the default SQL_INHERITANCE setting,
which results in PG not automagically inserting the ONLY keyword.
*/
SET SQL_INHERITANCE TO ON;
CREATE VIEW persons AS SELECT * FROM person;
/*
I set it to OFF so that I do not have to go back and do major 
modifications to the application.
*/
SET SQL_INHERITANCE TO OFF;

INSERT INTO person (first_name, last_name) VALUES ('Doug', 'Funny');
INSERT INTO person (first_name, last_name) VALUES ('Patty', 'Mayonaise');
SELECT * FROM person;
/*
person_pk | last_name | first_name 
---+---+
1 | Funny | Doug
2 | Mayonaise | Patty
(2 rows)
*/

SELECT * FROM person_change_history;
/*
person_pk | last_name | first_name | action | update_date | update_user 
---+---+++-+-
(0 rows)
*/

SELECT * FROM persons;
/*
person_pk | last_name | first_name 
---+---+
1 | Funny | Doug
2 | Mayonaise | Patty
(2 rows)
*/

-- A.O.K. so far.
UPDATE person SET last_name = 'Mayonnaise' WHERE last_name = 'Mayonaise';
/*
UPDATE 1
*/
SELECT * FROM person;
/*
person_pk | last_name  | first_name 
---++
1 | Funny  | Doug
2 | Mayonnaise | Patty
(2 rows)
*/
-- Still O.K.

SELECT * FROM person_change_history;
/*
person_pk | last_name | first_name | action |update_date| update_user 
---+---+++---+-
2 | Mayonaise | Patty  | UPDATE | 2005-05-20 17:10:53.81593 | postgres
(1 row)
*/
-- Still O.K.

SELECT * FROM persons;
/*
person_pk | last_name  | first_name 
---++
1 | Funny  | Doug
2 | Mayonnaise | Patty
2 | Mayonaise  | Patty
(3 rows)
*/
--Zing...ouch!




Re: [GENERAL] enebling regular user to create new users ?

2005-06-15 Thread Berend Tober

Zlatko Matić wrote:

I know that superusers are allowed to do everything on the database, 
but I consider this as dangerous. I want to  have some user group with 
rights of creating new users and giving them some authorizations, but 
without such wide power as superusers have. So,

I was thinking about two possible scenarios:
a) to allow regular users  to create new users
b) to restrict superuser's permissions
 
What is possible and what do you suggest ?


CREATE OR REPLACE FUNCTION create_user(name)
 RETURNS bool AS
'
DECLARE
 PWD VARCHAR;
 CMD VARCHAR;
BEGIN
 PWD := \'\'\'\' || get_random_string(8) || \'\'\'\';
 IF EXISTS(SELECT 1 FROM pg_user WHERE usename = $1) THEN
   RETURN FALSE;
 END IF;
 CMD := \'CREATE USER "\' || $1 || \'" WITH ENCRYPTED PASSWORD \' || 
PWD || \' IN GROUP gen_user\';

 EXECUTE CMD;
 RETURN TRUE;
END;
'
 LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER;
REVOKE ALL ON FUNCTION create_user(name) FROM public;
GRANT EXECUTE ON FUNCTION create_user(name) TO GROUP pseudo_dba;


CREATE OR REPLACE FUNCTION alter_group(name, bool, name)
 RETURNS bool AS
'
DECLARE
 l_group ALIAS FOR $1;
 l_create_user ALIAS FOR $2;
 l_username ALIAS FOR $3;
 CMD VARCHAR;
 MIN_SUPER_USER INTEGER := 1;
BEGIN
 IF (l_create_user NOTNULL)  THEN
   IF (l_create_user) THEN 
 PERFORM create_user(l_username);
 CMD := \'ALTER GROUP \' || l_group || \' ADD USER "\' || 
l_username || \'"\';

 EXECUTE CMD;
   ELSIF (l_group = \'gen_user\') THEN
 PERFORM drop_user(l_username);
   ELSE
 CMD := \'ALTER GROUP \' || l_group || \' DROP USER "\' || 
l_username || \'"\';

 EXECUTE CMD;
   END IF;
   IF (SELECT COUNT(*) FROM group_members WHERE groname = 
\'pseudo_dba\') < MIN_SUPER_USER THEN 
 RAISE EXCEPTION \'At least % super user(s) must be defined in 
order to create new user accounts.\', MIN_SUPER_USER;

   END IF;
 END IF;
 RETURN TRUE;
END;
'
 LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER;
REVOKE ALL ON FUNCTION alter_group(name, bool, name) FROM public;
GRANT EXECUTE ON FUNCTION alter_group(name, bool, name) TO GROUP pseudo_dba;

-- etc., etc., etc.,
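
Hypothetical usage, run as a member of the pseudo_dba group (and assuming 
the get_random_string() and drop_user() helpers referenced above exist in 
your database):

SELECT create_user('jsmith');                    -- creates the account, in group gen_user
SELECT alter_group('accounting', true, 'jdoe');  -- creates jdoe and adds her to group accounting
SELECT alter_group('accounting', false, 'jdoe'); -- later, removes jdoe from group accounting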




Re: [GENERAL] Backing up multiple databases

2005-06-17 Thread Berend Tober

Gavin Love wrote:



Here is the script I use for my daily backups nothing special but it 
works well. Just run it as a user with admin privs on the database. It 
will pull the list of all your databases except templates and dump 
them out.




That is pretty neat! Here is Gavin's script slightly modified with some 
extra features useful to me and maybe to you, too:


pg_backup.sh:

#!/bin/bash

# Subject:Re: [GENERAL] Backing up multiple databases
# From:   Gavin Love <[EMAIL PROTECTED]>
# Date:   Fri, 17 Jun 2005 15:52:34 +0100
# To: Jacob Atzen <[EMAIL PROTECTED]>
# CC: pgsql-general@postgresql.org
#
# Modified by Berend Tober  2005-06-17 to:
# a) include tcp port as command line parameter.
# b) include syntax help.
# c) include Postgresql version information in global.sql output file.
# d) append ".sql" file name suffix to dump output file.
# e) output to current directory.
  
SYNTAX="Usage: `basename $0` port"


if [ $# -ne 1 ]
then
   echo ${SYNTAX}
   exit 1
fi

PG_BIN=/usr/bin
OUT_DIR=.
PG_PORT=${1}
TODAY=$(date "+%Y/%m/%d")
BACKUP_DBS=`/usr/bin/psql -p ${PG_PORT} template1 -t -c "SELECT datname 
FROM pg_database WHERE datname NOT LIKE 'template_' ORDER BY datname;"`
VERSION_DBS=`/usr/bin/psql -p ${PG_PORT} template1 -t -c "SELECT '-- 
'||version();"`


mkdir -p $OUT_DIR/$TODAY

echo "Data base backup started at $(date)";

for i in $BACKUP_DBS
do
echo -n "Backing up $i"
$PG_BIN/pg_dump -p ${PG_PORT} -o -C $i > $OUT_DIR/$TODAY/$i.sql
echo -n "Compressing"
bzip2 -9 -f $OUT_DIR/$TODAY/$i.sql
echo "Done"
done

echo -n "Backing up globals"
echo $VERSION_DBS > $OUT_DIR/$TODAY/global.sql
$PG_BIN/pg_dumpall -p ${PG_PORT} -g >> $OUT_DIR/$TODAY/global.sql
echo "Done"

echo "Data base ended at $(date)";




Re: [GENERAL] chosing a database name

2005-07-13 Thread Berend Tober

Alvaro Herrera wrote:


On Wed, Jul 13, 2005 at 05:56:03PM +0200, Karsten Hilbert wrote:

 


we are developing GNUmed, a medical practice management
application running on PostgreSQL (you want your medical
data to be hosted by something reliable, don't you ;-)  We
are putting out our first release sometime in the next two
weeks.

The idea is to name the production database "gnumed0.1" for
version 0.1 (gnumed0.2 etc for upcoming releases). I do
realize the "." may force me to quote the database name in,
say, a CREATE DATABASE call.
   



I doubt you'll have any problems with the tools, but the quoting may
prove painful.  Why not replace the dot with an underscore? gnumed0_1
 



Or why bother including either? Just use sequential integers, maybe 
left-padded with zeros to make the name the same length for the first 
thousand or so releases?






Re: [GENERAL] How to create unique constraint on NULL columns

2005-07-15 Thread Berend Tober

Andrus wrote:


How to create constraint so that NULL values are treated equal and second
insert is rejected ?
 


Rethink your data design --- this behavior is required by the SQL
standard.
   



I have a table of users permissions by departments

CREATE TABLE permission (
 id serial,
 user_id CHAR(10) NOT NULL REFERENCES user,
 permission_id CHAR(10) NOT NULL  REFERENCES privilege,
 department_id CHAR(10)  REFERENCES department ,
 UNIQUE ( user_id, permission_id, department_id ) )

permission_id is a permission name:  Invoice, Waybill etc.

department _id is a code of department whose documents user is authorized to 
access.


if department _id  is NULL, user has access to all departments data.
 

By this design it is meaningless to have two records with same user_id and 
permission_id both having department_id NULL


So I want that Postgres does not allow to insert them.

How I should rethink this data design to be implemented in CREATE TABLE 
statement ?


 



"if department _id is NULL, user has access to all departments data."

This is your problem. You've assigned meaning to the "value" NULL.

CREATE TABLE permission (
 id serial,
 user_id CHAR(10) NOT NULL REFERENCES user,
 permission_id CHAR(10) NOT NULL  REFERENCES privilege,
 UNIQUE (user_id, permission_id));


CREATE TABLE permission_department (
 id serial,
 user_id CHAR(10) NOT NULL REFERENCES user,
 permission_id CHAR(10) NOT NULL  REFERENCES privilege,
 department_id CHAR(10)  REFERENCES department ,
 UNIQUE (user_id, permission_id, department_id));

Any person who is authorized to access documents of a department MUST 
have a corresponding row in permission_department: If they are 
authorized to view documents of all departments, then they must have a 
row corresponding to every department.





Re: [GENERAL] How to create unique constraint on NULL columns

2005-07-15 Thread Berend Tober

Andrus wrote:


"if department _id is NULL, user has access to all departments data."

This is your problem. You've assigned meaning to the "value" NULL.

CREATE TABLE permission (
id serial,
user_id CHAR(10) NOT NULL REFERENCES user,
permission_id CHAR(10) NOT NULL  REFERENCES privilege,
UNIQUE (user_id, permission_id));


CREATE TABLE permission_department (
id serial,
user_id CHAR(10) NOT NULL REFERENCES user,
permission_id CHAR(10) NOT NULL  REFERENCES privilege,
department_id CHAR(10)  REFERENCES department ,
UNIQUE (user_id, permission_id, department_id));

Any person who is authorized to access documents of a department MUST have 
a corresponding row in permission_department: If they are authorized to 
view documents of all departments, then they must have a row corresponding 
to every department.
   


I don't understand why the permission_department table is required ?
 



I didn't include this because I thought it would be obvious: You have to 
put a unique constraint on that table so as to eliminate the possibility 
of redundant departmental permission rows, as you thought  was your 
original problem.


If user is authorized to all departments, I can add separate row for each 
department to former permission table. So the permission_department table is 
not required at all
 

Except that when abusing the meaning of NULL you can add duplicate rows 
indicating permission for all departments redundantly, which is what you 
originally misidentified as being the problem for which you sought a 
means to put a unique constraint on NULL values. As the first respondent 
said, the problem IS with the design.


Unfortunately, this approach causes loss of information: it loses the fact 
that user is allowed to
see all departments data. If new department is added, this department should 
be made accessible

for all users which have marked as "access all departments".
 

That information is not lost, but it is a little more work to get it: 
You know how many departments there are. Any user that has a count of 
departments equal to the number of existing departments is an "all 
departments" user. You can thus use aggregation to identify the "all 
departments" users and then add a row for them corresponding to the new 
department.
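
A sketch of that aggregation, using the permission_department and 
department tables from earlier in the thread:

-- Identify the "all departments" users for each permission: anyone whose
-- department count equals the current number of departments.
SELECT pd.user_id, pd.permission_id
FROM permission_department pd
GROUP BY pd.user_id, pd.permission_id
HAVING COUNT(*) = (SELECT COUNT(*) FROM department);
-- Run this just before inserting the new department, then add one
-- permission_department row per user returned, naming the new department.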





Re: [GENERAL] help me learn

2005-09-12 Thread Berend Tober

suresh ramasamy wrote:

i'm new to postgreSQL as well as new to database concepts. please tell me 
how can i learn. i mean the easiest and fast way. Your help will be 
appreciated.




Make an appropriate posting to pgsql-jobs?



Re: [GENERAL] Implementing a change log

2005-09-19 Thread Berend Tober

Greg Sabino Mullane wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

 
 


My original intention was to keep two sets of tables. The first
containing only the working set of current records. The second
containing all prior versions. I haven't experimented with such a setup
yet and I'm wondering if it is even necessary. The alternative being to
keep only a single set of tables.
   

 
 


Can anyone relate their experiences with such a thing? Which approaches
should I take into consideration?
   



I like the multi-table approach; I use a schema named "audit" that contains
a copy of some of the important tables (sans constraints). The nice part is
that I can use the exact same table name, which makes things easier. A few
extra columns on each audit table track who made the change, what type it
was (insert, update, or delete [trigger event]), and the time of the change
[default timestamptz]. Throw in some triggers and you're done.

 

There was a very exciting discussion of this last May, in which Greg 
Patnude suggested the most insightful, and in hindsight obviously 
appropriate, use of table inheritance ever (IMHO). I slightly refined 
the idea and posted documentation comments at the time. See "User 
Comments" at


"http://www.postgresql.org/docs/8.0/interactive/tutorial-inheritance.html";

for something that should set you afire.




Re: [GENERAL] Implementing a change log

2005-09-20 Thread Berend Tober

Berend Tober wrote:


...See "User Comments" at

"http://www.postgresql.org/docs/8.0/interactive/tutorial-inheritance.html"; 



for something that should set you afire.


And, commenting on my own post, try this cool function:

/*
The following is based on suggestion by Mike Rylander posted on 
Postgresql-General
Sun, 18 Sep 2005 23:29:51 + 


Rylander's original suggestion employed a trigger and tracked
only row updates. My implementation makes use of rules and
handles both updates and deletions.
*/

\o output.txt
\set ON_ERROR_STOP OFF

DROP SCHEMA auditor CASCADE;
DROP SCHEMA test CASCADE;

\set ON_ERROR_STOP ON

-- Create a schema to contain all of our audit tables and the creator 
function

CREATE SCHEMA auditor;
  
CREATE OR REPLACE FUNCTION auditor.create_auditor(name, name)

 RETURNS bool AS
'
BEGIN
   -- This is the function that does the heavy lifting of creating 
audit tables

   -- and the triggers that will populate them.
  
   -- Create the audit table: auditor.{schema}_{table}

   EXECUTE \'
   CREATE TABLE auditor.\' || $1 || \'_\' || $2 || \' (
   update_action VARCHAR(6) NOT NULL,
   update_date TIMESTAMP NOT NULL DEFAULT NOW(),
   update_user NAME NOT NULL DEFAULT CURRENT_USER
   ) INHERITS (\' || $1 || \'.\' || $2 || \') WITHOUT OIDS;
   \';

   EXECUTE \'
   CREATE RULE \'|| $2 ||\'_ru AS ON UPDATE TO \'|| $1 ||\'.\'|| $2 
||\'

   DO INSERT INTO auditor.\'|| $1 ||\'_\'|| $2 ||\'
   SELECT OLD.*, \'\'UPDATE\'\';
   \';

   EXECUTE \'
   CREATE RULE \'|| $2 ||\'_rd AS ON DELETE TO \'|| $1 ||\'.\'|| $2 
||\'

   DO INSERT INTO auditor.\'|| $1 ||\'_\'|| $2 ||\'
   SELECT OLD.*, \'\'DELETE\'\';
   \';

   RETURN TRUE;
END;
'
 LANGUAGE 'plpgsql' VOLATILE;



/* BEGIN EXAMPLE */

CREATE SCHEMA test AUTHORIZATION postgres;

-- This option makes it unnecessary to use the "ONLY" keyword in your 
SELECT and UPDATE statements.

SET SQL_INHERITANCE TO OFF;

SET search_path = test, pg_catalog;
SET default_with_oids = false;

CREATE TABLE test.person (
   first_name character varying(24),
   last_name character varying(24),
   gender character(1),
   marital_status character(1)
) WITHOUT OIDS;

INSERT INTO test.person VALUES ('Charlie', 'Bucket', 'M', 'S');
INSERT INTO test.person VALUES ('Grandpa', 'Joe', 'M', NULL);
INSERT INTO test.person VALUES ('Veruca', 'Salt', NULL, 'S');
INSERT INTO test.person VALUES ('Augustus', 'Gloop', 'M', 'S');
INSERT INTO test.person VALUES ('Micheal', 'Teevee', 'M', 'S');
INSERT INTO test.person VALUES ('Violet', 'Beaureguard', 'M', 'S');

SELECT auditor.create_auditor('test', 'person');

UPDATE test.person set marital_status = 'M' WHERE last_name = 'Joe';
SELECT * FROM auditor.test_person;
/*
 first_name | last_name | gender | marital_status | update_action |        update_date         | update_user
------------+-----------+--------+----------------+---------------+----------------------------+-------------
 Grandpa    | Joe       | M      |                | UPDATE        | 2005-09-20 03:26:23.063965 | postgres
(1 row)
*/

UPDATE test.person set first_name = 'Joe', last_name = 'Bucket' WHERE 
last_name = 'Joe';

SELECT * FROM auditor.test_person;
/*
 first_name | last_name | gender | marital_status | update_action |        update_date         | update_user
------------+-----------+--------+----------------+---------------+----------------------------+-------------
 Grandpa    | Joe       | M      |                | UPDATE        | 2005-09-20 03:26:23.063965 | postgres
 Grandpa    | Joe       | M      | M              | UPDATE        | 2005-09-20 03:26:23.13654  | postgres
(2 rows)
*/

UPDATE test.person set gender = 'F' WHERE last_name = 'Salt';
SELECT * FROM auditor.test_person;
/*
 first_name | last_name | gender | marital_status | update_action |        update_date         | update_user
------------+-----------+--------+----------------+---------------+----------------------------+-------------
 Grandpa    | Joe       | M      |                | UPDATE        | 2005-09-20 03:26:23.063965 | postgres
 Grandpa    | Joe       | M      | M              | UPDATE        | 2005-09-20 03:26:23.13654  | postgres
 Veruca     | Salt      |        | S              | UPDATE        | 2005-09-20 03:26:23.175714 | postgres
(3 rows)
*/

DELETE FROM test.person WHERE last_name = 'Salt';
SELECT * FROM auditor.test_person;
/*
first_name

Re: [GENERAL] Implementing a change log

2005-09-20 Thread Berend Tober

Mike Rylander wrote:


On 9/20/05, Berend Tober <[EMAIL PROTECTED]> wrote:
 


/*
The following is based on suggestion by Mike Rylander posted on
Postgresql-General
Sun, 18 Sep 2005 23:29:51 +

Rylander's original suggestion employed a trigger and tracked
only row updates. My implementation makes use of rules and
handles both updates and deletions.
*/
   



I'm glad that was inspirational ...
 



That was indeed pretty cool.


...did in fact track deletions:
 



Guess I was too excited to actually read the whole thing more closely 
once I grasped the direction you were going!!



You may want to consider using the LIKE style of table copying, as it
strips all constraints from the new table.  It's safer IMHO, as this
way you wouldn't have to worry about the primary key being propagated
to the new table (and accidentally forgetting to remove it).
 

I'm glad you pointed that out because you reminded me that when I tried 
the original idea from Greg Patnude in Mar 2005 using inheritance, I did 
indeed run into a problem with constraints. The problem there I think 
was that I had a check constraint on the table for which I created the 
audit log table, but the check constraint was defined in a different 
schema than the original table. Something about the way inheritance 
table creation works found this a problematic situation. I'll have to 
revisit that and see if using LIKE overcomes that problem.
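
For reference, the LIKE flavor of the audit table would look something 
like this untested sketch, reusing the test.person example from my 
earlier post (LIKE copies the column names and types but, unlike 
INHERITS, no primary key or check constraints come along; the column 
order still matches, so the rules generated above should still line up):

CREATE TABLE auditor.test_person (
    LIKE test.person,
    update_action varchar(6) NOT NULL,
    update_date   timestamp  NOT NULL DEFAULT now(),
    update_user   name       NOT NULL DEFAULT current_user
) WITHOUT OIDS;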


I guess my original reason for using INHERIT rather than LIKE was that, 
having the audit history, I might at some point present a select view 
across both the base and descendant tables or something ("...if you 
record it, they (PHB's) will eventually ask for a report on it..."), but 
I haven't actually had an implementation where such an audit history 
table was actually required in production -- I'm just exercising the 
functionality and exploring the quirks in order to be prepared for when 
such a requirement is actually promulgated.


Any other significant distinguishing features of INHERIT versus LIKE for 
this kind of use that you (or others) can think of?





Re: [GENERAL] Securing Postgres

2005-10-05 Thread Berend Tober

L van der Walt wrote:


I would like to secure Postgres completly.

Some issues that I don't know you to fix:
1.  User postgres can use psql (...) to do anything.
2.  User root can su to postgres and thus do anything.
3. Disable all tools like pg_dump

How do I secure a database if I don't trust the administrators.
The administrator will not break the db but they may not view
any information in the databse.


It may be just me and my silly old-fashioned attitudes, but I kind of 
think that if your sys admin(s) cannot be trusted, you are pretty much 
screwed. And your hiring process needs fixing.


But be that as it may, maintaining physical security, i.e., keeping 
the host server in a locked room with restricted and recorded access 
that requires at least two persons present (so that collusion is required 
for tampering), disabling remote root login, and granting limited sys admin 
privileges with sudo (which records the sudoers' activities for auditing 
purposes) might be a way to accomplish what you are looking for.





Re: [GENERAL] Securing Postgres

2005-10-05 Thread Berend Tober

Tom Lane wrote:


L van der Walt <[EMAIL PROTECTED]> writes:
 


...I can use encryption to protect the data.
   



If you think encryption will protect you against someone with root
privileges, you're sadly mistaken.  ...

All the same points hold for SQL Server of course --- the fact that you
weren't aware of these risks doesn't mean they don't exist.
 



OUCH!!




Re: [GENERAL] Dump all except some tables?

2005-10-06 Thread Berend Tober

WireSpot wrote:

Is it possible to dump an entire database but to skip one or two 
tables? Or, conversely, to restore an entire dump except for one or 
two tables? (Although I'd prefer the first version.)


The only related option for both pg_dump and pg_restore is --table, 
which only takes 1 (one) table name. If only it accepted more than one 
I could've found a workaround.


Any idea, other than messing around with the dump file? I don't look 
forward to grepping a dump which is several tens of megabytes gzipped...


I'm considering doing a dump with --table for each table except the 
one or two in question. But I wonder, if I simply concatenate the 
resulting SQL dumps, will I get a valid dump? There are all kinds of 
foreign key contraints in place, and if the table data is not fed back 
in the right order it's useless.



I don't think you can limit the dump output precisely as you ask, but 
you can get the equivalent by doing a custom format dump, then use 
pg_restore to produce a archive listing, which you then edit so as to 
select specific objects you want to include/exclude, and then run 
pg_restore against that edited list file.
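
For what it's worth, later pg_dump releases (8.2 onward, if I remember 
right) also grew a -T/--exclude-table switch that can be given more than 
once, which covers the simple cases directly, e.g. (table names made up):

pg_dump -Fc -T log_archive -T session_junk mydb > mydb.dump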





Re: [GENERAL] getting around---division by zero on numeric

2005-10-19 Thread Berend Tober

Richard Huxton wrote:


Tim Nelson wrote:

I am getting division by zero on a calculated field ( sum(sales) is 0 ) 



It's a two-stage process, so you'll want a sub-query. Something like: ...


Thanks. That's a cool addition to my bag of tricks.
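
For the archives, the other trick that comes up for this is NULLIF, which 
turns a zero divisor into NULL so the division quietly yields NULL instead 
of raising an error; combined with the sub-query it looks roughly like 
this (the table and column names here are made up):

SELECT t.category,
       SUM(t.sales) / NULLIF(s.total_sales, 0) AS share_of_total
FROM sales_detail t,
     (SELECT SUM(sales) AS total_sales FROM sales_detail) s
GROUP BY t.category, s.total_sales;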



Re: [GENERAL] Duplicate Row Removal

2005-11-05 Thread Berend Tober

Dean Gibson (DB Administrator) wrote:


CREATE TABLE new_name AS SELECT DISTINCT * FROM old_name;

DROP TABLE old_name;

ALTER TABLE new_name RENAME TO old_name;



The problem with this technique is that it doesn't account for indexes, 
foreign key references, and other dependencies.


Another approach is to temporarily add an integer column, populate it 
with sequential values, and then use that new column to uniquely 
identify the rows that are otherwise duplicates. Then you can use 
aggregation to identify and delete the rows you don't need, followed by 
dropping the temporary extra column. HTH.
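
A rough sketch of that, with made-up table and column names:

-- 1. Add a temporary column that uniquely numbers every row.
ALTER TABLE old_name ADD COLUMN dedup_id serial;

-- 2. Keep the lowest-numbered row of each duplicate group, delete the rest.
--    List every "real" column of the table in the GROUP BY.
DELETE FROM old_name
WHERE dedup_id NOT IN (
    SELECT MIN(dedup_id)
    FROM old_name
    GROUP BY col1, col2, col3
);

-- 3. Drop the helper column again.
ALTER TABLE old_name DROP COLUMN dedup_id;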


-- BMT



On 2005-11-04 17:15, Peter Atkins wrote:


All,

I have a duplicate row problem and to make matters worse some tables 
don't have a PK or any unique identifier.

Anyone have any thoughts on how to remove dups?










[GENERAL] Most significant digit number formatting

2005-11-15 Thread Berend Tober
Say I want to format calculated numeric output to uniformly have a 
specific number of decimal places, say 3 digits right of the decimal 
point. My current understanding is that the TO_CHAR function is the 
appropriate way to accomplish this kind of numeric formatting. So for 
instance I might write


SELECT
project_number,
TO_CHAR(SUM(labor_hours), '999.999') AS total_hours
FROM labor_data

This is great as far as accomplishing the decimal part formatting, but 
it is only good provided I've included enough place holders for the 
integer part, i.e., in this example for numbers less than 1000, e.g.,


project_number    total_hours
05-100 ###.### (actual value is 10810.5)
05-125 285.000
05-150 404.500
05-200  44.000
05-54  66.000
05-59 ###.### (actual value is 2245.75)

So what I'm asking for is advice on how to create the least-significant 
digit formatting specifically, but without having to worry about 
exceeding the most-significant digit formatting specification. I don't 
see that explained in the documentation on TO_CHAR. 

I suppose one approach might be to guess what the biggest number might be, 
and then include an order of magnitude larger, e.g. 
TO_CHAR(SUM(labor_hours), '99.999') . But you know, good old Dr. 
Murphy will undoubtedly intervene and inevitably create a situation in 
which, whatever reasonable provisions were made originally, the limits 
will at some point be exceeded, causing the world as we know it to come 
to an end.
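
One compromise that seems to work for me (treat this as a sketch, not 
gospel) is to give TO_CHAR a generously wide picture and use the FM 
modifier to suppress the leading padding, with '0' placeholders forcing 
the trailing decimal digits:

SELECT
    project_number,
    TO_CHAR(SUM(labor_hours), 'FM999999990.000') AS total_hours
FROM labor_data
GROUP BY project_number;

The leading 9s just give headroom; FM strips the blanks they would 
otherwise leave, while the 0s keep the three decimal places printed.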


Regards,
Berend Tober






Re: [GENERAL] Most significant digit number formatting

2005-11-17 Thread Berend Tober


codeWarrior wrote:

If it is a numeric data column -- you probably want to use the "round" 
function:


SELECT round(1200.01, 3);
SELECT round(12.009, 2);

 

Interesting. I had tried that. After your message I tried again and 
encountered this interesting anomaly: while the ROUND function used in a 
query run in the SQL window of PgAdmin III does in fact force  output of 
trailing zero decimal digits to the extent specified, i.e.,


SELECT
   project_number,
   labor_hours,
   TO_CHAR(labor_hours, '999.999'),
   ROUND(labor_hours,3)
FROM time_data
LIMIT 5

"05-08",1974.," ###.###",1974.000
"05-100",10810.5000," ###.###",10810.500
"05-125",285.," 285.000",285.000
"05-150",404.5000," 404.500",404.500
"05-200",44.,"  44.000",44.000

Running the same query though a TQuery dataset object in Borland Delphi 
using the BDE truncates the trailing zeros from ROUND:


000-05-08    1974       ###.###    1974
000-05-100   10810.5    ###.###    10810.5
000-05-125   285        285.000    285
000-05-150   404.5      404.500    404.5
000-05-200   44          44.000    44

That is why I didn't realize ROUND was an option, but for me it still is 
not since the report is produced by a Delphi application. I suppose I 
can accomplish this formatting programmatically within the Delphi 
application, but I was hoping to have the data base do it directly.


Thanks,
Berend Tober


"Berend Tober" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
 


Say I want to format calculated numeric output to uniformly have a
specific number of decimal places, 


...




[GENERAL] Multi-parameter aggregates.

2005-11-21 Thread Berend Tober
I'm interested in defining a covariance aggregate function. (As a 
refresher, remember that covariance is a little bit like variance, but 
is between two variables:


cov(X,Y) = <XY> - <X><Y>,

where the angular brackets in this case denote taking the average. 
Variance is a special case when X and Y are the same.)


But the whole user-defined aggregate thing is tough to get a handle on. 
I'm not even sure if the direction I'm heading in below will actually 
work, but as far as I got, I'm stuck on not knowing how to define an 
aggregate that takes more than one variable as its argument, so that its 
use in SQL would look like, e.g.,


SELECT company, COVAR(year, sales) FROM annual_sales GROUP BY company;

Here is what I tried, and I'm wondering if the team here can help me 
make this work (or tell me that the whole approach makes no sense, 
etc.). All the DDL executes without syntactical errors until the last 
function definition, and the problem is with the "  BASETYPE=numeric" 
line, i.e., "ERROR:  AggregateCreate: function 
covariance_accum(numeric[], numeric) does not exist"


CREATE TYPE public._covariance AS
   (n integer, x numeric, y numeric, xy numeric);


CREATE OR REPLACE FUNCTION public.covariance_accum(_covariance, numeric, 
numeric)

   RETURNS _covariance AS '
   BEGIN
   _covariance.n := _covariance.n+1;
   _covariance.x := _covariance.x+$2;
   _covariance.y := _covariance.x+$3;
   _covariance.xy:= _covariance.xy+($1*$2);
   END;
   'LANGUAGE 'plpgsql' IMMUTABLE STRICT;
GRANT EXECUTE ON FUNCTION public.covariance_accum(_covariance, numeric, 
numeric) TO public;
COMMENT ON FUNCTION public.covariance_accum(_covariance, numeric, 
numeric) IS 'covariance aggregate transition function';



-- Need to include a check for N equal zero data points

CREATE OR REPLACE FUNCTION public.numeric_covariance(_covariance)
   RETURNS numeric AS '
   BEGIN
   (_covariance.xy/_covariance.n) - 
(_covariance.x/_covariance.n)*(_covariance.y/_covariance.n);

   END;
   'LANGUAGE 'plpgsql' IMMUTABLE STRICT;
GRANT EXECUTE ON FUNCTION public.numeric_covariance(_covariance) TO public;
COMMENT ON FUNCTION public.numeric_covariance(_covariance) IS 
'covariance aggregate final function';




CREATE AGGREGATE public.covariance(
 BASETYPE=numeric,
 SFUNC=covariance_accum,
 STYPE=numeric[],
 FINALFUNC=numeric_covariance,
 INITCOND='{0,0,0,0}'
);


/*
--I also tried this:

CREATE AGGREGATE covariance(
 BASETYPE='numeric, numeric',
 SFUNC=covariance_accum,
 STYPE=numeric[],
 FINALFUNC=numeric_covariance,
 INITCOND='{0,0,0,0}'
);

-- to no avail.
*/

Regards,
Berend






Re: [GENERAL] Multi-parameter aggregates.

2005-11-21 Thread Berend Tober

Hakan Kocaman wrote:


have you considered using pl/r.
http://www.joeconway.com/plr/

I think R got a covariance-function.
http://www.r-project.org/

 

That would be, like, the easy way. 


Thanks!
Berend






Re: [GENERAL] Multi-parameter aggregates.

2005-11-21 Thread Berend Tober

Michael Fuhr wrote:


On Mon, Nov 21, 2005 at 11:03:22AM -0500, Berend Tober wrote:
 

I'm interested in defining a covariance aggregate function. 
   



I think aggregates must take a single value, so the above won't
work as written.  However, in PostgreSQL 8.0 or later you could
define the aggregate's base type to be a composite type 



Thanks. I briefly explored something like that, using the existing POINT 
data type, but didn't press it too far, pending mailing list advice.
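
For the archives, a rough sketch of that composite-type idea (untested; 
it assumes a reasonably recent server with dollar quoting and the 
argument-list form of CREATE AGGREGATE, and all object names here are 
made up -- newer releases also ship built-in covar_pop/covar_samp, which 
makes this moot there):

CREATE TYPE xy AS (x numeric, y numeric);
CREATE TYPE cov_state AS (n bigint, sx numeric, sy numeric, sxy numeric);

-- Transition function: fold one (x,y) pair into the running sums,
-- skipping pairs where either value is NULL.
CREATE FUNCTION cov_accum(cov_state, xy) RETURNS cov_state AS $$
    SELECT CASE
        WHEN ($2).x IS NULL OR ($2).y IS NULL THEN $1
        ELSE ROW(($1).n + 1,
                 ($1).sx  + ($2).x,
                 ($1).sy  + ($2).y,
                 ($1).sxy + ($2).x * ($2).y)::cov_state
    END
$$ LANGUAGE sql IMMUTABLE;

-- Final function: cov(X,Y) = <XY> - <X><Y>, NULL when no pairs were seen.
CREATE FUNCTION cov_final(cov_state) RETURNS numeric AS $$
    SELECT CASE WHEN ($1).n = 0 THEN NULL
           ELSE ($1).sxy / ($1).n - (($1).sx / ($1).n) * (($1).sy / ($1).n)
           END
$$ LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE covariance(xy) (
    SFUNC     = cov_accum,
    STYPE     = cov_state,
    FINALFUNC = cov_final,
    INITCOND  = '(0,0,0,0)'
);

-- Usage against the hypothetical table from my first message:
-- SELECT company, covariance((year, sales)::xy) FROM annual_sales GROUP BY company;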


Regards,
Berend





[GENERAL] Function name variable within a non-trigger function

2005-11-24 Thread Berend Tober
I know that within a trigger function the function name can be referenced 
by the special variable TG_NAME, so I could raise an exception that 
identifies its source with a line like:


 RAISE EXCEPTION ''ERROR IN %'', TG_NAME;

Is there a similar set of special variables defined for "normal", i.e., 
non-trigger functions, too?






Re: [GENERAL] Can we convert from Postgres to Oracle !!???

2006-10-17 Thread Berend Tober

Sandeep Kumar Jakkaraju wrote:


Can we convert from Postgres to Oracle !!???


Umm, this would be the wrong forum for that.



  1   2   >