Re: [GENERAL] query and pg_dump problem on my postgresql 6.5.3/Redhat 6.2

2003-12-06 Thread Wind Wood
hi,
   The disk did have a problem; I used fsck to fix it, but the database
problem was still there.
   Then I read the PostgreSQL documentation and found this:
--
COPY stops operation at the first error. This should not lead to problems in the event 
of a COPY TO, but the target relation will, of course, be partially modified in a 
COPY FROM. The VACUUM query should be used to clean up after a failed copy.
--
 Then I executed the SQL "vacuum newses;" in psql, and it returned this message:
--
NOTICE:  Rel newses: Uninitialized page 16 - fixing
NOTICE:  Rel newses: Uninitialized page 17 - fixing
VACUUM
--

It seems VACUUM fixed something. I then retried the SQL statements that had
complained before, and they all work now; my PHP page works as well.

It's exciting that all the problems are gone, but I'm still not clear about
what happened and what VACUUM did. Can anyone explain it?


=== 2003-12-05, you wrote in your message: ===

>On Thursday 04 December 2003 14:55, 吴德文 wrote:
>> Help!
>>
>> A few days ago, my php page began to complain this:
>> --
>> Warning: PostgresSQL query failed: pqReadData() -- backend closed the
>> channel unexpectedly. This probably means the backend terminated abnormally
>> before or while processing the request.
>[snip]
>> NOTE:
>> I'm on Redhat 6.2 with Postgresql 6.5.3, the database named "news",
>> and the table is "newses", looks like this (dumped from "pg_dump -s -t
>> newses news"):
>
>One of the developers will probably be able to help, but bear in mind many are 
>in the USA/Canada and so you might have time-zone delays. It will be 
>suggested you upgrade to 7.3.5 or 7.4.0 as soon as possible. That might mean 
>upgrading from RedHat 6.2 as well.
>
>At present:
>1. Dump all the other tables, if you can
>2. Stop PostgreSQL 
>3. make a file backup of /var/data (or wherever your data is stored)
>
>OK - now at least you know things can't get any worse.
>
>In psql you can use \a to set unaligned output and \o filename to output 
>query results to a file. You can then try SELECT * FROM newses WHERE news_id 
>BETWEEN 1 AND 100, then 101-200 etc. This should let you recover a great deal 
>of your data if only one disk-block is damaged.
>
>From what you say, you should be able to recover your table's data. Then, I'd 
>recreate the database from your dumps.
>
>> But I found that pg_dump sometimes does not work on that very table, and
>> sometimes work with a long long time then error.
>
>This sounds like either a disk or memory error. I'd guess disk.
>
>-- 
>  Richard Huxton
>  Archonet Ltd
>
>
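For reference, Richard's chunked-dump idea in psql form (a sketch; it
assumes news_id is an integer key on newses):

    \a
    \o /tmp/newses_0001_0100.txt
    SELECT * FROM newses WHERE news_id BETWEEN 1 AND 100;
    \o /tmp/newses_0101_0200.txt
    SELECT * FROM newses WHERE news_id BETWEEN 101 AND 200;
    \o
    -- ...and so on in 100-row steps; a range that crashes the backend
    -- narrows the damage down to those rows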

= = = = = = = = = = = = = = = = = = = =


With best regards,
 
   Wind Wood
   [EMAIL PROTECTED]
2003-12-05 



[GENERAL] ERRORs after upgrade from 7.2.1 to 7.4 about nonexisting large objects

2003-12-06 Thread Dirk Försterling
Hi,

sorry for reposting, but it seems my message just hit nothing
but air. If I posted to the wrong list or did something else
wrong with the message, please let me know.
I really want to know what's going on, but I have still found nothing.
I do not know when and where the statements that lead to the error
messages are produced. It seems they are generated by postgres
internally, but I cannot see when or why. How can
I find out? I want to stop those errors.
  -dirk

 ----8<----8<----8<----

Hi all,

a few days ago, I upgraded from PostgreSQL 7.2.1 to 7.4, following
the instructions in the INSTALL file, including dump and restore.
All this worked fine without any error (message).
Since then, I found lots of the following in the postmaster output:

2003-11-29 15:19:54 [1359] ERROR:  large object 4838779 does not exist
2003-11-29 15:20:11 [1649] ERROR:  large object 4838779 does not exist
2003-11-29 15:20:11 [1657] ERROR:  large object 4838779 does not exist
2003-11-29 15:20:27 [1732] ERROR:  large object 4838779 does not exist
2003-11-29 15:20:49 [1956] ERROR:  large object 10204242 does not exist
2003-11-29 15:20:49 [1975] ERROR:  large object 4838779 does not exist
2003-11-29 15:21:49 [2784] ERROR:  large object 10204242 does not exist
2003-11-29 15:21:56 [3150] ERROR:  large object 10204242 does not exist
2003-11-29 15:22:49 [4053] ERROR:  large object 10204242 does not exist
2003-11-29 15:23:06 [4132] ERROR:  large object 4838779 does not exist
2003-11-29 15:24:47 [5114] ERROR:  large object 4838779 does not exist
2003-11-29 15:26:09 [6259] ERROR:  large object 4838779 does not exist
2003-11-29 15:26:27 [6515] ERROR:  large object 10204242 does not exist
I cannot find the source of those errors. It seems that everything is
basically working, but there are always those errors about the two
large objects.
I'd like to find out what went wrong and how I can repair it. I couldn't
find anything in the docs or the mailing lists.
To give more information about the messages, here are two more detailed
outputs, one per id:
2003-11-29 21:42:43 [2533] LOG:  0: statement: BEGIN
LOCATION:  pg_parse_query, postgres.c:464
2003-11-29 21:42:43 [2533] LOG:  0: statement: select proname, oid from pg_proc
    where proname = 'lo_open'   or proname = 'lo_close'
       or proname = 'lo_creat'  or proname = 'lo_unlink'
       or proname = 'lo_lseek'  or proname = 'lo_tell'
       or proname = 'loread'    or proname = 'lowrite'
LOCATION:  pg_parse_query, postgres.c:464
2003-11-29 21:42:43 [2533] ERROR:  42704: large object 10204242 does not exist
LOCATION:  inv_open, inv_api.c:128
2003-11-29 21:42:43 [2533] LOG:  0: statement: ROLLBACK
----8<----8<----

2003-11-29 21:43:48 [3047] LOG:  0: statement: BEGIN
LOCATION:  pg_parse_query, postgres.c:464
2003-11-29 21:43:48 [3047] LOG:  0: statement: select proname, oid from pg_proc
    where proname = 'lo_open'   or proname = 'lo_close'
       or proname = 'lo_creat'  or proname = 'lo_unlink'
       or proname = 'lo_lseek'  or proname = 'lo_tell'
       or proname = 'loread'    or proname = 'lowrite'
LOCATION:  pg_parse_query, postgres.c:464
2003-11-29 21:43:48 [3057] LOG:  0: statement: set datestyle to 'ISO';
select version(), case when pg_encoding_to_char(1) = 'SQL_ASCII' then
'UNKNOWN' else getdatabaseencoding() end;
LOCATION:  pg_parse_query, postgres.c:464
2003-11-29 21:43:48 [3047] ERROR:  42704: large object 4838779 does not exist
LOCATION:  inv_open, inv_api.c:128
2003-11-29 21:43:48 [3047] LOG:  0: statement: ROLLBACK
The logged statements seem to suggest that something went wrong with
the template database creation, or maybe with the dump/restore. But as I
wrote earlier: there was no visible error.
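
For reference, a quick way to check whether those two OIDs exist at all
(a sketch; it assumes the 7.4 pg_largeobject catalog):

    -- are the complained-about large objects present at all?
    SELECT DISTINCT loid FROM pg_largeobject
    WHERE loid IN (4838779, 10204242);

    -- list the large object OIDs that do exist; a dump/restore may have
    -- assigned new OIDs, leaving old, now-dangling OIDs in application rows
    SELECT DISTINCT loid FROM pg_largeobject ORDER BY loid;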
  -dirk

--
   D i r k   F ö r s t e r l i n g
   [EMAIL PROTECTED]  http://[EMAIL PROTECTED]/
-
   A child of five could understand this. Fetch me a child of five!






Re: [GENERAL] query and pg_dump problem on my postgresql 6.5.3/Redhat 6.2

2003-12-06 Thread Martijn van Oosterhout
Hmm, you do realise that 6.5 is *really* *really* old. Who knows how many
bugs there are. There have been *5* major releases since and many more minor
ones.

I remember back then, 6.5 had many quirks that required a vacuum to fix (or
couldn't be fixed at all). I remember that creating a temporary table inside
a transaction that later aborted would leave an orphaned file on disk.

You're going to have a hard time getting someone to help you with a version
that old.

Good luck,

On Sat, Dec 06, 2003 at 03:15:05PM +0800, Wind Wood wrote:
> hi,
>    The disk did have a problem; I used fsck to fix it, but the database
> problem was still there.
>    Then I read the PostgreSQL documentation and found this:
> --
> COPY stops operation at the first error. This should not lead to problems in the 
> event of a COPY TO, but the target relation will, of course, be partially modified 
> in a COPY FROM. The VACUUM query should be used to clean up after a failed copy.
> --
>    Then I executed the SQL "vacuum newses;" in psql, and it returned this message:
> --
> NOTICE:  Rel newses: Uninitialized page 16 - fixing
> NOTICE:  Rel newses: Uninitialized page 17 - fixing
> VACUUM
> --
> 
>   It seems VACUUM fixed something. I then retried the SQL statements that had
> complained before, and they all work now; my PHP page works as well.
> 
>   It's exciting that all the problems are gone, but I'm still not clear about
> what happened and what VACUUM did. Can anyone explain it?
> 
> 
> === 2003-12-05, you wrote in your message: ===
> 
> >On Thursday 04 December 2003 14:55, 吴德文 wrote:
> >> Help!
> >>
> >> A few days ago, my php page began to complain this:
> >> --
> >> Warning: PostgresSQL query failed: pqReadData() -- backend closed the
> >> channel unexpectedly. This probably means the backend terminated abnormally
> >> before or while processing the request.
> >[snip]
> >> NOTE:
> >> I'm on Redhat 6.2 with Postgresql 6.5.3, the database named "news",
> >> and the table is "newses", looks like this (dumped from "pg_dump -s -t
> >> newses news"):
> >
> >One of the developers will probably be able to help, but bear in mind many are 
> >in the USA/Canada and so you might have time-zone delays. It will be 
> >suggested you upgrade to 7.3.5 or 7.4.0 as soon as possible. That might mean 
> >upgrading from RedHat 6.2 as well.
> >
> >At present:
> >1. Dump all the other tables, if you can
> >2. Stop PostgreSQL 
> >3. make a file backup of /var/data (or wherever your data is stored)
> >
> >OK - now at least you know things can't get any worse.
> >
> >In psql you can use \a to set unaligned output and \o filename to output 
> >query results to a file. You can then try SELECT * FROM newses WHERE news_id 
> >BETWEEN 1 AND 100, then 101-200 etc. This should let you recover a great deal 
> >of your data if only one disk-block is damaged.
> >
> >From what you say, you should be able to recover your table's data. Then, I'd 
> >recreate the database from your dumps.
> >
> >> But I found that pg_dump sometimes does not work on that very table, and
> >> sometimes work with a long long time then error.
> >
> >This sounds like either a disk or memory error. I'd guess disk.
> >
> >-- 
> >  Richard Huxton
> >  Archonet Ltd
> >
> >
> 
> = = = = = = = = = = = = = = = = = = = =
>   
> 
> With best regards,
> 
>
>Wind Wood
>[EMAIL PROTECTED]
>   2003-12-05 
> 

-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> "All that is needed for the forces of evil to triumph is for enough good
> men to do nothing." - Edmund Burke
> "The penalty good people pay for not being interested in politics is to be
> governed by people worse than themselves." - Plato




Re: [GENERAL] Misplaced modifier in Postgresql license

2003-12-06 Thread Bruce Momjian
Robert Treat wrote:
> > > All of the arguments about license changes have been gone over in great
> > > detail in the archives (I think the last major go-round on the topic was
> > > in the summer of 2000).  No one who has been around long enough to
> > > remember those flame wars is interested in re-opening the topic.  Not
> > > even just to move a comma.
> > 
> > What we could do is add a blurb on our web site or in the FAQ clarifying
> > this issue.
> > 
> 
> Oh, you mean like this paragraph 
> 
> "The above is the BSD license, the classic open-source license. It has
> no restrictions on how the source code may be used. We like it and have
> no intention of changing it."
> 
> Which you added to the FAQ damn near two years ago !?!
> 
> http://developer.postgresql.org/cvsweb.cgi/pgsql-server/doc/FAQ.diff?r1=1.139&r2=1.140

Oh, that is in there. Good.  Seems we don't need anything else on our
end.  Someone can clearly point to that if they have concerns about our
wording being misinterpreted.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] transaction in progress

2003-12-06 Thread Bruce Momjian
Rick Gigger wrote:
> Is there a convenient way to tell in postgres if a transaction has been
> started or not?

libpq has PQtransactionStatus, though I wonder why we don't have this
visible via a read-only GUC variable to make it easier for other languages.

Anyone?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] postgresql locks the whole table!

2003-12-06 Thread Bruce Momjian
Greg Stark wrote:
> 
> Dr NoName <[EMAIL PROTECTED]> writes:
> 
> > My question is why??? The two insert operations do not
> > conflict with each other (at least not in the
> > real-world situation). Also, why does the foreign key
> > make a difference?
> 
> It's not locking the whole table, it's locking the record that the foreign key
> references. Note that they're both referencing the same foreign key.
> 
> It does this because it's afraid someone will go and delete that key before
> the transaction commits. It has to take a lock that will prevent someone from
> deleting the record (or updating the referenced column).
> 
> Unfortunately the only lock to choose from is an exclusive write lock. That's
> overkill as you've noticed. I think this is something multiple people would
> like to fix by introducing shared locks, but I wouldn't expect a solution
> soon.

As I remember, the implementation problem is that we do an exclusive row
lock right now by putting the transaction id on the row and setting a
special row-lock bit in the tuple.  Shared locks need to store multiple
transaction ids, and that is where we are stuck.  Shared memory is of
finite size, and the number of proc/row combinations is infinite, so it
seems we will need some shared structure with backing store to disk, and
that will be slow.

You know the maximum number of backends on startup, but I don't see how
that helps us.  If we could somehow know the unique combinations of
those backend ids that would be used for any row, we could reserve a
range of transaction ids to map them, but the number of combinations is
too large.

Our xid/command-counter is currently 64 bits, so if we only had a
maximum of 64 backends, we could use those bits to mark the backends
holding the locks rather than put the xid in there.  Of course, the
maximum number of backends can change over time, so that isn't really a
solution but more a brainstorm, but I do think shared memory bitmaps
might be in the final solution.

One idea would be to allocate 10k of shared memory for a shared lock
bitmap.  Assuming a maximum of 100 backends, each bitmap entry needs 100
bits, which gives roughly 800 lock combinations, numbered 0-799.  We would
put the bitmap number on the rows rather than the xid.  Of course, the
problems are that only that many combinations are supported, and
transactions have to get an exclusive lock on transaction commit to clear
their backend bits from the bitmap table so the rows can be reused.
Another refinement would be to use the row xid to store the xid for
single-backend locks, and use the bitmap table number only when there is
sharing of row locks by multiple backends.  That might reduce the
contention on the bitmap table.  If a backend wasn't involved in shared
locks, its bit wouldn't be set in the bitmap table, it would have nothing
to clear at commit, and it could read the table without a lock.
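
To make the behavior concrete, here is a minimal two-session sketch of
the conflict being discussed (hypothetical table names; the FK triggers
effectively do SELECT ... FOR UPDATE on the referenced row):

    CREATE TABLE parent (id integer PRIMARY KEY);
    CREATE TABLE child  (id integer PRIMARY KEY,
                         parent_id integer REFERENCES parent (id));
    INSERT INTO parent VALUES (1);

    -- Session 1:
    BEGIN;
    INSERT INTO child VALUES (100, 1);  -- row-locks parent id = 1

    -- Session 2 blocks here until session 1 commits or aborts:
    BEGIN;
    INSERT INTO child VALUES (101, 1);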

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] Making a tree with "millions and millions" of dynamic

2003-12-06 Thread Arjen van der Meijden
> Rick Gigger wrote:
> 
> I was glad to see this topic come up on the list as I was 
> about to start asking about some of these issues myself.  I 
> would like to discuss each of the methods I have researched 
> so far for doing trees in SQL and see if anyone has some 
> experience or insight into the topic that could help me.

Have a look here: http://www.scenic-route.com/program/db/lists.htm
It is quite a nice overview with some advantages and disadvantages of
each method.

> I have a lot of thoughts on this, but here is my first question:
>
> In the first article at the end of the section on materialized paths
> and the beginning of the nested set section Tropashko basically says
> that neither one really does a good job at answering "who are all of
> my ancestors" but that the materialized path method has a nice kludge
> that you can do with some funky string manipulation.

I don't see how the nested intervals query is better/faster than the
nested set approach, but both seem to be able to use indices.
The nested set uses a simple "where left between parent.left and
parent.right"; if you happen to have those numbers in your application's
memory, you could even build your query directly like that and save
yourself a self join.
Another approach I used is something like:

    SELECT *
    FROM   nset n1,
           (SELECT DISTINCT * FROM nset WHERE nset.id = $parentid) AS n2
    WHERE  n1.left BETWEEN n2.left AND n2.right

(the DISTINCT will force a materialization, which might or might not
improve the performance; in my tests it did)

With the materialized path you'll have to use a LIKE search (child.path
LIKE parent.path || '%'), where hopefully the optimiser notices that it
can chop off the end of the string and just look at the prefix.

> Is this true?  Celko in his articles seems to make it sound
> like this query will be very fast.  But Tropashko it seems

It'll use an index range-search, so I don't see how it can be faster,
except for a hash/index lookup or so (which is possible with the
transitive closure version of an adjacency list).

> is saying that it will be slow. Am I reading this right?  If
> so is Tropashko right?  Any ideas on this?  Any articles / papers
> that might shed some light on this specific question or the
> topic in general?

The above article is a nice one to read, I think.

I'll try to summarize my thoughts on them, since I'm looking into an
efficient tree approach myself as well.
All methods have both advantages and disadvantages, and it'll depend on
your situation whether that is a showstopper or not. All are bad when
you want to remove a node from the tree, but the insert/update penalties
depend on the situation.

* The adjacency list is lightning fast with inserts and updates, but a bit
clumsy with deletions (you need to connect the direct children to
another element, that's all). It'll be very fast with the two simple
queries "who are my direct children" and "who is my parent", but you
need either a recursive approach or some enhancement like a transitive
closure graph for queries like "who are all my ancestors" and "who are
all my descendants" (a quick sketch follows below).

So if you'll have a large tree, only a few levels deep, and a lot of
inserts and/or subtree movements, this seems to be the best approach.
And you can store "max int"-number of nodes.
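
As a sketch (a hypothetical node table; the queries are the obvious ones):

    CREATE TABLE node (
        id        integer PRIMARY KEY,
        parent_id integer REFERENCES node (id)  -- NULL for the root
    );

    -- fast: direct children, direct parent
    SELECT * FROM node WHERE parent_id = 42;
    SELECT p.* FROM node p, node c WHERE c.id = 42 AND p.id = c.parent_id;

    -- "all my ancestors" needs one self join per level, which is why a
    -- transitive closure table is the usual enhancement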

* The materialized path is not so very efficient in terms of storage; it
does inserts fast, but updates and deletions are slow. Retrieval of all
descendants is fast, retrieval of the ancestors is extremely trivial,
but retrieval of the direct parent and/or children is somewhat more
difficult (you'll need to look at the depth; if you do that often, a
functional index might be handy).
Perhaps the ltree implementation in contrib/ltree is worth a look, and
encoding the trees in much denser encodings (like a 256-bit encoding
I saw mentioned earlier, skipping the dot for just a single-bit
terminator) makes it less inefficient.

So if you do a lot of inserts, not so many updates and deletes, and have
much need for the entire list of ancestors/descendants, I suppose this
one is quite nice. But it is not so efficient in terms of storage, so
very large/deep trees impose a huge overhead, since each node stores its
entire path.
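
A sketch of the materialized path queries (again with a hypothetical
node table):

    CREATE TABLE node (
        id   integer PRIMARY KEY,
        path text NOT NULL           -- e.g. '1.17.42.'
    );

    -- all descendants of the node with path '1.17.':
    SELECT * FROM node WHERE path LIKE '1.17.%';

    -- ancestors come straight out of the path string, no query needed;
    -- direct children additionally constrain the depth, e.g. by counting
    -- the separators:
    SELECT * FROM node
    WHERE  path LIKE '1.17.%'
      AND  length(path) - length(replace(path, '.', '')) = 3;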

* The nested set is inefficient with insertions, updates and deletions,
but much more efficient with storage than the MP. To speed up the
process of selecting only the direct parent/children, you can use a depth
field (redundant, but handy), which'll save you one or two extra
self-joins.
For relatively static trees the nested set is good, but actively
changing trees are very bad for your performance. You can store "max
int"/2 nodes in this approach.
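
A sketch of the nested set queries ("left"/"right" are reserved words,
hence lft/rgt here):

    CREATE TABLE nset (
        id    integer PRIMARY KEY,
        lft   integer NOT NULL,
        rgt   integer NOT NULL,
        depth integer NOT NULL       -- redundant, but saves self joins
    );

    -- all descendants of node 42:
    SELECT c.*
    FROM   nset c, nset p
    WHERE  p.id = 42 AND c.lft BETWEEN p.lft AND p.rgt;

    -- direct children only, using the redundant depth column:
    SELECT c.*
    FROM   nset c, nset p
    WHERE  p.id = 42 AND c.lft BETWEEN p.lft AND p.rgt
      AND  c.depth = p.depth + 1;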

* The nested intervals use the same approach as the nested set, but with
a static encoding. This allows you to do very efficient inserts, so the
author claims. I have my doubts though: if you don't know the number of
children of your parent and just want to do "connect this child to that
parent", it'll be more difficult.

Re: Any *current* summary of postgres-r 7.2 status? (was Re: [GENERAL]

2003-12-06 Thread Bruce Momjian

Postgres-R is not ready for production, and development seems to have
stopped.  I don't know what multi-master solutions we can really
offer right now, but we do have several master/slave replication solutions.

---

Bopolissimus Platypus wrote:
> On Thursday 04 December 2003 00:34, Jan Wieck wrote:
> 
> > You must have misunderstood something there. The whole idea of
> > Postgres-R is how to "avoid" the need for 2-phase commit in a
> > synchronous replication system.
> 
> I've been looking at postgres-r and other temporary postgres replication
> solutions.  does anyone have a good *current* summary
> of postgres-r status?  from the website it looks like the working model 
> is based on 6.2.  is postgres-r 6.2 supposed to be good enough 
> for production?  that's what i'd assume "working model" means, 
> but confirmation from people who've actually stress tested it would 
> be great :).
> 
> the 7.2 version is the development version.  i don't see much detail 
> (documentation) though on how stable the 7.2 version is.  looking 
> at the docs in the download itself, everything seems to refer to generic 
> 7.2. i don't see anything that looks like postgres-r specific documentation.
> so i can't tell what things are implemented and what things still
> need to be implemented (but of course i might just not be looking
> hard enough for the data. to be honest, i'm just grepping for 
> "replication", opening files that look like they might be interesting 
> and scanning them, it's not like i've read everything in the doc 
> directory).  
> 
> in doc/TODO.detail/replication, there's a bunch of emails, but the
> last one seems to be from 2001.
> 
> there's src/backend/replication/README, but that seems to discuss
> only 6.2 work. i could do a diff, i suppose between the working model
> and 7.2, but i can't right now.  i'm behind a dialup modem and it would
> take too long to download 6.2 :).  maybe tomorrow at the office :).
> 
> but again, if someone has detailed and up-to-date information (summary
> or in detail) on the 7.2 postgres-r status, that would be incredibly
> cool :).
> 
> hmmm, there's a mailing list.  archives at:
> 
> http://gborg.postgresql.org/pipermail/pgreplication-general/
> 
> i didn't see any detailed summary of what's missing in 7.2 though.
> oh well, i'll just set up some test servers and just play with
> it for now, i guess.
> 
> thanks for any information.
> 
> tiger
> 
> -- 
> Gerald Timothy Quimpo  gquimpo*hotmail.com tiger*sni*ph
> http://bopolissimus.sni.ph
> Public Key: "gpg --keyserver pgp.mit.edu --recv-keys 672F4C78"
> 
> for whom self-parody appears to be a form of self-realization
>   -- http://slate.msn.com/id/2090554/?0si=-
> 
> 
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] xor for text

2003-12-06 Thread Tino Wildenhain
Hi Berend,

[EMAIL PROTECTED] wrote:
> Anyone have a good function for byte-by-byte XOR of text or varchar data
> (or does one exist already in pg which I missed in the documentation)?
> ~Berend Tober
You should have a look at contrib/pgcrypto, which offers a wider range
of cryptographic functions for postgres. I don't think there
is an XOR yet, but OTOH, XOR is inherently insecure if it's
cryptography you are trying.
However, you might be able to use the examples to write
your own XOR function or use one of the other language
modules to do so.
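For example, a minimal PL/pgSQL sketch (untested; it assumes equal-length
inputs and that plpgsql is installed in your database):

    CREATE FUNCTION xor_bytea(bytea, bytea) RETURNS bytea AS '
    DECLARE
        a ALIAS FOR $1;
        b ALIAS FOR $2;
        r bytea := a;
    BEGIN
        -- XOR byte by byte; assumes length(a) = length(b)
        FOR i IN 0 .. length(a) - 1 LOOP
            r := set_byte(r, i, get_byte(a, i) # get_byte(b, i));
        END LOOP;
        RETURN r;
    END;
    ' LANGUAGE plpgsql;

    -- text in and out via the escape encoding:
    SELECT encode(xor_bytea(decode('secret', 'escape'),
                            decode('keykey', 'escape')), 'escape');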
HTH
Tino Wildenhain


[GENERAL] Feedback? Is there interest in a PostgreSQL-based CRM suite friendly to shared hosting?

2003-12-06 Thread Chris Travers
Hi all;

This seems at least somewhat on-topic here, so at the risk of seeming
shamelessly self-promoting, I figured I would ask.

I am the main developer of an open source CRM suite
(http://hermesweb.sourceforge.net).  We are in the beta-testing of our
0.3.0 release which will include experimental support for shared hosting
environments.  I would be interested in helping any interested parties
beta test the software.  Any feedback would also be greatly appreciated.

One of the main focuses of this upcoming version is that it will have at
least experimental support for shared hosting environments under
PostgreSQL (only).  It accomplishes this by building on its portability
features.  Here is the basic design and structure.  Any feedback on this
part would be appreciated as well.

In order to maintain a consistent administrative interface across
database managers, my program maintains its own catalog of permissions,
group memberships, etc.  One of the main differences is that tables are
clustered together into modules and permissions are assigned to these
modules.  This helps ensure that the permissions remain somewhat
internally consistent and that it is easy to administer.  In a
dedicated hosting environment, these permissions, when "activated", are
then translated into database permissions, and users log in with their
own database accounts.

The shared hosting layer builds on the applications'
user/group/permissions catalogs and adds:

1)  A function sh_authenticate(username, password) which creates a
temporary table with the authentication credentials.  sh_su and
sh_revert_user are added to further manage identities.

2)  Every non-catalog table has a rule which raises an exception if a
SELECT query is issued when no select permissions exist.

3)  Every table has a trigger which raises an exception if insert,
update, or delete queries are run without appropriate permissions (see
the sketch below).

The advantage of this approach is that, with the exception of the
initial authentication, the application doesn't have to know what
has changed.  This also adds quite a bit of defense in depth.
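
For item 3, the trigger looks roughly like this (a simplified sketch,
not the actual HermesWeb code; the catalog names are hypothetical):

    -- assumes sh_permission(username, tablename, perm) and the temporary
    -- credentials table sh_credentials(username) created in item 1
    CREATE FUNCTION enforce_write_perm() RETURNS trigger AS '
    BEGIN
        PERFORM 1
        FROM   sh_credentials c, sh_permission p
        WHERE  p.username = c.username
          AND  p.tablename = TG_RELNAME
          AND  p.perm = ''write'';
        IF NOT FOUND THEN
            RAISE EXCEPTION ''no write permission on %'', TG_RELNAME;
        END IF;
        IF TG_OP = ''DELETE'' THEN
            RETURN OLD;
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER sh_write_check
        BEFORE INSERT OR UPDATE OR DELETE ON some_module_table
        FOR EACH ROW EXECUTE PROCEDURE enforce_write_perm();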

However I have a few questions:

1) How effectively can triggers be applied to temporary tables?  I would
like to add this to control access to the credential temporary table.

2)  I am assuming that this could create a noticeable performance
overhead.  But since this environment is likely to be most often used by
small-to-midsized businesses, is this likely to be a problem?

Best Wishes,
Chris Travers




Re: [GENERAL] transaction in progress

2003-12-06 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> libpq has PQtransactionStatus, though I wonder why we don't have this
> visible via a read-only GUC variable to make it easier for other languages.

Because it'd be useless --- the only way to interrogate it would be via
SHOW or some other SQL command, which would have the effect of changing
your transaction status, at least with client libraries that implement
autocommit.  Not to mention that the SHOW itself would fail if you're
in a failed transaction.

You have to have an out-of-band mechanism for checking xact status.
Thus PQtransactionStatus.  AFAIK there are already equivalent APIs
in other popular libraries.

regards, tom lane



Re: Any *current* summary of postgres-r 7.2 status? (was Re: [GENERAL]

2003-12-06 Thread Bopolissimus Platypus
On Saturday 06 December 2003 20:50, Bruce Momjian wrote:
> Postgres-R is not ready for production, and development seems to have
> stopped., and I don't know what multi-master solutions we can really
> offer right now.  We do have several master/slave replication solutions.

thanks.  i'll look at the master/slave solutions.  i don't really need
multi-master anyway, it's just that postgres-r got some press and i
looked at it first :).  which master/slave solutions do you recommend?

tiger

-- 
Gerald Timothy Quimpo  gquimpo*hotmail.com tiger*sni*ph
http://bopolissimus.sni.ph
Public Key: "gpg --keyserver pgp.mit.edu --recv-keys 672F4C78"

   Doveryai no proveryai.  (Trust, but verify.)


