Re: [HACKERS] Adding VERSION to startup log output

2004-07-05 Thread Simon Riggs
On Sat, 2004-07-03 at 15:26, Michael Brusser wrote:
> > Would anybody object to adding an extra line at startup that shows the
> > version number of the database system?
> > 
> > Message suggestion:
> > Starting PostgreSQL  database system
> > 

> If you do that, does it make sense to display the full version,
> instead of what's stored in PG_VERSION?
> (e.g. 7.3.4 vs. 7.3)

Hmmm... I think that begs the question: from where?

...sounding less good now.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] LinuxTag wrapup

2004-07-05 Thread Andreas Pflug
Kaare Rasmussen wrote:

> That would be great news indeed. Currently there is a lack of an Open Source
> heavy-duty database design tool.
>
> If it can be compared to ERwin, it will be a big win - if it can do both
> reverse and forward engineering of databases.

It is aimed to replace ERwin or AppModeler/PowerDesigner.

> Do you know if it will support triggers, FKs, functions, schemas, etc.?

FKs for sure, schemas probably too.
Triggers are db specific, and thus virtually non-supportable if the 
model should be database independent. Still, I have some ideas how to 
create models targeted at more than one db system supporting views 
(which often need to be designed individually for performance reasons) 
and triggers. Same about functions.

Regards,
Andreas

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Adding column comment to information_schema.columns

2004-07-05 Thread Andreas Pflug
Justin Clift wrote:
> Christopher Kings-Lynne wrote:
>> Anyone who's writing queries that are examining the schema of the
>> database is by definition not a newbie...
>
> By newbie here, I mean someone who's a PG "newbie" but has a
> reasonable understanding of databases (i.e. Oracle, etc.) and would
> generally find the "information_schema" much easier to locate and use
> information in, compared to having to learn the PG internals.

Instead of querying views, a newbie should use some of the GUIs that
deliver all this information for free, in a plainly understandable manner.

Regards,
Andreas

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] nested-xacts cursors (was Re: Performance with new nested-xacts code)

2004-07-05 Thread Zeugswetter Andreas SB SD

> begin;
>   declare cursor c ...;
>   fetch 1 from c; -- returns tuple 1
>   begin;
> fetch 1 from c;   -- returns tuple 2
>   rollback;
>   fetch 1 from c; -- returns tuple 1 again
> 
> This is mighty ugly but I think it's the most usable of the options
> seen so far.

IMHO the most usable behaviour would be to handle the cursor like a hold cursor.

begin;
   declare cursor c ...;
   fetch 1 from c;  -- returns tuple 1
   begin;
 fetch 1 from c;-- returns tuple 2
   rollback;
   fetch 1 from c;  -- returns tuple 3

For me the reason is that most likely you are not going to roll back
because the fetch did not work or returned something you don't like.
More likely some subsequent action did not work out, and the next step
will be to correct (or ignore) the problem. You can do that without an
extra fetch, because you still have the values in host variables.

Resetting to "tuple 1" IMHO opens the door for endless loops.
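The two semantics under discussion can be modelled with a toy cursor (a Python sketch for illustration only; the `Cursor` class and `run` helper are invented here, not PostgreSQL internals):

```python
# Toy model of a cursor FETCHed inside a subtransaction that rolls back.
class Cursor:
    def __init__(self, rows):
        self.rows = rows
        self.pos = 0

    def fetch(self):
        row = self.rows[self.pos]
        self.pos += 1
        return row

def run(restore_on_rollback):
    c = Cursor(["tuple 1", "tuple 2", "tuple 3"])
    c.fetch()                      # outer transaction: fetch tuple 1
    saved = c.pos                  # subtransaction starts here
    c.fetch()                      # inner fetch: tuple 2
    if restore_on_rollback:        # ROLLBACK of the subtransaction
        c.pos = saved              # restore position saved at subxact entry
    return c.fetch()               # the fetch after the rollback
```

Restoring the position saved at subtransaction entry makes the post-rollback fetch repeat the tuple read inside the aborted subtransaction, while the hold-cursor behaviour argued for above simply continues with the next tuple.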

Andreas

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] nested-xacts cursors (was Re: Performance with new nested-xacts code)

2004-07-05 Thread Oliver Jowett
Tom Lane wrote:
BTW, I've been more or less ignoring the nearby debate about whether
cursors ought to roll back at subxact abort or not, because right now
I don't know how to implement *either* behavior.  Unless we have
credible theories about how to implement both, it's a bit useless to
debate which is better.
If/when you have a choice -- the JDBC driver code gets easier if you 
don't roll back at abort. If rollback of cursor state can happen, then 
the driver will need to preserve client-side state associated with each 
cursor (i.e. the driver's idea of its position) at the start of each 
subtransaction so it can restore it on rollback -- or always use FETCH 
ABSOLUTE (which I think needs SCROLL cursors?)

-O
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] LinuxTag wrapup

2004-07-05 Thread Andrew McMillan
On Sun, 2004-07-04 at 13:11 +0200, Andreas Pflug wrote:
> 
> That's right, and initially they will only serve MySQL, but it will be 
> extendable to support any db system. It will be GPL (or licenseable, but 
> since it's a tool and not a platform IMHO GPL is ok).
> If things work out as they seem, I'd contribute the pgsql stuff.

The fact that it is written in Kylix might make this harder.  I was
looking at it last night, after your post, to see if I could package it
for Debian, but that Kylix requirement just kind of killed any ideas I
had in that direction, since I've never been able to get the environment
to even install on a Debian system :-(

Regards,
Andrew McMillan.

-
Andrew @ Catalyst .Net .NZ  Ltd,  PO Box 11-053, Manners St,  Wellington
WEB: http://catalyst.net.nz/          PHYS: Level 2, 150-154 Willis St
DDI: +64(4)803-2201  MOB: +64(272)DEBIAN  OFFICE: +64(4)499-2267
Whereof one cannot speak, thereon one must remain silent. -- Wittgenstein
-




Re: [HACKERS] LinuxTag wrapup

2004-07-05 Thread Andreas Pflug
Andrew McMillan wrote:
> On Sun, 2004-07-04 at 13:11 +0200, Andreas Pflug wrote:
>> That's right, and initially they will only serve MySQL, but it will be
>> extendable to support any db system. It will be GPL (or licenseable, but
>> since it's a tool and not a platform IMHO GPL is ok).
>> If things work out as they seem, I'd contribute the pgsql stuff.
>
> The fact that it is written in Kylix might make this harder.  I was
> looking at it last night, after your post, to see if I could package it
> for Debian, but that Kylix requirement just kind of killed any ideas I
> had in that direction, since I've never been able to get the environment
> to even install on a Debian system :-(

Alea non est iacta - the die is not yet cast.
It's currently unknown how the follow-up will be coded. I'm contacting
Mike Zinner, stay tuned.

Regards,
Andreas
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[HACKERS] FreeOSZoo project announcement

2004-07-05 Thread Jean-Michel POURE
Dear friends,

This is an off-topic message about a new project called FreeOSZoo involving
two pgAdmin members, Jean-Michel and Raphaël.

You can visit our site on:
http://www.freeoszoo.org

We provide packages for QEMU, a fast PC emulator for GNU/Linux, Win32 and Mac 
OS X. The packages and instructions can be found on the downloading page:

http://www.freeoszoo.org/download.php

In the coming months, the project will allow the QEMU PC emulator to connect
to the FreeOSZoo library of free operating systems, and to download and use a
guest free operating system in a matter of minutes.

This way, we hope to convince people who are not aware of the existence of
Free Software to migrate to Free tools, without leaving their usual
environment and operating system.

QEMU presently has a working but limited GUI. If any of you is willing to
work on a multi-platform GUI for QEMU, your help is most welcome.

Kindest regards,
FreeOSZoo Team

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [Re] Re: [HACKERS] PREPARE and transactions

2004-07-05 Thread Jeroen T. Vermeulen
On Mon, Jul 05, 2004 at 10:21:26AM +1200, Oliver Jowett wrote:
 
> It certainly affects the JDBC driver -- the native String representation 
> in Java is UTF-16, so the driver transcodes between that and 
> client_encoding for parameterized queries and query results involving 
> strings.
 
Oops, yeah, I forgot.  So perhaps we should be distinguishing several
layers in a session's state, along the lines of:

SQL session - temp tables, session variables, database contents
Interchange - encoding & representation
Protocol - COPY, bind/execute &c.
Connection - socket stuff


> So at least from that point of view, client_encoding is very much a 
> protocol-level thing. Much as I see PREPARE :)

The Interchange layer is the ugly stepchild here; it's controlled at the
SQL level but should be handled either by the application or in middleware,
together with the Protocol layer.  The model really seems to assume that it
belongs in the application, which in your case of course is not an option.
If they were placed at the driver level (together with Protocol) then I'd
see how they might as well be nontransactional.  Are there even significant
uses of session variables other than controlling the Interchange layer?

Transactions come into play at the Protocol level, and the way things are
implemented, go all the way up to SQL level.  Only the Connection level is
entirely nontransactional, and the SQL layer to my intuition ought to be
exclusively transactional.  The only current exception to that that springs
to mind is the way PREPARE is implemented.


Jeroen


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [Re] Re: [HACKERS] PREPARE and transactions

2004-07-05 Thread Oliver Jowett
Jeroen T. Vermeulen wrote:
> So perhaps we should be distinguishing several
> layers in a session's state, along the lines of:
>
> SQL session - temp tables, session variables, database contents
> Interchange - encoding & representation
> Protocol - COPY, bind/execute &c.
> Connection - socket stuff

That's not a bad model -- now we get to argue about what goes where ;)

> Transactions come into play at the Protocol level, and the way things are
> implemented, go all the way up to SQL level.  Only the Connection level is
> entirely nontransactional, and the SQL layer to my intuition ought to be
> exclusively transactional.  The only current exception to that that springs
> to mind is the way PREPARE is implemented.

Other exceptions I can think of are FETCH and DEALLOCATE. DEALLOCATE is
particularly fun -- don't most of the arguments for making PREPARE
transactional also apply to DEALLOCATE? Is it actually feasible to roll
back a DEALLOCATE?

-O
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [Re] Re: [HACKERS] PREPARE and transactions

2004-07-05 Thread Jeroen T. Vermeulen
On Mon, Jul 05, 2004 at 11:44:08PM +1200, Oliver Jowett wrote:

> > SQL session - temp tables, session variables, database contents
> > Interchange - encoding & representation
> > Protocol - COPY, bind/execute &c.
> > Connection - socket stuff

> >Transactions come into play at the Protocol level, and the way things are
> >implemented, go all the way up to SQL level.  Only the Connection level is
> >entirely nontransactional, and the SQL layer to my intuition ought to be
> >exclusively transactional.  The only current exception to that that springs
> >to mind is the way PREPARE is implemented.
> 
> Other exceptions I can think of are FETCH and DEALLOCATE. DEALLOCATE is 

Well, I'd say DEALLOCATE is implicitly lumped in with PREPARE.  But as for
FETCH, are you referring to cursors that live outside transactions (but get
manipulated inside transactions)?  Are those implemented yet and if so, how
does FETCH work there?  There's just been a discussion here about how
nested transactions should not be allowed to FETCH from cursors defined in
a wider scope for precisely this reason: to ensure neat transactional
behaviour.


> particularly fun -- don't most of the arguments for making PREPARE 
> transactional also apply to DEALLOCATE? Is it actually feasible to roll 
> back a DEALLOCATE?

I can see how it gets a bit more complicated when you DEALLOCATE, then
PREPARE a new statement with the same name in the same transaction.  But
nothing that a transaction-local mapping (plus "being deleted" bit) can't
fix, AFAICS.
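A rough model of that transaction-local mapping, with the "being deleted" bit as a sentinel (illustrative Python; the class and method names are invented here, not anything in the backend):

```python
# Sketch: committed statements plus a transaction-local overlay.
# DEALLOCATE marks an entry "being deleted" in the overlay; commit
# applies the overlay, rollback simply discards it.
DELETED = object()  # sentinel playing the "being deleted" bit

class PreparedStatements:
    def __init__(self):
        self.committed = {}   # name -> statement text
        self.overlay = {}     # changes made in the current transaction

    def prepare(self, name, sql):
        self.overlay[name] = sql

    def deallocate(self, name):
        self.overlay[name] = DELETED

    def lookup(self, name):
        v = self.overlay.get(name, self.committed.get(name))
        return None if v is DELETED else v

    def commit(self):
        for name, v in self.overlay.items():
            if v is DELETED:
                self.committed.pop(name, None)
            else:
                self.committed[name] = v
        self.overlay.clear()

    def rollback(self):
        self.overlay.clear()
```

With this shape, DEALLOCATE-then-PREPARE of the same name inside one transaction just overwrites the overlay entry, and rollback restores the previously committed statement.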


Jeroen


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [GENERAL] creating a complex aggregate function

2004-07-05 Thread Richard Huxton
Najib Abi Fadel wrote:
> Hi
> i have an ordered table of dates let's say:

No you don't (tables aren't ordered), and I think that's what's going to
cause you trouble.

> 1/1/2004
> 8/1/2004
> 15/1/2004
> 29/1/2004
> 5/2/2004
> 12/2/2004
>
> I am searching for a way to have the minimum date and maximum date for
> dates separated by one week without gaps between them in a string,
> which will give the following output:
> 1/1/2004:15/1/2004;29/1/2004:12/2/2004;
>
> I was thinking of doing this with an aggregate function.
>
> So i thought about writing the following C code:

My C is even rustier than yours, but you're assuming here that the dates
you receive will be passed to you in order. I don't think PG guarantees
this (perhaps with the exception of an explicit sort in a subselect).
That's not to say it won't work when you test it, just that the order
isn't guaranteed so you can't rely on it.

Now, for sum()/min() etc. this doesn't matter - you only need to compare
the current value with a "running total" - but in your case you'll need
to match against many different groups.

I think what you want here is a set-returning function, doing something
like:

last_date := null;
FOR myrow IN SELECT id, tgt_date FROM my_dates ORDER BY tgt_date LOOP
  diff := myrow.tgt_date - last_date;
  IF diff = 7 THEN
    last_date := myrow.tgt_date;
  ELSE
    -- Assemble a result row and return it here,
    -- then start a new group from this date.
    last_date := myrow.tgt_date;
  END IF;
END LOOP;

The above is (roughly) plpgsql syntax.

> text * concat(,){} // NOT IMPLEMENTED (HOW TO DO IT ??)

The function you want is textcat(text,text). I believe all of the
operators (|| + - etc.) have equivalent functions. They're not listed in
the user documentation, but "\df text*" will show you them.
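For comparison, the same grouping logic sketched in Python (assuming, as the plpgsql loop does, that the input arrives already sorted; the function name is illustrative):

```python
# Collapse runs of dates exactly 7 days apart into (start, end) ranges.
from datetime import date, timedelta

def weekly_ranges(dates):
    """Given sorted dates, return (min, max) pairs of weekly runs."""
    ranges = []
    start = prev = dates[0]
    for d in dates[1:]:
        if d - prev == timedelta(days=7):
            prev = d              # still inside the same weekly run
        else:
            ranges.append((start, prev))
            start = prev = d      # gap found: begin a new run
    ranges.append((start, prev))  # close the final run
    return ranges
```

With the dates from the post this yields two ranges, (1/1/2004, 15/1/2004) and (29/1/2004, 12/2/2004), matching the requested "1/1/2004:15/1/2004;29/1/2004:12/2/2004;" output once formatted.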

HTH
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [Re] Re: [HACKERS] PREPARE and transactions

2004-07-05 Thread Oliver Jowett
Jeroen T. Vermeulen wrote:
> Well, I'd say DEALLOCATE is implicitly lumped in with PREPARE.  But as for
> FETCH, are you referring to cursors that live outside transactions (but get
> manipulated inside transactions)?  Are those implemented yet and if so, how
> does FETCH work there?

I'm thinking WITH HOLD cursors -- they've been around since 7.4.
7.4/7.5's behaviour leaves the cursor state unchanged by the rollback:

DECLARE foo CURSOR WITH HOLD FOR SELECT * FROM sometable

BEGIN
   FETCH FORWARD 10 FROM foo   -- returns rows 1..10
ROLLBACK

BEGIN
   FETCH FORWARD 10 FROM foo   -- returns rows 11..20
ROLLBACK

> There's just been a discussion here about how
> nested transactions should not be allowed to FETCH from cursors defined in
> a wider scope for precisely this reason: to ensure neat transactional
> behaviour.
This breaks things like JDBC that want to use cursors to batch access to 
a large resultset. Saying that you can't access resultsets created 
before opening a new subtransaction (or equivalently, before a 
SAVEPOINT) -- but only if the driver has decided to use a cursor behind 
the scenes! -- is a pretty draconian requirement and certainly isn't in 
the JDBC spec anywhere. Iterating through a resultset emitting updates 
is a pretty common model, and you may well want a savepoint just before 
starting on the updates.

I don't like rollback of FETCH for much the same reasons as I don't like 
rollback of PREPARE -- lots more work on the client side. See my mail on 
the other thread. Avoiding changing the behaviour of FETCH in the above 
case is also an argument against it.

-O
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[HACKERS] Relay Access Denied

2004-07-05 Thread John Hansen
Seems mail.postgresql.org refuses mail for [EMAIL PROTECTED]
at present?

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Applying patches

2004-07-05 Thread Bruce Momjian
Bruce Momjian wrote:
> Because I am traveling, I will no longer be reviewing and applying
> patches until I return on July 3.  Tom will take over most of this duty.
> I will check things once I get back to be sure everything got in just so
> Tom can relax knowing someone will make sure nothing was missed.
> 
> Also, I have learned I might have no connectivity in Armenia from June
> 26-July 3.  Until then, I am in Germany where connectivity will be
> available but periodic.

I am back.  I actually had good connectivity in Armenia.  They had
wireless in the conference rooms.  However, I was so busy I only had
time to read email addressed specifically to me.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[HACKERS] [Fwd: Re: plperl, cvs head w/spi patch, return rows on update returns nothing]

2004-07-05 Thread Andrew Dunstan
Yes, the API has changed slightly, which I should have made clear.
The number of rows is reported in a member called 'processed'. If actual
data is returned (i.e. from a SELECT), then that data is returned in the
'rows' member (it will be a ref to an array of refs to hashes).

Run this sample script (requires plperlu) and you will see what's happening:
create function showme(text) returns text language plperlu as $$
my $query = shift;
my $rv = spi_exec_query($query);
use Data::Dumper;
return Dumper($rv);
$$;
create table tst(i int, v text);
insert into tst values(2,'two');
select showme('select * from tst');
select showme($$insert into tst values(3,'three')$$);
select showme($$create table tst2(q int, r text)$$);
select * from tst;
select showme($$update tst set i = i + 1$$);
select * from tst;
cheers
andrew
elein wrote:
The README said that $rv->{rows} should return the number of
rows affected for INSERT, UPDATE & DELETE.  It seems to return NULL.
@{$rv->{rows}} also returns NULL.
-- drop table users ( email text, who text );
create table users ( email text, who text );
insert into users values ('[EMAIL PROTECTED]', USER);
create or replace function upd_user( text )
returns integer as
'
my $email = $_[0];
my $qry = "update users set email=''".$email."'' where who = USER ; ";
my $rv = spi_exec_query( $qry );
elog NOTICE, $qry;
elog NOTICE, $rv->{status};
elog NOTICE, $rv->{rows};
return $rv->{rows};
' language 'plperl';
select upd_user('[EMAIL PROTECTED]');


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] Relay Access Denied

2004-07-05 Thread John Hansen
Disregard,

Was sent to postgresql.com

...
 

-Original Message-
From: John Hansen 
Sent: Monday, July 05, 2004 11:21 PM
To: [EMAIL PROTECTED]
Subject: Relay Access Denied

Seems mail.postgresql.org refuses mail for [EMAIL PROTECTED]
at present?

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[HACKERS] strange bug in plperl

2004-07-05 Thread Andrew Dunstan
Can anyone suggest why I might be seeing this effect (each notice comes
out once per row plus once per function call)?

thanks
andrew
andrew=# create function tstset() returns setof tst language plperl as $$
andrew$# elog(NOTICE,"tstset called");
andrew$# return [{i=>1,v=>"one"},{i=>2,v=>"two"}];
andrew$# $$;
CREATE FUNCTION
andrew=# select * from tstset();
NOTICE:  tstset called
NOTICE:  tstset called
NOTICE:  tstset called
 i |  v
---+-----
 1 | one
 2 | two
(2 rows)

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Bug in PL/Perl CVS head w/spi patch

2004-07-05 Thread Andrew Dunstan
I am unable to reproduce this effect.
The script below (slightly adapted from yours) runs just fine repeatedly 
in the same session with expected results.

create or replace function plperl_bug1( )
returns text as
$$
 my $qry = 'select * from pg_user';
 elog NOTICE, "Good Query is: $qry";
 my $rv = spi_exec_query( $qry );
 elog NOTICE, "status: $rv->{status} processed: $rv->{processed}";
 return $qry;
$$ language plperl;
select plperl_bug1();
create or replace function plperl_bug2( )
returns text as
$$
 my $qry = "select * from pg_user";
 elog NOTICE, "Bad Query is: $qry";
 my $rv = spi_exec_query( $qry );
 elog NOTICE, "status: $rv->{status} processed: $rv->{processed}";
 return $qry;
$$ language plperl;
select plperl_bug2();
cheers
andrew
elein wrote:
--
-- An SQL error causes subsequent function creation of an otherwise
-- healthy function to fail WHEN RUN with:
--ERROR:  creation of function failed:
--(in cleanup) Undefined subroutine &PLPerl::mksafefunc called at (eval 4) line 4.
 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] strange bug in plperl

2004-07-05 Thread Darko Prenosil
Because that is exactly the number of times the "tstset" function is
called. Set-returning functions are called repeatedly until
SRF_RETURN_DONE is returned, and that in your case means until the last
row is fetched.

When you program functions in C, there is the SRF_IS_FIRSTCALL() macro,
which returns true if the function is being called for the first time, so
you can write something like this:

if (SRF_IS_FIRSTCALL())
{
    /* code that executes only once */
}
else
{
    /* code that executes once per row */
}


I do not know how this works with plperl, and this could be a bug, because
only the `return [{i=>1,v=>"one"},{i=>2,v=>"two"}];` should be executed more
than once (that is the way it works in PL/pgSQL).
I'm sorry I can't help more, but I don't know much about plperl :-(
Hopefully some plperl guru will know more...

Regards !

On Monday 05 July 2004 15:33, Andrew Dunstan wrote:
> Can anyone suggest why I might be seeing this effect (each notice comes
> out once per row plus once per function call)
>
> thanks
>
> andrew
>
> andrew=# create function tstset() returns setof tst language plperl as $$
> andrew$# elog(NOTICE,"tstset called");
> andrew$# return [{i=>1,v=>"one"},{i=>2,v=>"two"}];
> andrew$# $$;
> CREATE FUNCTION
> andrew=# select * from tstset();
> NOTICE:  tstset called
> NOTICE:  tstset called
> NOTICE:  tstset called
>  i |  v
> ---+-
>  1 | one
>  2 | two
> (2 rows)
>
>
> ---(end of broadcast)---
> TIP 8: explain analyze is your friend

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] strange bug in plperl

2004-07-05 Thread Andrew Dunstan
OK, thanks. I see where the problem is. We'll fix the SRF code.
cheers
andrew
Darko Prenosil wrote:
> Because that is exactly the number of times the "tstset" function is
> called. Set-returning functions are called repeatedly until
> SRF_RETURN_DONE is returned, and that in your case means until the last
> row is fetched.
>
> When you program functions in C, there is the SRF_IS_FIRSTCALL() macro,
> which returns true if the function is being called for the first time,
> so you can write something like this:
>
> if (SRF_IS_FIRSTCALL())
> {
>     /* code that executes only once */
> }
> else
> {
>     /* code that executes once per row */
> }
>
> I do not know how this works with plperl, and this could be a bug,
> because only the `return [{i=>1,v=>"one"},{i=>2,v=>"two"}];` should be
> executed more than once (that is the way it works in PL/pgSQL).
> I'm sorry I can't help more, but I don't know much about plperl :-(
> Hopefully some plperl guru will know more...
>
> On Monday 05 July 2004 15:33, Andrew Dunstan wrote:
>> Can anyone suggest why I might be seeing this effect (each notice comes
>> out once per row plus once per function call)?
>>
>> thanks
>>
>> andrew
>>
>> andrew=# create function tstset() returns setof tst language plperl as $$
>> andrew$# elog(NOTICE,"tstset called");
>> andrew$# return [{i=>1,v=>"one"},{i=>2,v=>"two"}];
>> andrew$# $$;
>> CREATE FUNCTION
>> andrew=# select * from tstset();
>> NOTICE:  tstset called
>> NOTICE:  tstset called
>> NOTICE:  tstset called
>>  i |  v
>> ---+-----
>>  1 | one
>>  2 | two
>> (2 rows)

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [Re] Re: [HACKERS] PREPARE and transactions

2004-07-05 Thread Jeroen T. Vermeulen
On Tue, Jul 06, 2004 at 12:17:50AM +1200, Oliver Jowett wrote:
 
> 7.4/7.5's behaviour leaves the cursor state unchanged by the rollback:
> 
> DECLARE foo CURSOR WITH HOLD FOR SELECT * FROM sometable
> 
> BEGIN
>FETCH FORWARD 10 FROM foo   -- returns rows 1..10
> ROLLBACK
> 
> BEGIN
>FETCH FORWARD 10 FROM foo   -- returns rows 11..20
> ROLLBACK
 
That makes me wonder why people want to maintain transactionality w.r.t.
nested transactions but not with "outer" ones.  Odd!

I can see the technical problem, of course, although I think it should
respect rollbacks if at all possible without sacrificing significant
performance *in the commit case.*  Verify for failure, but optimize for
success.  Even if the cursor cannot go backwards I'd rather see those rows
buffered and "spliced back" into the cursor's stream on rollback.  Perhaps
the reasoning is that you process the batches returned by the cursor
inside a transaction, and have separate error handling for failed batches.

But then the FETCH should still occur before the transaction as far as I'm
concerned.  You fetch a batch (if it fails, you terminate) and *try* to
process it.


> >There's just been a discussion here about how
> >nested transactions should not be allowed to FETCH from cursors defined in
> >a wider scope for precisely this reason: to ensure neat transactional
> >behaviour.
> 
> This breaks things like JDBC that want to use cursors to batch access to 

This is a restriction on nested transactions, which aren't even in a real
release yet.  I thought you said you can't break compatibility without
changing the code?  ;)


> I don't like rollback of FETCH for much the same reasons as I don't like 
> rollback of PREPARE -- lots more work on the client side. See my mail on 
> the other thread. Avoiding changing the behaviour of FETCH in the above 
> case is also an argument against it.

In the case of FETCH, where does that extra work come from?


Jeroen


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] strange bug in plperl

2004-07-05 Thread Tom Lane
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> Can anyone suggest why I might be seeing this effect (each notice comes 
> out once per row plus once per function call)

It looks like you're executing the whole function body once per physical
call, which is certainly not a good plan for a function returning set.
Once you get to the RETURN statement, you probably want to stash away
the array value and then just return elements of it on successive calls,
without reexecuting any user code.
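The pattern Tom describes - run the user code once, stash the result, then hand back one row per physical call - can be sketched like this (illustrative Python, not the plperl code; the call counter stands in for the NOTICE):

```python
# Sketch: execute the function body once, stash the returned rows,
# and return one element per physical call thereafter.
calls = {"body": 0}

def user_body():
    calls["body"] += 1            # stands in for elog(NOTICE, ...)
    return [{"i": 1, "v": "one"}, {"i": 2, "v": "two"}]

def make_srf(body):
    state = {"it": None}          # the stash: an iterator over the rows
    def physical_call():
        if state["it"] is None:   # first call: run the user code once
            state["it"] = iter(body())
        try:
            return next(state["it"])
        except StopIteration:
            return None           # the SRF_RETURN_DONE analogue
    return physical_call

srf = make_srf(user_body)
rows = []
while (row := srf()) is not None:
    rows.append(row)
```

With the stash in place, the body (and its NOTICE) runs exactly once, no matter how many physical calls it takes to drain the result set.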

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] strange bug in plperl

2004-07-05 Thread Andrew Dunstan

Tom Lane wrote:
> Andrew Dunstan <[EMAIL PROTECTED]> writes:
>> Can anyone suggest why I might be seeing this effect (each notice comes
>> out once per row plus once per function call)?
>
> It looks like you're executing the whole function body once per physical
> call, which is certainly not a good plan for a function returning set.
> Once you get to the RETURN statement, you probably want to stash away
> the array value and then just return elements of it on successive calls,
> without reexecuting any user code.

Yep. I had come to that conclusion.

cheers
andrew
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[HACKERS] plperl security

2004-07-05 Thread Andrew Dunstan
There is a known security issue with the perl Safe module in versions up
to and including 2.07 (and 2.08 had a life of one day before 2.09 was
released); see:

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2002-1323

Currently we have this in plperl.c:

 "require Safe;"

I am thinking of submitting a patch to replace this with "use Safe
2.09;" to enforce use of a version without the known vulnerability.

Any objections?
cheers
andrew

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] [Plperlng-devel] plperl security

2004-07-05 Thread Joshua D. Drake

> Currently we have this in plperl.c:
>  "require Safe;"
> I am thinking of submitting a patch to replace this with "use Safe
> 2.09;" to enforce use of a version without the known vulnerability.
>
> Any objections?

I have none, except: will 2.09 work with 5.00503?
___
Plperlng-devel mailing list
[EMAIL PROTECTED]
http://pgfoundry.org/mailman/listinfo/plperlng-devel

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] [Plperlng-devel] plperl security

2004-07-05 Thread Andrew Dunstan

Joshua D. Drake wrote:
>> Currently we have this in plperl.c:
>>  "require Safe;"
>> I am thinking of submitting a patch to replace this with "use Safe
>> 2.09;" to enforce use of a version without the known vulnerability.
>>
>> Any objections?
>
> I have none, except: will 2.09 work with 5.00503?

I will see about getting a test done, unless someone on one of those 
systems with an older perl can try upgrading the Safe module from CPAN 
for us.

cheers
andrew
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] plperl security

2004-07-05 Thread Tom Lane
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> Currently we have this in plperl.c:
>   "require Safe;"
> I am thinking of submitting a patch to replace this with "use Safe 
> 2.09;" to enforce use of a version without the known vulnerability.

This would break both plperl and plperlu on older Perls.  Please see
if you can avoid breaking plperlu.

For that matter, does plperl.c really cope properly with a failure in
this code at all?  I sure don't see anything that looks like error
handling in plperl_init_interp().

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[HACKERS] Recovery Features

2004-07-05 Thread Simon Riggs
I'm looking at a couple of features that are possible to add.

From what people have said before on list, one of the useful features of
PITR is to recover from rogue transactions.

I have a suggestion that I'd like some considered thought on. I've given
it a couple of weeks thought. My first inclination is that it is cool,
my second that it's pretty stupid, my third, well...?

...While recovering, it is very straightforward to simply ignore every
record associated with one (or more) transactions. That gives us the
ability to recover "all apart from txnid X".

This is great because: if you can locate the transactionId you desire (a
problem in itself, but let's not dwell on those complexities for now),
then you blow it away, real smooth - as if it had never existed. Neat,
simple, fast. Dropped tables and removed tablespaces are a bit more of a
problem, but deleted rows, accidental full table updates etc can be
undone in a snap (NOT DDL, in short).
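
The record-filtering idea above can be sketched in a few lines of C (a toy model, not the backend's actual recovery code; XlogRecord and every name here are invented for illustration):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy WAL record: just a transaction id and an action tag. */
typedef struct {
    unsigned xid;
    const char *action;
} XlogRecord;

/* True if this record belongs to a transaction the operator asked to
 * skip, i.e. recover "all apart from txnid X". */
static bool skip_record(const XlogRecord *rec,
                        const unsigned *skip_xids, size_t nskip)
{
    for (size_t i = 0; i < nskip; i++)
        if (rec->xid == skip_xids[i])
            return true;
    return false;
}

/* Replay a stream of records, ignoring every record that belongs to a
 * skipped transaction.  Returns the number of records applied. */
static size_t replay(const XlogRecord *log, size_t nrec,
                     const unsigned *skip_xids, size_t nskip)
{
    size_t applied = 0;
    for (size_t i = 0; i < nrec; i++) {
        if (skip_record(&log[i], skip_xids, nskip))
            continue;           /* buried alive, never applied */
        applied++;              /* real code would redo the change here */
    }
    return applied;
}
```

The filtering itself really is this simple; the hard part, as discussed below, is the consistency of everything that came after the skipped transaction.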

This is awful because: transactions are isolated from each other, but
they also provide changes of state that rely on previous committed
transactions. If you change the past, you could well invalidate the
future. If you blow away a transaction and a later one depends upon it,
then you will have broken the recovery chain and will not be able to
recover to present time. This is only useful IF
i) you know for certain no later transaction has taken place
ii) you know there are later transactions, but you either know they are
invalid because of the rogue, or they're just not as important as the
effects of the rogue.
...but could leave you up to your neck if misused.

I'm thinking:
undo_transactionId = X

Don't flame me, just hold your fire, think about it and get back to
me. It's a slightly off the wall idea, so if you react quickly, you'll
just say no - and if you don't, go think some more. Anyhow, if not, then
you know who to call when it hits the fan.

This also leaves open the door for some more advanced functionality that
blows away the effects of a transaction and all of its time-ordered
descendants, just to make it 100% safe.

Best regards, Simon Riggs


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] [BUGS] [CHECKER] 4 memory leaks in Postgresql 7.4.2

2004-07-05 Thread Alvaro Herrera
On Sun, May 02, 2004 at 07:50:46PM -0400, Tom Lane wrote:

> It's entirely likely that ecpg's derivative of the backend's datetime
> modules contains lots and lots of memory leaks, since AFAIK the palloc
> infrastructure is not there in the ecpg environment :-(.

I wonder why this is.  Is there some limitation to using palloc outside
the backend itself?  I ask because I have considered using it outside
Postgres several times (a consideration that has never materialized
yet), and I wonder if it needs something special to work.

-- 
Alvaro Herrera ()
Dios hizo a Adán, pero fue Eva quien lo hizo hombre.


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[HACKERS] Point in Time Recovery

2004-07-05 Thread Simon Riggs
Taking advantage of the freeze bubble allowed us... there are some last
minute features to add. 

Summarising earlier thoughts, with some detailed digging and design from
myself in the last few days - we're now in a position to add Point-in-Time
Recovery, on top of what's been achieved.

The target for the last record to recover to can be specified in 2 ways:
- by transactionId - not that useful, unless you have a means of
identifying what has happened from the log, then using that info to
specify how to recover - coming later - not in next few days :( 
- by time - but the time stamp on each xlog record only specifies to the
second, which could easily be 10 or more commits (we hope)

Should we use a different datatype than time_t for the commit timestamp,
one that offers more fine grained differentiation between checkpoints?
If we did, would that be portable?
Suggestions welcome, because I know very little of the details of
various *nix systems and win* on that topic.

Only COMMIT and ABORT records have timestamps, allowing us to circumvent
any discussion about partial transaction recovery and nested
transactions.

When we do recover, stopping at the timestamp is just half the battle.
We need to leave the xlog in which we stop in a state from which we can
enter production smoothly and cleanly. To do this, we could:
- when we stop, keep reading records until EOF, just don't apply them.
When we write a checkpoint at end of recovery, the unapplied
transactions are buried alive, never to return.
- stop where we stop, then force zeros to EOF, so that no possible
record remains of previous transactions.
I'm tempted by the first plan, because it is more straightforward and
stands much less chance of me introducing 50 weird bugs just before
close.

Also, I think it is straightforward to introduce control file duplexing,
with a second copy stored and maintained in the pg_xlog directory. This
would provide additional protection for pg_control, which takes on more
importance now that archive recovery is working. pg_xlog is a natural
home, since on busy systems it's on its own disk away from everything
else, ensuring that at least one copy survives. I can't see a downside
to that, but others might... We can introduce user specifiable
duplexing, in later releases.

For later, I envisage an off-line utility that can be used to inspect
xlog records. This could provide a number of features:
- validate archived xlogs, to check they are sound.
- produce summary reports, to allow identification of transactionIds and
the effects of particular transactions
- performance analysis to allow decisions to be made about whether group
commit features could be utilised to good effect
(Not now...)

Best regards, Simon Riggs



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


subtransactions and FETCH behaviour (was Re: [HACKERS] PREPARE and transactions)

2004-07-05 Thread Oliver Jowett
Jeroen T. Vermeulen wrote:
> That makes me wonder why people want to maintain transactionality w.r.t.
> nested transactions but not with "outer" ones.  Odd!

Yep.

> But then the FETCH should still occur before the transaction as far as I'm
> concerned.  You fetch a batch (if it fails, you terminate) and *try* to
> process it.

This is a non-starter for JDBC: it has no control over when an 
application decides to access a ResultSet in a way that results in a 
FETCH of new data.

Buffering *all* the ResultSet data client-side isn't an option -- 
cursors are used specifically to handle resultsets that don't fit into 
heap on the client side. And implementing a disk cache or similar a) 
doesn't work if you don't have disk access, b) is bandwidth-intensive 
and c) is really silly -- that's work that belongs on the server side, 
or why bother with implementing cursors at all?!

Invalidating all open resultsets on creation of a savepoint would make 
savepoints useless in many cases, and isn't hinted at in the JDBC spec 
for savepoints so is likely to break many otherwise portable apps.

Having ResultSets spontaneously change position on transaction 
boundaries would cause even more portability problems -- and it goes 
completely against how that API is designed (it's meant to *insulate* 
the application from details like cursors that may be used behind the 
scenes).

I don't like rollback of FETCH for much the same reasons as I don't like 
rollback of PREPARE -- lots more work on the client side. See my mail on 
the other thread. Avoiding changing the behaviour of FETCH in the above 
case is also an argument against it.

In the case of FETCH, where does that extra work come from?
See my other email. The driver will either have to use SCROLL cursors 
and FETCH ABSOLUTE everywhere (which involves an extra Materialize step 
in the plan for nontrivial queries) or track each open cursor's position 
at the start of every active subtransaction so it can restore that 
information on rollback. The driver needs to track where the server 
thinks the cursor is positioned so it can do an appropriate FETCH or 
Execute when the application requests data in a resultset that's not 
currently available on the client side.
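
The per-subtransaction bookkeeping described above might look roughly like this (a hypothetical driver-side sketch, not the actual JDBC driver code; all names are illustrative):

```c
/* Toy model of client-side cursor position tracking across
 * subtransactions. */
#define MAX_DEPTH 16

typedef struct {
    long position;              /* row the server thinks we're on */
    long saved[MAX_DEPTH];      /* position snapshot at each savepoint */
    int  depth;                 /* current subtransaction nesting */
} CursorState;

/* Snapshot the position just before issuing SAVEPOINT. */
static void cursor_savepoint(CursorState *c)
{
    c->saved[c->depth++] = c->position;
}

/* Account for a FETCH FORWARD nrows (or an equivalent Execute). */
static void cursor_fetch(CursorState *c, long nrows)
{
    c->position += nrows;
}

/* On ROLLBACK TO SAVEPOINT the server moves the cursor back; the
 * driver must forget rows fetched inside the subtransaction and
 * returns the position it would restore with FETCH ABSOLUTE. */
static long cursor_rollback(CursorState *c)
{
    c->position = c->saved[--c->depth];
    return c->position;
}
```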

Reporting the new cursor positions at the protocol level when rollback 
happens might help but it's still fairly ugly and would need a protocol 
version change.

Also consider that the V3 protocol Execute message is essentially a 
FETCH (you can only do FETCH FORWARD count, but it's otherwise 
equivalent). This is another case of overlap between the SQL level and 
the protocol level and has much of the same problems as we have with 
PREPARE vs. Bind/Execute. The protocol-level portal state shouldn't 
suddenly change on a transaction boundary. I can understand closing 
nonholdable portals when the creating transaction closes (the data 
source just disappeared) but having the portal change *position* would 
be very weird.

-O
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [HACKERS] plperl security

2004-07-05 Thread Andrew Dunstan

Tom Lane wrote:
> Andrew Dunstan <[EMAIL PROTECTED]> writes:
> > Currently we have this in plperl.c:
> >   "require Safe;"
> > I am thinking of submitting a patch to replace this with "use Safe
> > 2.09;" to enforce use of a version without the known vulnerability.
>
> This would break both plperl and plperlu on older Perls.  Please see
> if you can avoid breaking plperlu.
>
> For that matter, does plperl.c really cope properly with a failure in
> this code at all?  I sure don't see anything that looks like error
> handling in plperl_init_interp().

I will look at it. It will probably require some non-trivial rework.
I do agree that we should not break more old stuff than is necessary.
cheers
andrew
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] [BUGS] [CHECKER] 4 memory leaks in Postgresql 7.4.2

2004-07-05 Thread Bruce Momjian
Alvaro Herrera wrote:
> On Sun, May 02, 2004 at 07:50:46PM -0400, Tom Lane wrote:
> 
> > It's entirely likely that ecpg's derivative of the backend's datetime
> > modules contains lots and lots of memory leaks, since AFAIK the palloc
> > infrastructure is not there in the ecpg environment :-(.
> 
> I wonder why this is.  Is there some limitation to using palloc outside
> the backend itself?  I ask because I have considered using it outside
> Postgres several times (a consideration that has never materialized
> yet), and I wonder if it needs something special to work.

The semantics of palloc is that most stuff is freed on statement
completion.  In most cases, interfaces need different semantics so we
haven't seen much need for making something like palloc available to
clients.  I can see ecpg using it in a few cases, and libpq too, but
probably not enough to make it worthwhile.
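
For readers unfamiliar with the palloc discipline Bruce describes — allocate freely, free everything at once at statement completion — here is a minimal stand-alone arena sketch (not the backend's implementation; the names are invented):

```c
#include <stdlib.h>
#include <stddef.h>

/* A minimal arena ("memory context") sketch: allocations are never
 * freed individually; everything goes away in one sweep when the
 * arena is reset, e.g. at statement completion. */
typedef struct Chunk {
    struct Chunk *next;
} Chunk;

typedef struct {
    Chunk *chunks;              /* linked list of all allocations */
    size_t nalloc;              /* live allocation count */
} MemoryArena;

static void *arena_alloc(MemoryArena *a, size_t size)
{
    Chunk *c = malloc(sizeof(Chunk) + size);
    if (!c)
        return NULL;
    c->next = a->chunks;        /* thread onto the arena's list */
    a->chunks = c;
    a->nalloc++;
    return c + 1;               /* usable memory follows the header */
}

/* "Statement completion": free every allocation at once. */
static void arena_reset(MemoryArena *a)
{
    while (a->chunks) {
        Chunk *next = a->chunks->next;
        free(a->chunks);
        a->chunks = next;
    }
    a->nalloc = 0;
}
```

The convenience is exactly what Tom notes below: it only pays off when the whole program agrees on where those reset points are.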

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: subtransactions and FETCH behaviour (was Re: [HACKERS] PREPARE and transactions)

2004-07-05 Thread Jeroen T. Vermeulen
On Tue, Jul 06, 2004 at 08:45:52AM +1200, Oliver Jowett wrote:
 
> This is a non-starter for JDBC: it has no control over when an 
> application decides to access a ResultSet in a way that results in a 
> FETCH of new data.
 
From what you're telling me, I'm not sure I like JDBC!  Why did they come
up with such a low-level design?  Sounds like little more than a thin ODBC
wrapper plus JVM marshalling...


> Buffering *all* the ResultSet data client-side isn't an option -- 
> cursors are used specifically to handle resultsets that don't fit into 
> heap on the client side. And implementing a disk cache or similar a) 
> doesn't work if you don't have disk access, b) is bandwidth-intensive 
> and c) is really silly -- that's work that belongs on the server side, 
> or why bother with implementing cursors at all?!

But does this type of ResultSet use scroll cursors?  Because in that case, it
should be easy to reset the cursor's position at rollback!  Not fast
perhaps, but easy.  Screw fast when you're rolling back, because you'll
have other things to worry about.

Okay, I know, you might not _want_ to reset on rollback.  But it does give
the middleware a lot more freedom to play with connections etc. like we
discussed before.  So personally, if it meant that I had to support
rollbacks, I would think it was a small price to pay for full ACID
guarantees.


> Having ResultSets spontaneously change position on transaction 
> boundaries would cause even more portability problems -- and it goes 
> completely against how that API is designed (it's meant to *insulate* 
> the application from details like cursors that may be used behind the 
> scenes).
 
Are you saying this is not something you'd be able to hide in the driver?


> Reporting the new cursor positions at the protocol level when rollback 
> happens might help but it's still fairly ugly and would need a protocol 
> version change.
 
It would be nice IMHO to have a "tell" function for cursors, giving the
enumerated current position of the cursor.  I can fake that by counting
rows, in fact I've already done that, but it's not pretty and it easily
gets confused with the lower isolation levels (which fortunately Postgres
doesn't have).


> Also consider that the V3 protocol Execute message is essentially a 
> FETCH (you can only do FETCH FORWARD count, but it's otherwise 
> equivalent). This is another case of overlap between the SQL level and 
> the protocol level and has much of the same problems as we have with 
> PREPARE vs. Bind/Execute. The protocol-level portal state shouldn't 
> suddenly change on a transaction boundary. I can understand closing 
> nonholdable portals when the creating transaction closes (the data 
> source just disappeared) but having the portal change *position* would 
> be very weird.

You're beginning to convince me that maybe ACID for transactions in
postgres is unsalvageable and we should be thinking about some alternative,
such as ways of finding out whether ACID still applies to the current
transaction, and/or whether the current statement will change that...


Jeroen


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Recovery Features

2004-07-05 Thread Tom Lane
Simon Riggs <[EMAIL PROTECTED]> writes:
> ...While recovering, it is very straightforward to simply ignore every
> record associated with one (or more) transactions. That gives us the
> ability to recover "all apart from txnid X".

Don't even *think* of going there.

What will happen when transaction Y comes along and wants to modify or
delete a row that was inserted by X?  There's no chance of staying
consistent.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] [BUGS] [CHECKER] 4 memory leaks in Postgresql 7.4.2

2004-07-05 Thread Alvaro Herrera
On Mon, Jul 05, 2004 at 05:13:23PM -0400, Bruce Momjian wrote:
> Alvaro Herrera wrote:
> > On Sun, May 02, 2004 at 07:50:46PM -0400, Tom Lane wrote:
> > 
> > > It's entirely likely that ecpg's derivative of the backend's datetime
> > > modules contains lots and lots of memory leaks, since AFAIK the palloc
> > > infrastructure is not there in the ecpg environment :-(.
> > 
> > I wonder why this is.  Is there some limitation to using palloc outside
> > the backend itself?  I ask because I have considered using it outside
> > Postgres several times (a consideration that has never materialized
> > yet), and I wonder if it needs something special to work.
> 
> The semantics of palloc is that most stuff is freed on statement
> completion.  In most cases, interfaces need different semantics so we
> haven't seen much need for making something like palloc available to
> clients.  I can see ecpg using it in a few cases, and libpq too, but
> probably not enough to make it worthwhile.

Yes, I understand that part -- what I was talking about was not using
the code in the Pg interfaces, but in another software project which
also consists of a daemon that has several well defined "durations" of
objects.  In that (as of yet unwritten) code, palloc would fit very
well.  But does palloc depend on some other part of the Postgres code?

-- 
Alvaro Herrera ()
"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte"
(Ijon Tichy en Viajes, Stanislaw Lem)


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] [BUGS] [CHECKER] 4 memory leaks in Postgresql 7.4.2

2004-07-05 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> I wonder why this is.  Is there some limitation to using palloc outside
> the backend itself?  I ask because I have considered using it outside
> Postgres several times (a consideration that has never materialized
> yet), and I wonder if it needs something special to work.

It's useless without the rest of the backend infrastructure, including
elog error recovery and a cooperative main loop.  You could certainly
reuse the code in a different application if you were willing to adopt
Postgres' memory management and error handling approaches
lock-stock-and-barrel, but I don't think you could be selective about
it.  For a library that has to work inside a not-so-cooperative
application, the idea is a nonstarter.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Point in Time Recovery

2004-07-05 Thread Tom Lane
Simon Riggs <[EMAIL PROTECTED]> writes:
> Should we use a different datatype than time_t for the commit timestamp,
> one that offers more fine grained differentiation between checkpoints?

Pretty much everybody supports gettimeofday() (time_t and separate
integer microseconds); you might as well use that.  Note that the actual
resolution is not necessarily microseconds, and it'd still not be
certain that successive commits have distinct timestamps --- so maybe
this refinement would be pointless.  You'll still have to design a user
interface that allows selection without the assumption of distinct
timestamps.

> - when we stop, keep reading records until EOF, just don't apply them.
> When we write a checkpoint at end of recovery, the unapplied
> transactions are buried alive, never to return.
> - stop where we stop, then force zeros to EOF, so that no possible
> record remains of previous transactions.

Go with plan B; it's best not to destroy data (what if you chose the
wrong restart point the first time?).

Actually this now reminds me of a discussion I had with Patrick
Macdonald some time ago.  The DB2 practice in this connection is that
you *never* overwrite existing logfile data when recovering.  Instead
you start a brand new xlog segment file, which is given a new "branch
number" so it can be distinguished from the future-time xlog segments
that you chose not to apply.  I don't recall what the DB2 terminology
was exactly --- not "branch number" I don't think --- but anyway the
idea is that when you restart the database after an incomplete recovery,
you are now in a sort of parallel universe that has its own history
after the branch point (PITR stop point).  You need to be able to
distinguish archived log segments of this parallel universe from those
of previous and subsequent incarnations.  I'm not sure whether Vadim
intended our StartUpID to serve this purpose, but it could perhaps be
used that way, if we reflected it in the WAL file names.
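
Reflecting an incarnation id in the WAL file names could look roughly like this (a purely hypothetical layout, sketched only to make the idea concrete — not what the backend actually uses):

```c
#include <stdio.h>

/* Fold a "branch number" (a StartUpID-like value) into the WAL
 * segment file name, so segments from a pre-recovery incarnation
 * cannot be confused with post-recovery ones. */
static void wal_segment_name(char *buf, size_t buflen,
                             unsigned branch, unsigned log_id,
                             unsigned seg)
{
    /* branch id first: same log/seg numbers in a parallel universe
     * still get a distinct file name */
    snprintf(buf, buflen, "%08X%08X%08X", branch, log_id, seg);
}
```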

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: subtransactions and FETCH behaviour (was Re: [HACKERS] PREPARE and transactions)

2004-07-05 Thread Alvaro Herrera
On Tue, Jul 06, 2004 at 08:45:52AM +1200, Oliver Jowett wrote:

> Buffering *all* the ResultSet data client-side isn't an option -- 
> cursors are used specifically to handle resultsets that don't fit into 
> heap on the client side. And implementing a disk cache or similar a) 
> doesn't work if you don't have disk access, b) is bandwidth-intensive 
> and c) is really silly -- that's work that belongs on the server side, 
> or why bother with implementing cursors at all?!

Well, the proposal of implementing it like holdable cursors means using
a Materialize node which, if I understand correctly, means taking the
whole result set and storing it on memory (or disk).  So the same
question arises: why bother implementing that at all?  Of course the
answer is that the server definitely _has_ to provide the functionality.

Now, the cursor problem is beyond me ATM -- it needs deep understanding
of the executor code that I do not have and won't be able to develop in
two weeks ... if there's no reasonable solution in sight maybe the best
we can do is revert the whole nested xacts patch (or at least disable
the functionality) so we have more time to solve this particular problem.

Sadly, AFAICS this is the only major problem with the functionality, so
it would be a pity to throw away all work only for this.

-- 
Alvaro Herrera ()
"I call it GNU/Linux. Except the GNU/ is silent." (Ben Reiter)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] plperl security

2004-07-05 Thread Tom Lane
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> The thing is that unlike TCL we have one interpreter for both trusted 
> and untrusted cases.

> My thinking is to factor out all the code that only applies to trusted 
> cases from the interpreter init code, and only call it if we try to 
> compile a trusted function and it hasn't been run yet. Does that seem 
> reasonable?

That would work.  You'd need two state flags instead of just one, but
that doesn't seem bad.

Recovering when you get an error is probably the trickiest part of this.

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] strange bug in plperl

2004-07-05 Thread elein
I just reproduced this problem when returning
a composite and NOT as SETOF composite.

An assumption is being made that if the return
value is a composite, that it must be part of a set.
This is incorrect.

Test case available on request--if you don't have
one already.

Spoke with Andrew about this on #postgresql.

--elein

On Mon, Jul 05, 2004 at 12:28:32PM -0400, Andrew Dunstan wrote:
> 
> 
> Tom Lane wrote:
> 
> >Andrew Dunstan <[EMAIL PROTECTED]> writes:
> > 
> >
> >>Can anyone suggest why I might be seeing this effect (each notice comes 
> >>out once per row plus once per function call)
> >>   
> >>
> >
> >It looks like you're executing the whole function body once per physical
> >call, which is certainly not a good plan for a function returning set.
> >Once you get to the RETURN statement, you probably want to stash away
> >the array value and then just return elements of it on successive calls,
> >without reexecuting any user code.
> >
> >
> > 
> >
> 
> Yep. I had come to that conclusion.
> 
> cheers
> 
> andrew
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
>   http://archives.postgresql.org

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] plperl security

2004-07-05 Thread Andrew Dunstan

Andrew Dunstan wrote:
> Tom Lane wrote:
> > Andrew Dunstan <[EMAIL PROTECTED]> writes:
> > > Currently we have this in plperl.c:
> > >   "require Safe;"
> > > I am thinking of submitting a patch to replace this with "use Safe
> > > 2.09;" to enforce use of a version without the known vulnerability.
> >
> > This would break both plperl and plperlu on older Perls.  Please see
> > if you can avoid breaking plperlu.
> >
> > For that matter, does plperl.c really cope properly with a failure in
> > this code at all?  I sure don't see anything that looks like error
> > handling in plperl_init_interp().
>
> I will look at it. It will probably require some non-trivial rework.
> I do agree that we should not break more old stuff than is necessary.

The thing is that unlike TCL we have one interpreter for both trusted 
and untrusted cases.

My thinking is to factor out all the code that only applies to trusted 
cases from the interpreter init code, and only call it if we try to 
compile a trusted function and it hasn't been run yet. Does that seem 
reasonable?

The code in question would be:

always in interp init:
   SPI::bootstrap(); use vars qw(%_SHARED);
   sub ::mkunsafefunc { return eval(qq[ sub { $_[0] $_[1] } ]); }

only if needed for trusted cases:
   use Safe 2.09;
   use vars qw($PLContainer); $PLContainer = new Safe('PLPerl');
   $PLContainer->permit_only(':default');
   $PLContainer->permit(':base_math');
   $PLContainer->share(qw[&elog &spi_exec_query &DEBUG &LOG &INFO
                          &NOTICE &WARNING &ERROR %SHARED ]);
   sub ::mksafefunc { return $PLContainer->reval(qq[sub { $_[0] $_[1]}]); }

Still looking at robustness issues.

cheers
andrew
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] plperl security

2004-07-05 Thread Andrew Dunstan

Tom Lane wrote:
> Andrew Dunstan <[EMAIL PROTECTED]> writes:
> > The thing is that unlike TCL we have one interpreter for both trusted
> > and untrusted cases.
>
> > My thinking is to factor out all the code that only applies to trusted
> > cases from the interpreter init code, and only call it if we try to
> > compile a trusted function and it hasn't been run yet. Does that seem
> > reasonable?
>
> That would work.  You'd need two state flags instead of just one, but
> that doesn't seem bad.

2?
'splain please :-)
cheers
andrew

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [HACKERS] plperl security

2004-07-05 Thread Tom Lane
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> That would work.  You'd need two state flags instead of just one, but
>> that doesn't seem bad.

> 'splain please :-)

Maybe you weren't thinking of the same thing, but what I was imagining
was one state flag to remember that you'd created the interpreter (and
loaded the unsafe-func support into it), then a second one to remember
whether you've loaded the safe-func support.  There are various ways to
represent this of course, but the point is there need to be three
persistent states.
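
The two-flag, three-state scheme can be sketched as follows (illustrative names, not the actual plperl.c variables):

```c
#include <stdbool.h>

/* Two flags encode three persistent states:
 *   1. nothing initialized
 *   2. interpreter created, unsafe-func support loaded
 *   3. state 2 plus Safe-based trusted-func support loaded  */
static bool interp_created = false;
static bool safe_loaded    = false;

/* Lazy init: returns how many init steps actually ran this call. */
static int init_steps_run(bool need_trusted)
{
    int steps = 0;
    if (!interp_created) {               /* state 1 -> 2 */
        interp_created = true;
        steps++;                         /* create interp, mkunsafefunc */
    }
    if (need_trusted && !safe_loaded) {  /* state 2 -> 3 */
        safe_loaded = true;
        steps++;                         /* "use Safe 2.09", share globals */
    }
    return steps;
}
```

Compiling only untrusted functions leaves the interpreter in state 2 forever; the first trusted function pays the one-time cost of loading Safe.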

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Recovery Features

2004-07-05 Thread Simon Riggs
On Mon, 2004-07-05 at 22:30, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > ...While recovering, it is very straightforward to simply ignore every
> > record associated with one (or more) transactions. That gives us the
> > ability to recover "all apart from txnid X".
> 
> Don't even *think* of going there.

Hmmm... thinking is important, as are differing viewpoints. I value
yours and those of everyone else on this list, hence the post.

> What will happen when transaction Y comes along and wants to modify or
> delete a row that was inserted by X?  There's no chance of staying
> consistent.

I did point out this downside...a few sentences down.
**This is awful because: transactions are isolated from each other, but
they also provide changes of state that rely on previous committed
transactions. If you change the past, you could well invalidate the
future. If you blow away a transaction and a later one depends upon it,
then you will have broken the recovery chain and will not be able to
recover to present time.**

Theoretically, this is a disaster area.

Practically, Oracle10g provides similar-ish features...

...Nobody is shouting YES, so it's a dodo...

Best regards, Simon Riggs


---(end of broadcast)---
TIP 8: explain analyze is your friend


[HACKERS] Security...

2004-07-05 Thread Steve Holdoway
Hi folks,
I'm trying to seriously restrict what a particular user can see within a 
database. Using various schemas, rules, groups and grants, I've managed 
to restrict them to only being able to _do_ what I want them to, but 
they can still _see_ the rest of the schemas, system tables, etc. I've 
tried revoking everything on public, pg_catalogs, etc, but you can still 
describe tables.

Anyone know how to stop this, or if it's even possible??
Cheers,
Steve.
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] plperl security

2004-07-05 Thread Andrew Dunstan

Tom Lane wrote:
> Andrew Dunstan <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> > > That would work.  You'd need two state flags instead of just one, but
> > > that doesn't seem bad.
> >
> > 'splain please :-)
>
> Maybe you weren't thinking of the same thing, but what I was imagining
> was one state flag to remember that you'd created the interpreter (and
> loaded the unsafe-func support into it), then a second one to remember
> whether you've loaded the safe-func support.  There are various ways to
> represent this of course, but the point is there need to be three
> persistent states.

Ahh, ok. We already have a state var to remember the first part 
(plperl_firstcall). Just need one new one I think.

cheers
andrew
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: subtransactions and FETCH behaviour (was Re: [HACKERS] PREPARE

2004-07-05 Thread Oliver Jowett
Alvaro Herrera wrote:
> Well, the proposal of implementing it like holdable cursors means using
> a Materialize node which, if I understand correctly, means taking the
> whole result set and storing it on memory (or disk).  So the same
> question arises: why bother implementing that at all?  Of course the
> answer is that the server definitely _has_ to provide the functionality.

It seems more reasonable to implement this on the server side -- it 
already has the data to hand (not on the other side of a network 
connection) and is much more likely to have memory/disk available.

> Now, the cursor problem is beyond me ATM -- it needs deep understanding
> of the executor code that I do not have and won't be able to develop in
> two weeks ... if there's no reasonable solution in sight maybe the best
> we can do is revert the whole nested xacts patch (or at least disable
> the functionality) so we have more time to solve this particular problem.
>
> Sadly, AFAICS this is the only major problem with the functionality, so
> it would be a pity to throw away all work only for this.

Is there an approach that means we can do *something* sane with cursors 
and keep nested transactions? Something like "close all non-hold cursors 
on transaction start"? I think the JDBC driver can pass this restriction 
on to the application if we document it -- "creating a savepoint or 
starting a new subtransaction invalidates all open resultsets" -- as a 
necessary limitation of the current backend's implementation. I do think 
that this will make savepoints useless in many cases, but it's better 
than not having subtransactions at all.

Then maybe better cursor support can be done for 7.6?
-O
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Recovery Features

2004-07-05 Thread Mike Mascari
Simon Riggs wrote:
On Mon, 2004-07-05 at 22:30, Tom Lane wrote:
...Nobody is shouting YES, so it's a dodo...
I can imagine a scenario where the junior DBA accidentally deletes 
all rows from some obscure table that wouldn't have logical 
implications for later transactions. But I suspect most people would 
perform recovery in a separate instance, and just hand-dump/restore 
the table in question. The point at which the above process becomes 
too complex (or less than obvious) for hand-recovery is precisely 
when unforeseen consequences of nixing a single transaction become 
too great.

IMHO,
Mike Mascari

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] Security...

2004-07-05 Thread Andrew Dunstan

Steve Holdoway wrote:
Hi folks,
I'm trying to seriously restrict what a particular user can see within 
a database. Using various schemas, rules, groups and grants, I've 
managed to restrict them to only being able to _do_ what I want them 
to, but they can still _see_ the rest of the schemas, system tables, 
etc. I've tried revoking everything on public, pg_catalogs, etc, but 
you can still describe tables.

Anyone know how to stop this, or if it's even possible??

Here's an extract from a half-written article on this subject. No 
guarantees - use at your own risk! Enjoy.

cheers
andrew
We start off by defining a user group called pspublic, which will 
contain the DBA and APIowner users.

We'll revoke any privileges on the key schemas and certain critical 
system catalog tables from public (which means any user) and then grant 
them back only to the pspublic group.

This is security by obscurity to some extent. Security professionals 
tend to sneer at such measures. If we were relying on this as the main 
means of ensuring our security it would indeed be a poor set of 
measures. However, it is not our main security defense, and it does 
provide some protection against intruders who are unable to obtain the 
information by other means (e.g. by bribing a former member of your 
development staff). The relations I have chosen to hide are those 
catalog tables and views that contain the names of database objects.

Still running as the superuser, connect to the new database and run 
these commands:

create group pspublic with user accountsdba, apiowner;
revoke all on schema pg_catalog, public, information_schema from public;
grant usage on schema pg_catalog, information_schema to group pspublic;
grant all on schema public to group pspublic;
revoke select on table
pg_am, pg_attribute, pg_class, pg_constraint, pg_conversion, pg_database,
pg_group, pg_indexes, pg_language, pg_listener, pg_namespace, pg_opclass,
pg_operator, pg_proc, pg_rewrite, pg_rules, pg_stat_activity,
pg_stat_all_indexes, pg_stat_all_tables, pg_stat_database, 
pg_stat_sys_indexes, pg_stat_sys_tables, pg_stat_user_indexes,
pg_stat_user_tables, pg_statio_all_indexes, pg_statio_all_sequences,
pg_statio_all_tables, pg_statio_sys_indexes, pg_statio_sys_sequences,
pg_statio_sys_tables, pg_statio_user_indexes, pg_statio_user_sequences,
pg_statio_user_tables, pg_stats, pg_tables, pg_trigger, pg_type, pg_user,
pg_views 
from public; 
grant select on table
pg_am, pg_attribute, pg_class, pg_constraint, pg_conversion, pg_database,
pg_group, pg_indexes, pg_language, pg_listener, pg_namespace, pg_opclass,
pg_operator, pg_proc, pg_rewrite, pg_rules, pg_stat_activity,
pg_stat_all_indexes, pg_stat_all_tables, pg_stat_database, 
pg_stat_sys_indexes, pg_stat_sys_tables, pg_stat_user_indexes,
pg_stat_user_tables, pg_statio_all_indexes, pg_statio_all_sequences,
pg_statio_all_tables, pg_statio_sys_indexes, pg_statio_sys_sequences,
pg_statio_sys_tables, pg_statio_user_indexes, pg_statio_user_sequences,
pg_statio_user_tables, pg_stats, pg_tables, pg_trigger, pg_type, pg_user,
pg_views 
to group pspublic; 
revoke select, update on table pg_settings from public; 
grant select,update on table pg_settings to group pspublic;  
 

Now webuser can see nothing of any value, unless we grant explicit 
access, as you should be able to verify.



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Point in Time Recovery

2004-07-05 Thread Simon Riggs
On Mon, 2004-07-05 at 22:46, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > Should we use a different datatype than time_t for the commit timestamp,
> > one that offers more fine grained differentiation between checkpoints?
> 
> Pretty much everybody supports gettimeofday() (time_t and separate
> integer microseconds); you might as well use that.  Note that the actual
> resolution is not necessarily microseconds, and it'd still not be
> certain that successive commits have distinct timestamps --- so maybe
> this refinement would be pointless.  You'll still have to design a user
> interface that allows selection without the assumption of distinct
> timestamps.

Well, I agree, though without the desired-for UI now, I think some finer
grained mechanism would be good. This means extending the xlog commit
record by a couple of bytes...OK, let's live a little.

> > - when we stop, keep reading records until EOF, just don't apply them.
> > When we write a checkpoint at end of recovery, the unapplied
> > transactions are buried alive, never to return.
> > - stop where we stop, then force zeros to EOF, so that no possible
> > record remains of previous transactions.
> 
> Go with plan B; it's best not to destroy data (what if you chose the
> wrong restart point the first time)?
> 

eh? Which way round? The second plan was the one where I would destroy
data by overwriting it, that's exactly why I preferred the first.

Actually, the files are always copied from archive, so re-recovery is
always an available option in the design that's been implemented.

No matter...

> Actually this now reminds me of a discussion I had with Patrick
> Macdonald some time ago.  The DB2 practice in this connection is that
> you *never* overwrite existing logfile data when recovering.  Instead
> you start a brand new xlog segment file, 

Now that's a much better plan...I suppose I just have to rack up the
recovery pointer to the first record on the first page of a new xlog
file, similar to first plan, but just fast-forwarding rather than
forwarding.

My only issue was to do with the secondary Checkpoint marker, which is
always reset to the place you just restored FROM, when you complete a
recovery. That could lead to a situation where you recover, then before
next checkpoint, fail and lose last checkpoint marker, then crash
recover from previous checkpoint (again), but this time replay the
records you were careful to avoid.

> which is given a new "branch
> number" so it can be distinguished from the future-time xlog segments
> that you chose not to apply.  I don't recall what the DB2 terminology
> was exactly --- not "branch number" I don't think --- but anyway the
> idea is that when you restart the database after an incomplete recovery,
> you are now in a sort of parallel universe that has its own history
> after the branch point (PITR stop point).  You need to be able to
> distinguish archived log segments of this parallel universe from those
> of previous and subsequent incarnations.  

That's a good idea, if only because you so easily screw your test data
during multiple recovery situations. But if its good during testing, it
must be good in production too...since you may well perform
recovery...run for a while, then discover that you got it wrong first
time, then need to re-recover again. I already added that to my list of
gotchas and that would solve it.

I was going to say hats off to the Blue-hued ones, when I remembered
this little gem from last year
http://www.danskebank.com/link/ITreport20030403uk/$file/ITreport20030403uk.pdf

> I'm not sure whether Vadim
> intended our StartUpID to serve this purpose, but it could perhaps be
> used that way, if we reflected it in the WAL file names.
> 

Well, I'm not sure about StartUpId...but certainly the high 2 bytes of
LogId look pretty certain never to be anything but zeros. You have 2.4
x 10^14...which is 9,000 years at 1000 log files/sec.
We could use the scheme you describe:
add x to the logid every time you complete an archive recovery...so
the log files look like 00010CE3 after you've recovered a load of
files that look like 0CE3

If you used StartUpID directly, you might just run out...but it's very
unlikely you would ever perform 65000 recovery situations - unless
you've run the code as often as I have :(.

Doing that also means we don't have to work out how to do that with
StartUpID. Of course, altering the length and makeup of the xlog files
is possible too, but that will cause other stuff to stop working

[We'll have to give this a no-SciFi name, unless we want to make
in-roads into the Dr.Who fanbase :) Don't get them started. Better
still, don't give it a name at all.]

I'll sleep on that lot.

Best regards, Simon Riggs


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Recovery Features

2004-07-05 Thread Simon Riggs
On Mon, 2004-07-05 at 23:40, Mike Mascari wrote:
> Simon Riggs wrote:
> 
> > 
> > ...Nobody is shouting YES, so it's a dodo...
> 

> The point at which the above process becomes 
> too complex (or less than obvious) for hand-recovery is precisely 
> when unforeseen consequences of nixing a single transaction become 
> too great.
> 

Agreed. The potential for unforeseen consequences is just too high, and
although I'm fairly sure they would always be spotted during recovery
and cause an error - I think it is something that requires proof. And I
don't have that. So, let's leave that idea alone for 100 years.

> ... hand-recovery ...

hmmm...not sure I know what you mean.

It is very-very-close-to-impossible to edit the transaction logs
manually, unless some form of special-format editor were written for the
purpose.

Is it clear that the PITR features are completely different from
pg_dump? (Which would allow a manual edit and recover). The xlogs are
binary files that refer to all changes to all tables in a cluster
ordered by time, rather than by table.

Best regards, Simon Riggs


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Recovery Features

2004-07-05 Thread Mike Mascari
Simon Riggs wrote:
On Mon, 2004-07-05 at 23:40, Mike Mascari wrote:
 
hmmm...not sure I know what you mean.

It is very-very-close-to-impossible to edit the transaction logs
manually, unless some form of special-format editor were written for the
purpose.
Is it clear that the PITR features are completely different from
pg_dump? (Which would allow a manual edit and recover). The xlogs are
binary files that refer to all changes to all tables in a cluster
ordered by time, rather than by table.
What I meant by hand-restore was
1. A total backup occurs on Monday morning
2. Transactions occur on Monday, Tuesday and Wednesday, with PITR 
archiving enabled

3. Intern deletes everything from obscure_table on Thursday afternoon 
and wants to restore it as it was on Thursday morning

4. On some other machine, the total backup is restored into a new 
cluster, the transaction logs replayed to that point-in-time where 
intern deleted everything from obscure_table

5. The table is dumped manually and restored in the production 
database, because it is known that this table has no logical 
implications for the consistency of other tables.

That's what I meant by hand-restore.
Mike Mascari

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] [BUGS] [CHECKER] 4 memory leaks in Postgresql 7.4.2

2004-07-05 Thread Gaetano Mendola
Alvaro Herrera wrote:
On Mon, Jul 05, 2004 at 05:13:23PM -0400, Bruce Momjian wrote:
Alvaro Herrera wrote:
On Sun, May 02, 2004 at 07:50:46PM -0400, Tom Lane wrote:

It's entirely likely that ecpg's derivative of the backend's datetime
modules contains lots and lots of memory leaks, since AFAIK the palloc
infrastructure is not there in the ecpg environment :-(.
I wonder why is this?  Is there some limitation to using palloc outside
the backend itself?  I ask because I have considered using it outside
Postgres several times (a consideration that has never materialized
yet), and I wonder if it needs something special to work.
The semantics of palloc is that most stuff is freed on statement
completion.  In most cases, interfaces need different semantics so we
haven't seen much need for making something like palloc available to
clients.  I can see ecpg using it in a few cases, and libpq too, but
probably not enough to make it worthwhile.

Yes, I understand that part -- what I was talking about was not using
the code in the Pg interfaces, but in another software project which
also consists of a daemon that has several well defined "durations" of
objects.  In that (as of yet unwritten) code, palloc would fit very
well.  But does palloc depend on some other part of the Postgres code?
If you don't mind you can write your application in C++ and use a
boost smartpointer:
http://www.boost.org/libs/smart_ptr/smart_ptr.htm
Regards
Gaetano Mendola


---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [HACKERS] Security...

2004-07-05 Thread Simon Riggs
On Mon, 2004-07-05 at 23:27, Steve Holdoway wrote:
> Hi folks,
> 
> I'm trying to seriously restrict what a particular user can see within a 
> database. Using various schemas, rules, groups and grants, I've managed 
> to restrict them to only being able to _do_ what I want them to, but 
> they can still _see_ the rest of the schemas, system tables, etc. I've 
> tried revoking everything on public, pg_catalogs, etc, but you can still 
> describe tables.
> 
> Anyone know how to stop this, or if it's even possible??
> 

I think there was some discussion on this on the ODBC list.

Teradata and Oracle use views that have a subselect in them that only
displays objects that a user has at least one privilege on/over.

In Oracle, they're called ALL and USER views, so there are multiple
versions of the schema depending upon your (security) needs. Teradata
gives you the option at system init time.
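A sketch of what such a privilege-filtered view might look like in PostgreSQL (untested; has_table_privilege does exist, but the view name and exact filter here are assumptions, not anything psql actually uses):

```sql
-- Hypothetical "USER_TABLES"-style view: show only relations on which
-- the current user holds at least SELECT privilege.
CREATE VIEW my_visible_tables AS
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND has_table_privilege(current_user, c.oid, 'SELECT');
```

Pointing clients at views like this, instead of at the raw catalogs, is what makes the Oracle ALL_/USER_ scheme work.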

Currently, psql issues complex SQL directly against the catalog, though
I did once have plans to rework that so the same commands would be
available from any interface.

Best regards, Simon Riggs


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] [COMMITTERS] pgsql-server: plperl update from Andrew Dunstan,

2004-07-05 Thread Andrew Dunstan
[redirected to -hackers]
Tom Lane wrote:
[EMAIL PROTECTED] (Joe Conway) writes:
 

plperl update from Andrew Dunstan, deriving (I believe) from Command Prompt's
plperlNG. Review and minor cleanup/improvements by Joe Conway.
   

I've reverted the part of this patch that attempted to make plperl use
rpath, because it doesn't actually work...
 

[snip]
-Wl is a gcc-ism, I believe; it certainly won't work in a direct call to
ld.  Looking at the port makefiles, HPUX is not the only platform that
will barf on this.
 

Is there a more portable way of doing this, via configure maybe?
cheers
andrew
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] [COMMITTERS] pgsql-server: plperl update from Andrew Dunstan, deriving (I believe)

2004-07-05 Thread Tom Lane
Andrew Dunstan <[EMAIL PROTECTED]> writes:
>> I've reverted the part of this patch that attempted to make plperl use
>> rpath, because it doesn't actually work...

> Is there a more portable way of doing this, via configure maybe?

It doesn't look to me like we handle rpath switch syntax via configure,
but by setting variables in the port-specific makefiles.  Maybe you
could extend that scheme so that the switch name is available separately
from $(libdir).

The HPUX case is slightly peculiar because we essentially force rpath to
be used always; enable_rpath is disregarded and we stick what would be
the rpath switches directly into LDFLAGS.  It might be cleaner if it
did this by setting enable_rpath true in the port makefile, instead of
bypassing the rpath variable mechanism altogether.
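The port-makefile factoring Tom describes might look roughly like this (a sketch; the variable names are invented for illustration, not the actual Makefile.port contents):

```makefile
# Hypothetical per-port settings: each port defines only the switch
# prefix, and the common rules append the directory, so non-gcc linkers
# that reject -Wl can supply their own form.

# e.g. in a gcc-based port file (linux):
rpath_prefix = -Wl,-rpath,

# e.g. in the hpux port file (native ld):
# rpath_prefix = +b

# common rule builds the final flag from the prefix plus $(libdir):
rpathdir = $(libdir)
LDFLAGS += $(rpath_prefix)$(rpathdir)
```

With the prefix separated out, plperl (or any loadable module) can reuse it without hard-coding the gcc-only -Wl spelling.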

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Nested Transactions, Abort All

2004-07-05 Thread Alvaro Herrera
On Sat, Jul 03, 2004 at 02:32:44AM -0500, Thomas Swan wrote:
> Alvaro Herrera wrote:

> >What I'd like to do is start the transaction block before the function
> >is called if we are not in a transaction block.  This would mean that
> >when the function calls BEGIN it won't be the first one -- it will
> >actually start a subtransaction and will be able to end it without harm.
> >I think this can be done automatically at the SPI level.
>
> Please tell me there is some sanity in this.   If I follow you
> correctly, at no point should anyone be able to issue an explicit
> begin/end because they are already in an explicit/implicit transaction
> by default...  How is the user/programmer to know when this is the case?

I'm not sure I understand you.  Of course you can issue begin/end.  What
you can't do is issue begin/end inside a function -- you always use
subbegin/subcommit in that case.
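For context, the commands under discussion would be used roughly like this (a sketch of the development-era syntax; SUBABORT is my assumed counterpart to SUBCOMMIT, and the SQL standard spells the same idea SAVEPOINT / RELEASE / ROLLBACK TO):

```sql
BEGIN;                    -- outer transaction (or implicit, around a function)
INSERT INTO t VALUES (1);
SUBBEGIN;                 -- inside a function, a BEGIN maps to this
INSERT INTO t VALUES (2);
SUBABORT;                 -- undo only the subtransaction's work
COMMIT;                   -- t keeps the row (1) but not the row (2)
```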

-- 
Alvaro Herrera ()
"La espina, desde que nace, ya pincha" (Proverbio africano)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] compile errors in new PL/Perl

2004-07-05 Thread Marc G. Fournier
On Fri, 2 Jul 2004, Joshua D. Drake wrote:

So we have three choices as I see it:
1) revert the change
2) require some minimally recent version of perl
3) fix the issue in place
Preferences?
Joshua/Andrew -- do you want to take a shot at making this work on perl 
5.00503?
I personally don't have any desire to make this work on a version of perl 
that the perl community itself suggests you should upgrade from.
Well, backing it out is always an option, since the old version apparently 
did work *shrug*

Personally, rather have the new one in there though ... :)

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Bug with view definitions?

2004-07-05 Thread Christopher Kings-Lynne
 need_paren = (PRETTY_PAREN(context) ?
   !IsA(op->rarg, RangeTblRef) : true);

In a quick glance this code seems close to completely brain dead :-(
For one thing, why isn't it making separate determinations about whether
the left and right inputs of the UNION (resp INTERSECT or EXCEPT)
operator need to be parenthesized?  After that maybe we could figure out
what the individual decisions need to be.
So what are we going to do about it?
Was it one of the pgAdmin guys who wrote it in the first place?
Chris
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Bug with view definitions?

2004-07-05 Thread Tom Lane
Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
>> In a quick glance this code seems close to completely brain dead :-(

> So what are we going to do about it?

Fix it, of course.

(Note I only fixed the parenthesization, I take no responsibility for
the still-pretty-brain-dead layout.)

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Bug with view definitions?

2004-07-05 Thread Christopher Kings-Lynne
So what are we going to do about it?

Fix it, of course.
I just wanted to make sure it happened :)
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Nested Transactions, Abort All

2004-07-05 Thread Dennis Bjorklund
On Mon, 5 Jul 2004, Alvaro Herrera wrote:

> > begin/end because they are already in an explicit/implicit transaction
> > by default...  How is the user/programmer to know when this is the case?
> 
> I'm not sure I understand you.  Of course you can issue begin/end.  What
> you can't do is issue begin/end inside a function -- you always use
> subbegin/subcommit in that case.

I've not understood why we need new tokens for this case. Maybe you've 
explained it somewhere that I've missed. But surely the server knows if you 
are in a transaction or not, and can differentiate between the first BEGIN and 
the next BEGIN.

-- 
/Dennis Björklund


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings