[BUGS] Unable to make DBD-Pg-2.13.1

2009-05-01 Thread Mark

I'm new to PostgreSQL. Just installed the latest version, 8.3.7, on my
FreeBSD system (client + server). All went well...

Until I tried to install DBD-Pg-2.13.1 (for Perl) thereafter. And I'm
getting the weirdest errors that just won't go away (see below). Does
anyone know what the cause might be? Or better still, how to fix it? :)

Thanks,

- Mark


Vmware: {root} % /usr/local/bin/perl Makefile.PL
Configuring DBD::Pg 2.13.1
PostgreSQL version: 80307 (default port: 5432)
POSTGRES_HOME: /usr/local/PostgreSQL
POSTGRES_INCLUDE: /usr/local/PostgreSQL/include
POSTGRES_LIB: /usr/local/PostgreSQL/lib -lssl -lcrypto
OS: freebsd
Using DBI 1.607 (for perl 5.008008 on i386-freebsd-64int) installed in 
/usr/local/standard-perl/lib/perl5/site_perl/5.8.8/mach/auto/DBI/

Writing Makefile for DBD::Pg
Vmware: {root} % make
cc -c  -I/usr/local/PostgreSQL/include -I/usr/local/standard-perl/lib/perl5/site_perl/5.8.8/mach/auto/DBI 
 -DAPPLLIB_EXP="/usr/local/standard-perl/lib/perl5/5.8.8/BSDPAN" -DHAS_FPSETMASK 
-DHAS_FLOATINGPOINT_H -fno-strict-aliasing -pipe -I/usr/local/include -DPGLIBVERSION=80307 
-DPGDEFPORT=5432 -O -pipe   -DPERL_EXTMALLOC_DEF -Dmalloc=Perl_malloc -Dfree=Perl_mfree 
-Drealloc=Perl_realloc -Dcalloc=Perl_calloc -DVERSION=\"2.13.1\"  -DXS_VERSION=\"2.13.1\" 
-DPIC -fPIC "-I/usr/local/standard-perl/lib/perl5/5.8.8/mach/CORE"   Pg.c
cc -c  -I/usr/local/PostgreSQL/include -I/usr/local/standard-perl/lib/perl5/site_perl/5.8.8/mach/auto/DBI 
 -DAPPLLIB_EXP="/usr/local/standard-perl/lib/perl5/5.8.8/BSDPAN" -DHAS_FPSETMASK 
-DHAS_FLOATINGPOINT_H -fno-strict-aliasing -pipe -I/usr/local/include -DPGLIBVERSION=80307 
-DPGDEFPORT=5432 -O -pipe   -DPERL_EXTMALLOC_DEF -Dmalloc=Perl_malloc -Dfree=Perl_mfree 
-Drealloc=Perl_realloc -Dcalloc=Perl_calloc -DVERSION=\"2.13.1\"  -DXS_VERSION=\"2.13.1\" 
-DPIC -fPIC "-I/usr/local/standard-perl/lib/perl5/5.8.8/mach/CORE" 
dbdimp.c

dbdimp.c: In function `pg_st_prepare_statement':
dbdimp.c:2122: warning: assignment makes pointer from integer without a cast
dbdimp.c: In function `pg_db_cancel':
dbdimp.c:4649: `PGcancel' undeclared (first use in this function)
dbdimp.c:4649: (Each undeclared identifier is reported only once
dbdimp.c:4649: for each function it appears in.)
dbdimp.c:4649: `cancel' undeclared (first use in this function)
dbdimp.c:4650: syntax error before `char'
dbdimp.c:4675: `errbuf' undeclared (first use in this function)
dbdimp.c:4693: `result' undeclared (first use in this function)
dbdimp.c:4700: `status' undeclared (first use in this function)
dbdimp.c: In function `handle_old_async':
dbdimp.c:4762: `PGcancel' undeclared (first use in this function)
dbdimp.c:4762: `cancel' undeclared (first use in this function)
dbdimp.c:4763: syntax error before `char'
dbdimp.c:4769: `cresult' undeclared (first use in this function)
dbdimp.c:4769: `errbuf' undeclared (first use in this function)
*** Error code 1


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Unable to make DBD-Pg-2.13.1

2009-05-01 Thread Mark
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: vrijdag 1 mei 2009 16:37
To: Craig Ringer
Cc: Mark; pgsql-bugs@postgresql.org
Subject: Re: [BUGS] Unable to make DBD-Pg-2.13.1

Craig Ringer  writes:

>> Mark wrote:

>>> Until I tried to install DBD-Pg-2.13.1 (for Perl) thereafter. And I'm
>>> getting the weirdest errors that just won't go away (see below). Does
>>> anyone know what the cause might be? Or better still, how to fix it?

>> Is it possible that you have old or conflicting PostgreSQL headers in
>> /usr/local/include ? Or even in /usr/include ?

> The PGcancel typedef was added in 8.0, so the complaints about that
> suggest strongly that a pre-8.0 version of libpq-fe.h is being read.

Thanks! You're right. :) Though I (thought I) had removed all remnants of
older installs (like the /usr/local/include/postgres/ directory and such),
it seems the single /usr/local/include/libpq-fe.h file still lingered on. I
had explicitly specified the pg_config file at its new location, though,
as being at:

/usr/local/PostgreSQL/bin/pg_config

But "/usr/local/PostgreSQL/bin" wasn't on the path at compile time, so
that may have accounted for something, too.

P.S. This morning I had already found out about the stray libpq-fe.h file,
as I was building a pgsql.so extension for PHP, and phpinfo.php reported a
7.1.4 client. I then did a grep on all include dirs and found the old
stub. Oddly enough, I didn't think far enough to realize this might cause
DBD-Pg-2.13.1 to fail to build properly, too. Doh! :)

All is fine again. :)

- Mark




Re: [BUGS] BUG #4787: Hardlink (ln) causes startup failure with bizarre "timezone_abbreviations" error

2009-05-01 Thread Mark
-Original Message-
From: pgsql-bugs-ow...@postgresql.org
[mailto:pgsql-bugs-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: vrijdag 1 mei 2009 17:46
To: Mark Kramer
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #4787: Hardlink (ln) causes startup failure with
bizarre "timezone_abbreviations" error

"Mark Kramer"  writes:

> > I have my PostgreSQL installed in /usr/local/PostgreSQL/ (cleaner for
> > updates, instead of just /usr/local) As a result, I made hard-links
> > like this,

> > cd /usr/local/bin/
> > ln /usr/local/PostgreSQL/bin/pg_ctl pg_ctl

> This isn't going to work because pg_ctl assumes it can find postgres in
> the same directory it is in. Try using a symlink instead.  (It'll be
> less likely to fail miserably after an upgrade, too.)

I tried a symlink as well. Then pg_ctl *can* start the server (which is
kind of odd by itself, that it can do so now whereas not with a hardlink,
unless pg_ctl actually reads the symlink target, which is very unlikely),
but it reports a spurious error nonetheless: "could not start server"
(whilst it DOES start the server just fine).

As for pg_ctl assuming it can find postgres in the same directory it is
in, it SHOULD. :) Basically, I hard-linked all files in
/usr/local/PostgreSQL/bin/ to /usr/local/bin/. So, even when pg_ctl got
started from /usr/local/bin/, it should have found /usr/local/bin/postgres
right under its very nose! Also, the error message actually DOES seem to
come from postgres (postgres[9742]: [6-1] FATAL), but that may well be an
optical illusion on my end (as pg_ctl could log as 'postgres' too: I
haven't examined that yet).

Clearly, PostgreSQL just really seems to want to be started from its
original install location.

> > I get this error, though:
> > May  1 04:40:26 asarian-host postgres[9742]: [6-1] FATAL: invalid
> > value for parameter "timezone_abbreviations": "Default"

> I agree this is an odd error message though.  Perhaps you hardlinked a
> few other things you didn't tell us about?  I'm not sure what it would
> take to make this be the first complaint. What is probably happening is
> that postgres is trying to find /usr/local/PostgreSQL/share/ relative
> to itself, but I'd have thought it would notice the problem sooner.

The /share/ thingy is what I strongly suspected too; but since the bug
report FAQ strongly discourages one from writing your assumptions about
what you *think* might be the issue, I refrained from mentioning it. :)
But yes, that seems like a logical place to look.

- Mark




Re: [BUGS] BUG #4787: Hardlink (ln) causes startup failure with bizarre "timezone_abbreviations" error

2009-05-02 Thread Mark
-Original Message-
From: pgsql-bugs-ow...@postgresql.org
[mailto:pgsql-bugs-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: vrijdag 1 mei 2009 23:57
To: Mark; pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #4787: Hardlink (ln) causes startup failure with
bizarre "timezone_abbreviations" error 

> What I'm inclined to do is modify PostmasterMain so that immediately
> after find_my_exec, it checks that get_share_path returns the name of
> a readable directory.

I understand the rationale for relocatable packages. So, I guess hardlinks
are out. But, barring hardlinks, perhaps, in the presence of a symlink, a
simple 'readlink' call could be used to auto-correct PostgreSQL's
base location? À la:

char buf[1024];
ssize_t len;

if ((len = readlink("/usr/local/bin/pg_ctl", buf, sizeof(buf) - 1)) != -1)
    buf[len] = '\0';    /* readlink() does not NUL-terminate */

Symlinks are used quite often, *especially* when dealing with relocatable
packages (read: ones that will likely not reside in /usr/local/, etc.). And
it would only require two or three extra lines of code, no?

At any rate, I appreciate you looking into this.

- Mark




Re: [BUGS] BUG #4787: Hardlink (ln) causes startup failure with bizarre "timezone_abbreviations" error

2009-05-02 Thread Mark
On Sat, 02 May 2009 14:47:48 GMT, Tom Lane wrote

> Mark  writes:

> > I understand the rationale for relocatable packages. So,
> > I guess hardlinks are out. But, barring hardlinks,
> > perhaps, in the existence of a symlink, a simple 'readlink'
> > function could be done to auto-correct PostgreSQL's
> > base-location? Ala:
> 
> That's exactly what it already does, and why it would've worked
> if you'd used symlinks not hardlinks.

Interesting. Yet, as I reported earlier, whilst a symlink does seem to start
the server, pg_ctl takes a long time to do so, and then reports "could not
start server" anyway. But it actually *does* get started. So I figured maybe
something was not entirely right with the symlink, either.

- Mark




[BUGS] Bug: 8.0.0b5 Win Server Installer

2004-11-30 Thread Mark
Test platform = Win 2000 plus all service packs, hotfixes.
No previous PostgreSQL ever installed on the machine before.

8.0.0 beta 5 installation executable from
http://pgfoundry.org/projects/pginstaller

Installed NOT AS A SERVICE, repeat NOT as a WinNT/2K service.
Told installer to put installation in

C:\PostgreSQL\8.0.0-beta5\*

but almost everything went instead to

C:\Program Files\PostgreSQL\8.0.0-beta5\*

The installer script seems "hard wired" to
use "Program Files" folder.  This is a bug.

The only files that were put into the specified
folder were

   C:\PostgreSQL\8.0.0-beta5\bin\ecpg.exe
   C:\PostgreSQL\8.0.0-beta5\include\*.h  (C header files)
   C:\PostgreSQL\8.0.0-beta5\lib\(*.dll | *.a) (dev stuff)

While everything else went into

   C:\Program Files\PostgreSQL\8.0.0-beta5\*

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[BUGS] BUG #5851: ROHS (read only hot standby) needs to be restarted manually in somecases.

2011-01-26 Thread Mark

The following bug has been logged online:

Bug reference:  5851
Logged by:  Mark
Email address:  dvlh...@gmail.com
PostgreSQL version: 9.0.2 x86_64
Operating system:   CentOS release 5.5 (Final) |  2.6.18-194.17.1.el5 #1 SMP
X86_64
Description:ROHS (read only hot standby) needs to be restarted
manually in somecases.
Details: 

Getting a breakdown in streaming replication. My current workaround is to
restart the PG instance on the ROHS. It doesn't seem to affect the master
any, and doesn't require a re-rsync of the base to get replication going
again. This has happened with 9.0.2 twice now in a month.



2011-01-26 08:35:42 MST :: (postgres@10.80.2.89) LOG:  could not receive data from client: Connection reset by peer
2011-01-26 08:35:42 MST :: (postgres@10.80.2.89) LOG:  unexpected EOF on standby connection

This was all I have in the master's log with the level set to debug1. I
have reset it to debug5 and will just wait till it dies again, hopefully
getting a better idea of what is going on. Nothing is being logged on the
standby. I can't find anything else to grab that shows this breakdown in
streaming replication that won't start back up.

This is a somewhat *long* distance replication over a 100 Mbit metro line.
We have had routing issues in the past and seen replication fall behind, but
once connectivity is restored we see it catch up without a restart of the
standby.

It probably only ships a few GB of changes a day.

These are production machines, so I can't do too much playing around to try
and induce "issues".


 PostgreSQL 9.0.2 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
4.1.2 20080704 (Red Hat 4.1.2-48), 64-bit
(1 row)




Is this a known issue? I didn't see anything that looked like this when I
gave the mailing list archive a quick glance.

Is there somewhere else I should be looking for more details on why this
is happening?


I can post the configs if you all want them, but nothing special is
happening with regards to them.

thank you,

Mark



Re: [BUGS] BUG #5851: ROHS (read only hot standby) needs to be restarted manually in somecases.

2011-01-28 Thread mark
When showing the settings on the slave or master, all tcp_keepalive
settings (idle, interval and count) are showing 0.

The config file shows interval and count commented out, but idle in the
config file is set to 2100.

Is it possible that "show tcp_keepalive_idle;" isn't reporting accurately?
(Or that a value that high isn't being accepted?)

I have reloaded the configs and am still seeing 0's.



I assume you would suggest I turn that number down... a lot. 



..: Mark




> -Original Message-
> From: Robert Haas [mailto:robertmh...@gmail.com]
> Sent: Friday, January 28, 2011 6:48 AM
> To: Mark
> Cc: pgsql-bugs@postgresql.org
> Subject: Re: [BUGS] BUG #5851: ROHS (read only hot standby) needs to be
> restarted manually in somecases.
> 
> On Wed, Jan 26, 2011 at 8:24 PM, Mark  wrote:
> > getting a break down in streaming rep. my current work around is to
> restart
> > the PG instance on the ROHS. doesn't seem to affect the master any.
> doesn't
> > require a re-rsync of the base to get replication going again. has
> happened
> > with 9.0.2 twice now in a month.
> >
> >
> >
> > 2011-01-26 08:35:42 MST :: (postgres@10.80.2.89) LOG:  could not
> receive
> > data
> > from client: Connection reset by peer
> > 2011-01-26 08:35:42 MST :: (postgres@10.80.2.89) LOG:  unexpected EOF
> on
> > standby connection
> >
> > this was all I have in the master's log with the level set to debug
> 1, I
> > have reset it to debug 5 and will just wait till it dies again and
> hopefully
> > get a better idea of what is going on. nothing is being logged to the
> > standby.
> 
> Maybe a break in network connectivity is leading the master to think
> that the slave is dead, while the slave still thinks it's connected.
> You might need to adjust the TCP keepalive parameters the slave uses
> to connect to the master.
> 
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company




Re: [BUGS] BUG #5851: ROHS (read only hot standby) needs to be restarted manually in somecases.

2011-01-30 Thread mark


> -Original Message-
> From: Robert Haas [mailto:robertmh...@gmail.com]
> Sent: Sunday, January 30, 2011 12:19 PM
> To: mark
> Cc: pgsql-bugs@postgresql.org
> Subject: Re: [BUGS] BUG #5851: ROHS (read only hot standby) needs to be
> restarted manually in somecases.
> 
> On Fri, Jan 28, 2011 at 1:03 PM, mark  wrote:
> > When showing the setting on the slave or master all tcp_keepalive
> settings
> > (idle, interval and count) are showing 0;
> >
> > The config file shows interval and count commented out, but idle in
> the
> > config file is set to 2100.
> >
> > Possible that "show tcp_keepalive_idle;" isn't reporting accurately ?
> (or a
> > value that high isn't be accepted?)
> >
> > I have reloaded configs and still seeing 0's
> >
> >
> >
> > I assume you would suggest I turn that number down... a lot.
> 
> Yeah, the defaults are way too long for our purposes.  The way to get
> this set correctly, I think, is to set it in the primary_conninfo
> stream on the slave.  You end up with something like this:
> 
> primary_conninfo='host=blahblah user=bob keepalives_idle=XX
> keepalives_interval=XX keepalives_count=XX'
> 
Thanks, I will try this on Monday and will report back if it fixes the
problem. (However, since I can't reproduce the issue on demand, it might be
a waiting game. Might not know for a month or so, though.)

-Mark


> I'm of the opinion that we really need an application-level keepalive
> here, but the above is certainly a lot better than nothing.
> 
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company




Re: [BUGS] BUG #5851: ROHS (read only hot standby) needs to be restarted manually in somecases.

2011-02-08 Thread mark
On Sun, Jan 30, 2011 at 12:45 PM, mark  wrote:
>
>
>> -Original Message-
>> From: Robert Haas [mailto:robertmh...@gmail.com]
>> Sent: Sunday, January 30, 2011 12:19 PM
>> To: mark
>> Cc: pgsql-bugs@postgresql.org
>> Subject: Re: [BUGS] BUG #5851: ROHS (read only hot standby) needs to be
>> restarted manually in somecases.
>>
>> On Fri, Jan 28, 2011 at 1:03 PM, mark  wrote:
>> > When showing the setting on the slave or master all tcp_keepalive
>> settings
>> > (idle, interval and count) are showing 0;
>> >
>> > The config file shows interval and count commented out, but idle in
>> the
>> > config file is set to 2100.
>> >
>> > Possible that "show tcp_keepalive_idle;" isn't reporting accurately ?
>> (or a
>> > value that high isn't be accepted?)
>> >
>> > I have reloaded configs and still seeing 0's
>> >
>> >
>> >
>> > I assume you would suggest I turn that number down... a lot.
>>
>> Yeah, the defaults are way too long for our purposes.  The way to get
>> this set correctly, I think, is to set it in the primary_conninfo
>> stream on the slave.  You end up with something like this:
>>
>> primary_conninfo='host=blahblah user=bob keepalives_idle=XX
>> keepalives_interval=XX keepalives_count=XX'
>>
> Thanks I will try this on Monday and will report back if it fixes the
> problem. (however since I can't reproduce the issue on demand it might be a
> waiting game. Might not know for a month or so tho)
>
> -Mark
>
>
>> I'm of the opinion that we really need an application-level keepalive
>> here, but the above is certainly a lot better than nothing.

My streaming replication woes continue.


I made those changes in the recovery.conf file, but I am still having
streaming replication stay broken after any sort of network interruption,
until someone manually comes along and fixes things by restarting the
standby or, if it's been too long, resynchronizing the base.

I think a network interruption is triggering the breakdown, but I don't
have anything to prove it.

wal_keep_segments is set to 250, which was supposed to give us a few hours
to fix the issue, but it seems we blew through that many last night, and by
the time someone got around to fixing it the standby was too far behind.


My #1 problem with this right now is that I can't seem to reproduce it on
demand with virtual machines in our development area.

This is the recovery.conf file; see any problems with it? Maybe I didn't
get some syntax right?

[postgres@ data9.0]$ cat recovery.conf
standby_mode = 'on'
primary_conninfo = 'host= port=5432 user=postgres
keepalives_idle=30 keepalives_interval=30 keepalives_count=30'



thanks
..: Mark

P.S. Looking forward to 9.1, where a standby can be started streaming
from scratch. That sounds nice.

>>
>> --
>> Robert Haas
>> EnterpriseDB: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>
>



Re: [BUGS] BUG #5851: ROHS (read only hot standby) needs to be restarted manually in somecases.

2011-02-08 Thread mark

> -Original Message-
> From: Fujii Masao [mailto:masao.fu...@gmail.com]
> Sent: Tuesday, February 08, 2011 4:00 PM
> To: mark
> Cc: Robert Haas; pgsql-bugs@postgresql.org
> Subject: Re: [BUGS] BUG #5851: ROHS (read only hot standby) needs to be
> restarted manually in somecases.
> 
> On Wed, Feb 9, 2011 at 6:36 AM, mark  wrote:
> > this is the recovery.conf file, see any problems with it? maybe I
> > didn't do some syntax right right ?
> >
> > [postgres@ data9.0]$ cat recovery.conf
> > standby_mode = 'on'
> > primary_conninfo = 'host= port=5432 user=postgres
> > keepalives_idle=30 keepalives_interval=30 keepalives_count=30'
> 
> This setting would lead TCP keepalive to take about 930 seconds
> (= 30 + 30 * 30) to detect the network outage. If you want to stop
> replication as soon as the outage happens, you need to decrease
> the keepalive setting values.

What numbers would you suggest? I have been guessing and probably doing a
very poor job of it. 

I am turning knobs and not getting any meaningful changes with respect to
my problem. So either I am not turning them correctly, or they are not the
right knobs for my problem.

Trying to fix my own ignorance here. (should I move this off the bugs list,
since maybe it's not a bug?)

I have tried leaving the settings unspecified in the recovery file, and I
have tried the following combinations in the recovery file:

(~two weeks and it died)
keepalives_idle=0
keepalives_interval=0
keepalives_count=0

(~two weeks and it dies)
keepalives_idle=30
keepalives_interval=30
keepalives_count=30

(this didn't work either, don't recall how long this lasted, maybe a month)

keepalives_idle=2100
keepalives_interval=0
keepalives_count=0






Background is basically this: we are doing streaming replication over a
WAN, shipping probably about 5 GB of changes per day; hardware on both ends
can easily keep up with that. It runs over a shared metro line with about
3-5 MB/s that I can count on, depending on the time of day. I have
wal_keep_segments at 250 (I don't care about the disk overhead for this,
since I wanted to avoid WAL archiving). The link is being severed more
often than usual lately while some network changes are being made, so while
I expect that to improve in the future, this isn't exactly the most
reliable connection. So getting whatever I can right is of value to me.

Typically I see streaming replication break down for good a few hours after
something causes an interruption in networking. Nagios notifications lag
some, but not hours, and have to go through a few people before I find out
about it. When checking the Nagios logs, I don't see pages about the
distance between the master and the standby growing during this time;
then, once I see the first unexpected EOF, the distance between the master
and standby grows further and further until it gets fixed or we have to
re-sync the whole base.


Again, I can't seem to duplicate this problem on demand with virtual
machines. I start up a master and a standby, set up streaming replication,
kick off a multi-hour or multi-day pgbench run, and start messing with
networking. Every time I try to duplicate this synthetically, the standby
picks right back up where it left off and catches back up.

I am at a loss, so I do appreciate everyone's help.

Thanks in advance

-Mark

> 
> Regards,
> 
> --
> Fujii Masao
> NIPPON TELEGRAPH AND TELEPHONE CORPORATION
> NTT Open Source Software Center




Re: [BUGS] BUG #5851: ROHS (read only hot standby) needs to be restarted manually in somecases.

2011-03-03 Thread mark


> -Original Message-
> From: Robert Haas [mailto:robertmh...@gmail.com]
> Sent: Thursday, March 03, 2011 9:04 AM
> To: mark
> Cc: Fujii Masao; pgsql-bugs@postgresql.org
> Subject: Re: [BUGS] BUG #5851: ROHS (read only hot standby) needs to be
> restarted manually in somecases.
> 
> On Tue, Feb 8, 2011 at 7:23 PM, mark  wrote:
> > (~two weeks and it dies)
> > keepalives_idle=30
> > keepalives_interval=30
> > keepalives_count=30
> 
> Maybe something like this:
> 
> keepalives_idle=60
> keepalives_interval=5
> keepalives_count=10
> 
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company


Thank you !


Things have been more stable with regards to our WAN links. I am sure that
won't last very long, and I expect to find out sooner rather than later
whether these changes mitigate the issue I had been seeing.


Again, thank you

~Mark




[BUGS] BUG #6029: packaged installer fails to configure ldap

2011-05-17 Thread mark

The following bug has been logged online:

Bug reference:  6029
Logged by:  mark
Email address:  m...@remidata.com
PostgreSQL version: 9.0.2
Operating system:   RHEL 6.0 x86_64
Description:packaged installer fails to configure ldap
Details: 

The source code installation can be configured to work with LDAP, but the
one-click installer cannot by default.

The one-click installer fails to detect openldap and does not attempt to
configure or link with openldap.  We first noticed pg_hba.conf disallowed
specifying ldap authentication.  We verified openldap and openldap-devel are
installed and up-to-date.  We then verified using "pg_config --configure"
and "ldd postgres" that PGHOME was not built with LDAP support.  For some
reason the one-click installer did not detect that openldap was installed
and so did not include it in the configure options.

I have tested this on RHEL 5.5 and RHEL 6.0, both x86_64.  On all servers we
installed openldap and openldap-devel for x86_64 prior to installing
postgres.  On the RHEL 6.0 server we additionally have ldap installed via
Centify.  We can use ldapsearch to connect to the ldap server and query it,
so we know our RHEL servers are functional in that respect.

To make the one-click installer make use of the openldap software we had to
install the i686 edition of it (yum install openldap.i686).  This "trick" is
not required when installing via source code.

We wiped and rebuilt the servers multiple times to verify this behavior.

To install postgresql 9.0 from source we first installed openldap and
openldap-devel (both were x86_64 only - we made sure the i686 was not
installed), and then we ran "./configure --with-ldap" followed by make &&
make install.  The postgres executable appears to support LDAP (verified
using "pg_config --configure", which shows the --with-ldap option, and
"ldd postgres", which shows /usr/lib64/libldap-2.3.so.0).


Server Specs:
--
1. The RHEL 5.5 server is a vm with 4 cores and 8 GB RAM running kernel
2.6.18-194.el5 #1 SMP x86_64 GNU/Linux.

2. The RHEL 6.0 server is a HP DL580G7 with 32 cores and 256 GB RAM running
kernel 2.6.32-71.el6.x86_64 #1 SMP GNU/Linux.



[BUGS] pgbench -T isn't a hard cutoff.

2011-08-26 Thread mark
Using the -T flag in pgbench, I noticed that -T is just a best-effort limit
rather than a hard cutoff.

With a (misbehaving) pooler and a "large" number of clients+jobs, it's
possible for the pgbench run to extend several seconds or even minutes past
the time allotted by -T (or to hang indefinitely if the transactions never
complete due to pooler issues).

Expected behavior would be for -T to mean a hard cutoff.


Thoughts ?

-Mark




Re: [BUGS] question/suggestion Message-id: <493823b5.1030...@hogranch.com>

2008-12-06 Thread Mark Kirkwood

John R Pierce wrote:

chris wood wrote:
At a detailed level (which is NOT the direction I want this thread to 
go) I do not agree with your statement that my proposal has no “hope 
of ACID compliance or transactional integrity”. When the “slices” are 
stored back to the cloud, this is the equivalent of a commit and the 
integrity thereof is as good as what ever the underlying technology 
is. Is the concurrency as good as native Postgres? Of course not. Is 
the commit/rollback flexibility as good as native Postgres? Again no. 
But what’s the alternative? Watch cloud computing take off leaving 
Postgres with the reputation of “great database software in 
yesterday’s era of monolithic servers”?


Even something as simple as a SERIAL sequence would be a nightmare in
a distributed cloud environment without a complex centralized
arbiter. The same goes for most any other sort of update/query that
depends on consistency of data.


How do you reconcile a bank account when the money has been 
simultaneously withdrawn from several ATMs at different locations at 
the same time? "Please, sir, give us our money back?" ? I don't think 
the banks would be happy with that implementation.


If the data is partitioned across the cloud ('one version of the
truth'), things like JOINs are very, very difficult to implement
efficiently. Take away JOINs and you might as well be doing simple
ISAM like we did back in the '70s, before Codd and his relational
database concepts upon which SQL is based.


no, IMHO, the cloud people are better off inventing their own data 
models and their own proprietary query languages suited to the 
architecture. maybe SQL and its concepts of 'one version of the truth' 
and 'data integrity' are quaint relics of another age, so be it.





Objecting to an idea because it is difficult to implement is not
necessarily a clincher: there are projects trying to adapt Postgres to
more cloud-like capabilities (e.g. Greenplum, Netezza), though neither of
these is open source. There is also Pgcluster; however, I'm not sure
that counts as cloud-like in its architecture...


regards

Mark




[BUGS] BUG #4584: PostgreSQL service doesn't start

2008-12-16 Thread Mark Muzenhardt

The following bug has been logged online:

Bug reference:  4584
Logged by:  Mark Muzenhardt
Email address:  mark.muzenha...@arcor.de
PostgreSQL version: 8.3.5-2
Operating system:   Windows XP professional
Description:PostgreSQL service doesn't start
Details: 

I have installed the one-click installer of PostgreSQL on a virtual machine
with a fresh Windows XP Professional installation (nothing but PostgreSQL
is installed). I did not get access to the database, so I began to search
for the reason. I found that the PostgreSQL service was not started and
tried to start it myself. The service refuses to run, no matter what I've
tried. A postgres user was created by the installer, and I installed
PostgreSQL with the Administrator account, so this issue cannot be the
problem.



[BUGS] BUG #4612: lc_numeric setting ignored

2009-01-12 Thread Mark Hayen

The following bug has been logged online:

Bug reference:  4612
Logged by:  Mark Hayen
Email address:  mark.ha...@atosorigin.com
PostgreSQL version: 8.3.5
Operating system:   Ubuntu 8.10
Description:lc_numeric setting ignored
Details: 

When setting the LC_NUMERIC system variable to nl_NL.UTF-8 at both the OS
level and in the database itself (postgresql.conf), the database refuses to
accept a decimal comma. 

psql output:

dimpact=# show lc_numeric;
 lc_numeric  
-------------
 nl_NL.UTF-8
(1 row)

dimpact=# select totaalbedragaanslag from import.aanslag;
 totaalbedragaanslag 
---------------------
               33.33
(1 row)



Re: [BUGS] BUG #4612: lc_numeric setting ignored

2009-01-13 Thread Mark Hayen
Hi Bruce,

Thank you for replying.
Does this mean that there is no way to do a "\copy from" using CSV files
containing decimals with
a comma?

Mark Hayen
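[One common workaround - an editor's sketch, not an answer from the thread - is to rewrite the decimal separator before handing the file to \copy. The column layout and sample values below are hypothetical:]

```python
import csv
import io

def commas_to_dots(src, numeric_cols):
    """Rewrite decimal commas as dots in the given columns of a
    semicolon-delimited CSV, yielding rows ready to feed to \\copy."""
    for row in csv.reader(src, delimiter=';'):
        for i in numeric_cols:
            row[i] = row[i].replace(',', '.')
        yield row

# Hypothetical sample: amounts in column 1 use a decimal comma.
data = io.StringIO("A1;33,33\nA2;12,50\n")
rows = list(commas_to_dots(data, numeric_cols=[1]))
print(rows)  # [['A1', '33.33'], ['A2', '12.50']]
```

The rewritten rows can then be written back out and loaded with \copy ... CSV as usual.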




[BUGS] BUG #4787: Hardlink (ln) causes startup failure with bizarre "timezone_abbreviations" error

2009-05-01 Thread Mark Kramer

The following bug has been logged online:

Bug reference:  4787
Logged by:  Mark Kramer
Email address:  r...@asarian-host.net
PostgreSQL version: 8.3.7
Operating system:   FreeBSD 7.1
Description:Hardlink (ln) causes startup failure with bizarre
"timezone_abbreviations" error
Details: 

I have PostgreSQL installed in /usr/local/PostgreSQL/ (cleaner for
updates than installing straight into /usr/local). As a result, I made
hard links like this:

cd /usr/local/bin/
ln /usr/local/PostgreSQL/bin/pg_ctl pg_ctl

Etc. It seems PostgreSQL can't handle this. When I try to start the server
with:

/usr/bin/su -l pgsql -c "exec /usr/local/bin/pg_ctl start -D
/var/db/PostgreSQL -w -s -m fast"

I get this error, though:

May  1 04:40:26 asarian-host postgres[9742]: [6-1] FATAL:  invalid value for
parameter "timezone_abbreviations": "Default"

Which is a silly error, because it's rather untrue; and it's rather strange,
honestly, for PostgreSQL to want to be started from a hardcoded location.
Starting up the usual way, with:

/usr/bin/su -l pgsql -c "exec /usr/local/PostgreSQL/bin/pg_ctl start -D
/var/db/PostgreSQL -w -s -m fast"

works just fine. So, at the very least, change the error message to
something that actually makes sense, like "FATAL: binary started from a
location other than the one used at compile time", or something to that
effect. Better still, there's no need for PostgreSQL to have this hard
location requirement: its lib paths have been set (at boot) with ldconfig, so
it should find whatever libs it needs, regardless of where the binary
used to start the server resides.



[BUGS] Cursor with hold emits the same row more than once across commits in 8.3.7

2009-06-08 Thread Mark Kirkwood
Short Desc: Cursor with hold emits the same row more than once across 
commits in 8.3.7

Os : Debian Etch amd64 / Ubuntu Jaunty amd64
Pg : 8.3.7
Build options: Official package and also compiled from source with 
--enable-integer-datetimes

Detailed Desc:

A construction of the form

DECLARE cur CURSOR WITH HOLD FOR SELECT * FROM obj

loop
   FETCH 1000 FROM cur
   process 'em
   COMMIT

results in some of the same rows being emitted more than once, although the 
final rowcount is correct (i.e. some rows end up never being seen).
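[The loop above, sketched concretely in Python - an editor's illustration only; the original test programs were Perl DBI and C, and the fake cursor here merely stands in for a real DB-API cursor over the WITH HOLD cursor:]

```python
def fetch_in_batches(cur, batch_size=1000):
    """Drain a cursor in fixed-size batches, mirroring the FETCH 1000 loop."""
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        for row in rows:
            yield row

class FakeCursor:
    """Minimal stand-in for a DB-API cursor, for illustration."""
    def __init__(self, rows):
        self._rows = list(rows)
    def fetchmany(self, n):
        batch, self._rows = self._rows[:n], self._rows[n:]
        return batch

seen = list(fetch_in_batches(FakeCursor(range(10)), batch_size=4))
print(len(seen), len(set(seen)))  # 10 10 -- a correct run sees each row once
```

The reported bug meant the real cursor could hand back a row twice while never emitting another, so len(set(seen)) would fall short of the total even though len(seen) matched the table size.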


Originally discovered using a Perl DBI program; we wondered if the 
version of DBD::Pg might be an issue, so a C program was written 
to test this - and it exhibits the problem too (see attached for schema 
and program). The table rows are reasonably wide:



select attname,n_distinct,avg_width from pg_stats where tablename='obj';
   attname   | n_distinct | avg_width
-------------+------------+-----------
 obj_id      |         -1 |         4
 obj_link_id |          5 |         4
 obj_fil     |      13035 |      1188
which may be a factor (tuplestore issues?)... The table is reasonably 
sizable (1000 rows). I can attach the generation program for this 
dataset if required.


regards

Mark



cursor-bug.tar.gz
Description: GNU Zip compressed data



Re: [HACKERS] [BUGS] Cursor with hold emits the same row more than once across commits in 8.3.7

2009-06-10 Thread Mark Kirkwood

Tom Lane wrote:

Given that RC freeze is nearly upon us for 8.4, and that we need a
reasonably non-invasive fix for 8.3 anyway, I propose that for now
we just deal with the syncscan issue by tweaking heap_rescan so that
rs_startblock doesn't get changed.  It looks like that's about a
three-line patch.  The question of how cursors should behave with
respect to volatile functions can be documented and left for another
time.
  


Sounds like a good approach.

Mark



[BUGS] BUG #5031: DATE_TRUNC returns the wrong value when specifying MONTH

2009-09-02 Thread Mark Douglas

The following bug has been logged online:

Bug reference:  5031
Logged by:  Mark Douglas
Email address:  m...@steelhousemedia.com
PostgreSQL version: 8.4.0
Operating system:   Ubuntu Linux
Description:DATE_TRUNC returns the wrong value when specifying MONTH
Details: 

The following use of DATE_TRUNC returns the wrong value. I called the
function on 2009-09-02. It should return '2009-09-01 00:00:00' for the
following usage:

SELECT DATE_TRUNC('MONTH', CURRENT_DATE);

It instead returns '2009-08-31 17:00:00'.

Casting CURRENT_DATE to a TIMESTAMP causes it to return the correct value
but that shouldn't be required. Cast example:

SELECT DATE_TRUNC('MONTH', CAST(CURRENT_DATE AS TIMESTAMP));
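[For reference, the expected behaviour - truncation to midnight on the first of the month - can be modelled in a few lines of Python; a sketch, not PostgreSQL's implementation:]

```python
from datetime import date, datetime

def month_trunc(d):
    """Rough equivalent of DATE_TRUNC('MONTH', d): first of month, midnight."""
    return datetime(d.year, d.month, 1)

print(month_trunc(date(2009, 9, 2)))  # 2009-09-01 00:00:00
```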



Re: [BUGS] BUG #5031: DATE_TRUNC returns the wrong value when specifying MONTH

2009-09-02 Thread Mark Douglas
I have my timezone set to GMT so there really shouldn't be any time zone 
adjustments.

Mark


On 9/2/09 10:01 PM, "Tom Lane"  wrote:

"Mark Douglas"  writes:
> The following use of DATE_TRUNC returns the wrong value. I called the
> function on 2009-09-02. It should return '2009-09-01 00:00:00' for the
> following usage:

> SELECT DATE_TRUNC('MONTH', CURRENT_DATE);

> It instead returns '2009-08-31 17:00:00.

Really?  What timezone setting are you using?  I get

postgres=# SELECT DATE_TRUNC('MONTH', CURRENT_DATE);
       date_trunc
------------------------
 2009-09-01 00:00:00-04
(1 row)

with timezone set to 'America/New_York' or equivalent.
This test might also be instructive:

postgres=# select CURRENT_DATE::timestamptz;
      timestamptz
------------------------
 2009-09-03 00:00:00-04
(1 row)


regards, tom lane



Re: [BUGS] BUG #5031: DATE_TRUNC returns the wrong value when specifying MONTH

2009-09-03 Thread Mark Douglas
You're correct. When I run this from psql it returns the correct result. When I 
run it from DBVisualizer, which I normally use, it adjusts the result to my 
local time zone. Thanks for looking into it. Sorry for bugging you with this.

Thanks,

Mark


On 9/2/09 10:24 PM, "Tom Lane"  wrote:

Mark Douglas  writes:
> I have my timezone set to GMT so there really shouldn't be any time zone 
> adjustments.

Okay ...

postgres=# set timezone = GMT;
SET
postgres=# SELECT DATE_TRUNC('MONTH', CURRENT_DATE);
       date_trunc
------------------------
 2009-09-01 00:00:00+00
(1 row)

I suspect there's something you're not telling us, like you're
using a client-side library that is doing timezone adjustments
behind your back.

regards, tom lane
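[The client-side adjustment Tom suspects - and which DBVisualizer turned out to be doing - is easy to model: a client that treats the bare timestamp as UTC and shifts it to a local offset rolls the date back. An editor's Python sketch, with a hypothetical offset chosen to reproduce the reported value:]

```python
from datetime import datetime, timedelta

server_result = datetime(2009, 9, 1, 0, 0)  # what DATE_TRUNC actually returned
local_utc_offset = timedelta(hours=-7)      # hypothetical client at UTC-7

# A naive client that reinterprets the bare timestamp as UTC and
# converts it to local time displays the previous evening instead.
displayed = server_result + local_utc_offset
print(displayed)  # 2009-08-31 17:00:00
```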



Re: [BUGS] BUG #5054: PDO -> Query returns "" from Boolean type field, if it has false value.

2009-09-14 Thread Mark Kirkwood

Tom Lane wrote:

"Yujin"  writes:
  

When I run a query on a table with boolean type fields that have a false
value, the PDO -> fetch function returns those fields not as "0" but as an
empty string.



Are you sure the field is actually false, and not null?

If so, this is a PDO bug, not a Postgres bug.

regards, tom lane

  

Does seem to be a PDO bug of some sort:

Trying out some code with Php 5.3.1-dev:

$sql = "SELECT false";
$stmt = $dbh->query($sql);
$result = $stmt->fetch(PDO::FETCH_NUM);
print("  " . $result[0] . "\n");

reproduces what Yujin is seeing, whereas replacing $sql with:

$sql = "SELECT false::int4";

gives a 0 in the result array. I guess it must be something funny with 
how PDO represents the bool type... (will have a look at the PDO code). 
But this needs to be raised on bugs.php.net.


Cheers

Mark



Re: [BUGS] BUG #5054: PDO -> Query returns "" from Boolean type field, if it has false value.

2009-09-14 Thread Mark Kirkwood

Mark Kirkwood wrote:
I guess it must be something funny with how PDO represents the bool 
type...(will have a look at the PDO code). But this needs to be raised 
on bugs.php.net.





FYI - a related bug is : http://bugs.php.net/bug.php?id=33876




Re: [BUGS] BUG #5054: PDO -> Query returns "" from Boolean type field, if it has false value.

2009-09-15 Thread Mark Kirkwood

I wrote:



Trying out some code with Php 5.3.1-dev:

$sql = "SELECT false";
$stmt = $dbh->query($sql);
$result = $stmt->fetch(PDO::FETCH_NUM);
print("  " . $result[0] . "\n");

reproduces what Yujin is seeing...


After a bit of digging through the PDO code, I see what is happening. 
The ->fetch operation correctly returns a PHP boolean from PDO (you 
can use var_dump($result[0]) to check this), but when you use print as 
above, PHP casts the boolean to string - and for whatever reason PHP 
reckons turning (boolean)false into (string)"" is the way to go. So to 
get a sensible result you need to do something like:


$result = $stmt->fetch(PDO::FETCH_NUM);
print("  " . (integer) $result[0] . "\n");

i.e: explicit cast on the result value.

This is confusing and seemingly inconsistent - for instance, the MySQL 
version of this example returns a string "0" from PDO, so it gives a 0 for 
false in a more expected/intuitive way...


regards

Mark



Re: [BUGS] BUG #5096: Error installing edb_apachephp.bin

2009-10-04 Thread Mark Kirkwood

Federico wrote:

The following bug has been logged online:

Bug reference:  5096
Logged by:  Federico
Email address:  federicoaagui...@gmail.com
PostgreSQL version: 8.4
Operating system:   OpenSuSE 11.1
Description:Error installing edb_apachephp.bin
Details: 


Hello, when i'm installing edb_apachephp.bin (./edb_apachephp.bin) it show
an error: Error running getenforce  : /bin/sh: getenforce: command not
found.

Please, help me.


  


This looks like an EnterpriseDB installation problem - I suspect you 
will get more help raising it with them (hopefully there is a link off 
http://www.enterprisedb.com for support/bugs).


The error you are seeing is because your OpenSUSE install does not have, 
or cannot find, 'getenforce' - an SELinux command. Your operating system 
may not be configured correctly for EnterpriseDB, or else your PATH does 
not include where getenforce lives.


regards

Mark






Re: [BUGS] BUG #5150: math bug

2009-11-01 Thread Mark Kirkwood

David Fetter wrote:

On Fri, Oct 30, 2009 at 08:51:57PM -0700, John R Pierce wrote:
  

Tom Lane wrote:


There is special-purpose software out there that can compute
exactly with rational numbers, but you aren't likely to find it
embedded in any general-purpose tools like databases --- the
use-case just isn't wide enough.  One reason why not is that it'll
still fall down on irrational numbers.
  
  



1/3 is a rational number.  However, it is a repeating fraction when
expressed in decimal.






The set of algebraic numbers, of which rational numbers are a proper
subset, is countable and hence has Lebesgue measure zero on the real
line.
 ;)

  


LOL - fortunately (going by the bug) he is not trying to compute a 
measure (i.e. integrate) from a set of 'em.


Mark



[BUGS] statement_timeout is not cancelling query

2009-12-14 Thread Mark Williamson
I have a few things to report, so I'm not sure whether one email is best or
several, but here goes.

We are using Postgresql 8.3.8

We were having a blocking query problem that should have been fixed by
statement_timeout = 9; however, this seems to have had zero effect.

The query we have was like so:

update articles set views=views+1 where id=7223

That's it.  Fairly simple, right?  Well, we also had a trigger function that
updates a full text index on that record whenever any value is updated.  We
have since changed this function to only update the GiST index for inserts,
or for updates where one of the indexed columns changed.  However, let's stick
with the original for a moment.  There should really have been no problem
updating the GiST index for a single row in an insert/update trigger.

So what happened is, the above update never completed and the PostgreSQL
service consumed all available memory.  We had to forcefully reboot the
machine; we turned on activity tracking and watched it do it again and again.
Luckily we were able to kill the process with the offending query before
losing the machine.

The postgresql configuration has a max of 255 connections.  The machine has
16 gigabytes of RAM and 2 quad core xeons.  We have several instances of
Postgresql running on different ports.  Our reason for doing this was to
prevent one customer's database 'instance' from impacting another customer.
A couple of years ago we had a run away query that brought the whole system
down.  So I implemented this separate instance concept and it has been
purring along great ever since, until now.

So we contacted a PG expert who was able to determine we had a corrupt full
text index and recommended rebuilding it and fixing the trigger function.
Once we rebuilt the index things worked (or are working) so far.

So we have a couple of questions:

Why is it that statement_timeout was ignored and the update statement was
allowed to run for an excessive time?
Why does PostgreSQL NOT have a maximum-allowed-memory setting?  We want to
allocate resources efficiently and cannot allow one customer to impact
others.

That's it for now.

Hope someone can provide helpful answers.

Thanks,
Mark W.


[BUGS] Optimal platform for pg?

2009-12-15 Thread Mark Williamson
What is the most ideal/optimal platform for PostgreSQL?  Linux
(which distro?), FreeBSD, Windows, etc.

Consider memory management, file system performance, threading model,
etc.




Re: [BUGS] BUG #5244: Attempting to rollback to a savepoint after receiving an error with state 55000 the process hangs

2009-12-16 Thread Mark Kirkwood

Philip Graham wrote:

The following bug has been logged online:

Bug reference:  5244
Logged by:  Philip Graham
Email address:  phi...@lightbox.org
PostgreSQL version: 8.3.8
Operating system:   Linux
Description:Attempting to rollback to a savepoint after receiving an
error with state 55000 the process hangs
Details: 


This may be a PHP issue, so please excuse me if it is.

$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

echo 'Creating test objects'."\n";
$pdo->query('CREATE SEQUENCE test_seq');

echo 'Setup complete'."\n";
$pdo->beginTransaction();

try {
echo 'Setting savepoint'."\n";
$pdo->query('SAVEPOINT pre_id_fetch');
echo 'Fetching value'."\n";
$stmt = $pdo->query('SELECT currval(\'test_seq\');');
$curId = $stmt->fetchColumn();
echo 'Releasing savepoint'."\n";
$pdo->query('RELEASE SAVEPOINT pre_id_fetch');
} catch (PDOException $e) {
echo 'Rolling back'."\n";
$pdo->query('ROLLBACK TO pre_id_fetch');
$curId = 0;
}

echo 'Cur Id: ',$curId,"\n";


Running this code, it hangs after echoing 'Rolling back' - but it only hangs
every other execution (assuming the sequence was deleted first).
  


I think you need to use $pdo->exec instead of $pdo->query for 
everything *except* the SELECT operation.  The query method is really 
only intended for statements returning rows. Making the indicated 
changes stops the hang for me (PHP 5.3.2).


regards

Mark




[BUGS] remove

2009-12-30 Thread Mark Williamson



Re: [BUGS] BUG #5268: PQgetvalue incorrectly returns 0

2010-01-07 Thread Mark Kirkwood

Mike Landis wrote:
Pick a database and table that exists, configure the string 
constants, compile and run the attached cpp, and you get 0 instead of 
the 1 that you get in pgAdmin...


Where can I download the libpq source?  Maybe I can find and/or fix 
the problem myself.


Your program works fine for me (apart from minor datatype changes to 
port to Linux). I used user=postgres, database=regression and 
pgtable='tenk1' (schema from the regression test suite). As Tom 
mentioned, your user might not have access to the table you are using - 
try the superuser account (typically 'postgres') to eliminate this 
possibility.


With respect to the libpq source, it is in the source tarball from the 
PostgreSQL website (directory src/interfaces/libpq).


regards

Mark




Re: [BUGS] BUG #5268: PQgetvalue incorrectly returns 0 (corrected)

2010-01-07 Thread Mark Kirkwood

(I forgot to cc -bugs...)

Mike Landis wrote:


Two things strike me as odd about that...
1) What's the logic behind the owner of a table not automatically 
getting a readonly privilege like SELECT?


Owner always has select on a table they have created.

2) I think it would be more logical to refuse to return a NULL result 
from PQexec() than to return a result with the proper number of tuples 
and columns, but inaccurate contents.


COUNT always returns a number (never null). Recall that you are selecting
from the information schema table.



It's also strange that Linux and Vista would produce different answers.


I suspect they do not. It's all in the permissions.

Cheers

Mark




Re: [BUGS] BUG #5268: PQgetvalue incorrectly returns 0

2010-01-07 Thread Mark Kirkwood

Mike Landis wrote:

At 09:09 PM 1/7/2010, you wrote:



I suspect they do not. Its all in the permissions.


There's no user account control enabled on this Vista machine, so it is 
effectively wide open - hence the different platform behavior, or at 
least the difference between the behavior in pgAdmin and the client program.




The lack of OS-level permissions is not relevant to this issue - I was 
referring to database-level users and their permissions on tables. It 
does look like you are somehow running your C program as a different 
(db) user from the one you are using in pgAdmin. As Robert suggested, try 
doing 'SELECT user' in both.


Also note that pgAdmin uses PQexec and PQgetvalue...

Cheers

Mark




[BUGS] Foreign key constaint can be broken

2010-01-26 Thread Mark Kazemier
Dear,

I found a way to break a foreign key constraint in PostgreSQL. I discussed
this bug on a Dutch discussion board, and multiple people reproduced it on
multiple versions.

When I create the following tables:


CREATE TABLE a
(
id SERIAL NOT NULL,
foo CHAR(100) NOT NULL,
PRIMARY KEY(id)
);

CREATE TABLE b
(
id INT NOT NULL,
bar CHAR(20) NOT NULL,
PRIMARY KEY(id),
FOREIGN KEY(id) REFERENCES a(id) ON DELETE CASCADE
);

and add the following rule to table b:

CREATE OR REPLACE RULE delete_b AS ON DELETE
TO b
DO INSTEAD
DELETE FROM a WHERE OLD.id = id;

When you try to delete a row from table b:
DELETE FROM b WHERE id = 1;
the record from table a disappears, but the record in table b is still
there. Of course this is a very contrived construction, but I would expect
some kind of error or warning message instead. As it stands, it is possible
to corrupt your data.

Best regards,
Mark Kazemier


Re: [BUGS] BUG #5312: I NEED HELP

2010-02-04 Thread Mark Kirkwood

Alvaro Herrera wrote:

Daniel J. Baldev escribió:
  

All I want to do is to delete a database, but I don't know how to
actually input the dropdb command and what other stuff I need to
open...can you help? I think my problem will be very simple for
someone who understands this



Are you using pgAdmin?  If so, just right-click on the database and
select "delete/drop".

  
Probably too late to be mentioning this... but Daniel, are you sure that 
the database is not needed by anything?


regards

Mark



[BUGS] Cluster fails

2000-05-10 Thread Mark Kirkwood

Your name :Mark Kirkwood
Your email address  :[EMAIL PROTECTED]


System Configuration
-
  Architecture (example: Intel Pentium) :Intel Pentium

  Operating System (example: Linux 2.0.26 ELF)  :Linux 2.2.14-5.0 ELF
(Redhat 6.2)

  PostgreSQL version (example: PostgreSQL-6.5.1):7.0RC4

  Compiler used (example:  gcc 2.8.0)   :egcs-2.91.66


Please enter a FULL description of your problem:

command  "cluster indexname on tablename" fails with text
"ERROR: temp123456abc is an index relation".
tablename and indexname are missing from pg_class, but
tablename still exists in the data directory.


Please describe a way to repeat the problem.   Please try to provide a
concise reproducible example, if at all possible:
--

create table testit ( id integer);
create index testit_i on testit( id );
cluster testit_i on testit;

If you know how this problem might be fixed, list the solution below:
-




[BUGS] select cash_out('2'); crashes backend on 7.0.2

2000-12-12 Thread Mark Stosberg


Hello,

 I was just experimenting, trying to see if I could find a function that
would format a numeric value like 'money' with Postgres 7.0.2. Here's
what happened:

##
cascade=> select cash_out(2);
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
We have lost the connection to the backend, so further processing is
impossible.  Terminating.
##

The same thing happened with Postgres 6.5.3. Here's my full version:
PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.96 

I'm not sure if what I tried is even valid input, but I'm guessing this is
not a desired result in any case. :) 

Thanks for the great software and good luck with this!

A frequent Postgres user,

  -mark

personal website }  Summersault Website Development
http://mark.stosberg.com/{  http://www.summersault.com/



[BUGS] possible mis-handling of nulls in views in 7.0.2

2001-03-12 Thread Mark Stosberg


Hello,

   I'm running Postgres 7.0.2 and have run into a curious situation. I
got back a null value in a select on a VIEW that is defined as not
allowing that column to be null. Here's a screenshot:

marvel=> \d shipments;
View= shipments
Query   = SELECT "web_data"."shipment_id", "web_data"."order_id",
"web_data"."customer_id", "web_data"."purchase_order_num", "web_data"."actual_ship
_date", "web_data"."pro_num", "sum"("web_data"."qt_ordered") AS
"qt_ordered", "sum"("web_data"."qt_shipped") AS "qt_shipped" FROM
"web_data" WHERE (
"web_data"."shipment_id" NOTNULL) GROUP BY "web_data"."shipment_id",
"web_data"."order_id", "web_data"."customer_id", "web_data"."actual_ship_date",
 "web_data"."pro_num", "web_data"."purchase_order_num";
+--------------------+-----------+--------+
|       Field        |   Type    | Length |
+--------------------+-----------+--------+
| shipment_id        | varchar() |     32 |
| order_id           | varchar() |    100 |
| customer_id        | varchar() |     10 |
| purchase_order_num | varchar() |    100 |
| actual_ship_date   | date      |      4 |
| pro_num            | varchar() |    100 |
| qt_ordered         | float8    |      8 |
| qt_shipped         | float8    |      8 |
+--------------------+-----------+--------+
marvel=>

### Notice that the shipment_id is NOTNULL
### now watch:

marvel=> select * from shipments where shipment_id is null;
shipment_id|order_id|customer_id|purchase_order_num|actual_ship_date|pro_num|qt_ordered|qt_shipped
-----------+--------+-----------+------------------+----------------+-------+----------+----------
           |        |           |                  |                |       |          |
(1 row)
#
It returns a row with a null shipment id!

I'm not sure what's happening here-- I tried to simplify this to a
simple case, but I couldn't reproduce the bug. Oddly, this null row
doesn't seem to appear in the table web_data that the view references. I
think it's easy enough to work around, but I'm curious what might be
happening here. 

Thanks,

  -mark

personal website }  Summersault Website Development
http://mark.stosberg.com/{  http://www.summersault.com/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [BUGS] possible mis-handling of nulls in views in 7.0.2

2001-03-12 Thread Mark Stosberg

Tom,

  I tried to build a simple test case and I couldn't reproduce either.
I'm still actively working with that database, though-- If I run into it
again, and can reproduce a condensed case, I'll definitely submit it. It
sounds like views are being improved a good deal in 7.1, so perhaps my
oddity would be repaired anyway. Thanks for checking it out.

   -mark

http://mark.stosberg.com/


Tom Lane wrote:
> 
> Mark Stosberg <[EMAIL PROTECTED]> writes:
> >I'm running Postgres 7.0.2 and have run into a curious situation. I
> > got a back a null value in a select on VIEW that is defined as not
> > allowing that column to be null.
> 
> I think this is an artifact of the curious (not to say broken)
> implementation of views pre-7.1.  However, it's hard to tell for sure
> because I can't reproduce your problem.  Are you sure you are running
> 7.0.2 and not something older?  Can you provide a self-contained
> example?  My test went like this:
> 
> play=> select version();
>  version
> --
>  PostgreSQL 7.0.2 on hppa2.0-hp-hpux10.20, compiled by gcc 2.95.2
> (1 row)
> 
> play=> create table foo (f1 int, f2 int);
> CREATE
> play=> insert into foo values(1,2);
> INSERT 873546 1
> play=> insert into foo values(1,3);
> INSERT 873547 1
> play=> insert into foo values(2,4);
> INSERT 873548 1
> play=> insert into foo values(2,5);
> INSERT 873549 1
> play=> create view v2 as select f1,sum(f2) from foo where f1 notnull group by f1;
> CREATE 873571 1
> play=> select * from v2 ;
>  f1 | sum
> +-
>   1 |   5
>   2 |   9
> (2 rows)
> 
> play=> select * from v2 where f1 isnull;
>  f1 | sum
> +-
> (0 rows)
> 
> regards, tom lane

-- 

http://mark.stosberg.com/




[BUGS] Using nulls with earthdistance operator crashes backend

2001-07-31 Thread Mark Stosberg


Hello!

Here's how to reproduce my bug:
* Start with Postgres 7.1.2
(specifically:  PostgreSQL 7.1 on i386--freebsd4.3, compiled by GCC 2.95.3
)
* Install earthdistance operator from the contrib directory.
* try this:
cascade=> select null <@> '1,1'::point;

## The result I get:
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

###

I expected simply for "null" to be returned as the result. I can work
around this by including an extra step to make sure that my data is not
null before it's passed to a SQL statement like this one.

  Thanks!

   -mark

http://mark.stosberg.com/






Re: [BUGS] Using nulls with earthdistance operator crashes backend

2001-07-31 Thread Mark Stosberg


I'll give it a shot and post the patch or let you know I'm stuck. :) Thanks!

  -mark 

Tom Lane wrote:
> 
> Mark Stosberg <[EMAIL PROTECTED]> writes:
> > * Install earthdistance operator from the contrib directory.
> > * try this:
> > cascade=> select null <@> '1,1'::point;
> 
> > ## The result I get:
> > pqReadData() -- backend closed the channel unexpectedly.
> 
> Probably the earthdistance functions are not NULL-safe and need to be
> marked "isStrict" in CREATE FUNCTION.  Would you be willing to do the
> legwork on working up a patch for that?
> 
> regards, tom lane
> 

-- 

http://mark.stosberg.com/




Re: [BUGS] Using nulls with earthdistance operator crashes backend (patch)

2001-08-20 Thread Mark Stosberg

> Tom Lane wrote:
> >
> > Mark Stosberg <[EMAIL PROTECTED]> writes:
> > > * Install earthdistance operator from the contrib directory.
> > > * try this:
> > > cascade=> select null <@> '1,1'::point;
> >
> > > ## The result I get:
> > > pqReadData() -- backend closed the channel unexpectedly.
> >
> > Probably the earthdistance functions are not NULL-safe and need to be
> > marked "isStrict" in CREATE FUNCTION.  Would you be willing to do the
> > legwork on working up a patch for that?

Tom,

Here's a patch using "isstrict":


--- earthdistance.sql.in.org	Thu Aug 16 17:08:19 2001
+++ earthdistance.sql.in	Thu Aug 16 17:09:01 2001
@@ -3,7 +3,8 @@

 DROP FUNCTION geo_distance (point, point);
 CREATE FUNCTION geo_distance (point, point) RETURNS float8
-  AS 'MODULE_PATHNAME' LANGUAGE 'c';
+  AS 'MODULE_PATHNAME' LANGUAGE 'c'
+  WITH (isstrict);

 SELECT geo_distance ('(1,2)'::point, '(3,4)'::point);
#

Now when I run the "crasher" SQL above, I get one empty row back:

sumsault_test=# select null <@> '1,1'::point;
 ?column?
----------

(1 row)
#
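
[Editor's note: the WITH (isstrict) clause above is the 7.x-era spelling. In
current PostgreSQL releases the same NULL-safety is declared with the STRICT
keyword; a sketch of the equivalent modern definition (not the contrib
module's actual current source):

```sql
-- STRICT means the function is never even called when any argument is
-- NULL; the result is simply NULL, so the C code need not handle nulls.
CREATE FUNCTION geo_distance (point, point) RETURNS float8
  AS 'MODULE_PATHNAME'
  LANGUAGE C IMMUTABLE STRICT;

-- With the function marked strict, the original "crasher" returns a
-- single NULL row instead of taking down the backend:
SELECT NULL <@> '1,1'::point;
```
]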

I look forward to seeing you at the Open Source Database Summit!

  -mark

 . . . . . . . . . . . . . . . . . . . . . . . . . .
   Mark Stosberg  Principal Developer  
   [EMAIL PROTECTED]   Summersault, LLC 
   v: 765-939-9301 ext 223website development  
 . . . . . http://www.summersault.com/ . . . . . . .




[BUGS] Error Restoring Large Database

2002-01-24 Thread Mark Rae


System Configuration
--------------------
  Architecture   : Dual AMD Athlon MP 1800+

  Operating System   : Linux dpdev02 2.4.4-64GB-SMP #1 SMP

  PostgreSQL version : PostgreSQL-7.2b4 and PostgreSQL-7.2rc1

  Compiler used  : gcc version 2.95.3 20010315 (SuSE)


Please enter a FULL description of your problem:


When trying to restore from a dump of a 9Gb database, the restore fails
(see below for info). The same dump/restore procedure has worked
successfully for smaller databases (~1-2Gb).

The relevant error message appears to come from
src/backend/storage/freespace/freespace.c
which says

/* Shouldn't get here given the initial if-test for space available */
elog(ERROR, "insert_fsm_page_entry: failed to insert entry!");



The database tables total 9.0Gb with associated indexes taking up
10.0Gb.
There is >25G free disk space, and there is no error in the
system logs.

I have tried other variations on the examples below, but the
restore still fails. Apparently _in the same place_ each time.
I also tried rc1 which behaves in the same way.


Please describe a way to repeat the problem.   Please try to provide a
concise reproducible example, if at all possible:
--

The database is dumped using
$ pg_dump -b -Fc core4 > core4.dump

And then restored using
$ pg_restore --dbname=core4 < core4.dump

Which exits with error code 1 and the following message
pg_restore: [archiver (db)] error returned by PQputline

The logfile contains the errors
ERROR:  copy: line 26501638, insert_fsm_page_entry: failed to insert entry!
FATAL 1:  Socket command type 2 unknown


Retrying using an INSERT rather than COPY dump
$ pg_dump --inserts -b -Fc core4 > core4.dump

gives these errors when trying to restore

pg_restore: [archiver (db)] could not execute query: ERROR:  insert_fsm_page_entry: 
failed to insert entry!

and

ERROR:  insert_fsm_page_entry: failed to insert entry!

in the logfile



-Mark

-- 
Mark Rae   Tel: +44(0)20 7074 4648
Inpharmatica   Fax: +44(0)20 7074 4700
[EMAIL PROTECTED]    http://www.inpharmatica.co.uk/




Re: [BUGS] Error Restoring Large Database

2002-01-25 Thread Mark Rae

Tom Lane wrote:
> 
> Mark Rae <[EMAIL PROTECTED]> writes:
> > ERROR:  copy: line 26501638, insert_fsm_page_entry: failed to insert entry!
> 
> Ugh.  I think I see the problem: silly oversight in this routine.
> Please apply the attached patch and see if it fixes the failure.
> 

Yup, that seems to have fixed the problem. 
I have tried it twice to be sure. :-)

I actually hand patched 7.2b4 as the patch was (I assume) against CVS.
I can try the current CVS version if you want to be really sure.

Thanks.

-Mark




[BUGS] PostgresSQL 7.1.3 not parsing SELECT portion of SQL correctly

2002-02-01 Thread Mark Jeacocke


POSTGRESQL BUG REPORT TEMPLATE



Your name  : Mark Jeacocke
Your email address : [EMAIL PROTECTED]


System Configuration
--------------------
  Architecture (example: Intel Pentium)   : AMD Athlon XP  1600

  Operating System (example: Linux 2.0.26 ELF)  : RedHat Linux 7.0 Kernel
2.4.2

  PostgreSQL version (example: PostgreSQL-7.1.3):   PostgreSQL-7.1.3

  Compiler used (example:  gcc 2.95.2)  : Downloaded official RPMs from
Postgres mirror


Please enter a FULL description of your problem:

Seems to be a parsing problem in the select portion of the SQL statement:

select DISTINCT TABLEID, '' AS ALIAS from TABLE; :FAILS
select DISTINCT TABLEID, 'test' AS ALIAS from TABLE; :FAILS
select DISTINCT TABLEID, Null AS ALIAS from TABLE; :FAILS
select DISTINCT TABLEID, trim('') AS ALIAS from TABLE; :SUCCEEDS
select DISTINCT TABLEID, 2 AS ALIAS from TABLE; :SUCCEEDS

I believe all five queries should succeed.


Please describe a way to repeat the problem.   Please try to provide a
concise reproducible example, if at all possible:
--

See above.



If you know how this problem might be fixed, list the solution below:
-









Re: [BUGS] postmaster 8.2 eternally hangs in sempaphore lock acquiring

2007-04-02 Thread Mark Shuttleworth
Martin Pitt wrote:
> Hi Tom, hi Mark,
>
> Tom, thank you for having a look into this!
>
> Tom Lane [2007-03-29 13:49 -0400]:
>   
>> Martin Pitt <[EMAIL PROTECTED]> writes:
>> 
>>> https://launchpad.net/bugs/93042 has symbolic gdb backtraces of all
>>> three processes that are involved.
>>>   
>> Are these really all the processes involved?  The createdb process and
>> the autovac process are both waiting for someone else to give up the
>> BtreeVacuumLock, but that is never held for any long period, and it's
>> certainly not held by the guy trying to do InitPostgres.
>> 
>
> There are more processes, unfortunately I don't have backtraces of
> them. I got this from my IRC log:
>
> 15928 ?Ss 0:00 postgres: mark launchpad_ftest_template [local] 
> CREATE DATABASE
> 15956 ?Ss 0:00 postgres: session session_dev [local] idle
> 15957 ?Ss 0:00 postgres: launchpad launchpad_dev [local] idle
> 15958 ?Ss 0:00 postgres: session session_dev [local] idle
> 15969 ?Ss 0:00 postgres: launchpad launchpad_dev [local] idle
> 16014 ?Ss 0:00 postgres: launchpad launchpad_dev [local] idle
> 16273 ?Ss 0:00 postgres: mark launchpad_ftest_template [local] 
> startup waiting
>
>   
>> I believe that the guy trying to do InitPostgres is blocked by the
>> createdb process --- it looks like he's trying to attach to the same 
>> DB being used as a template for the createdb, and as of 8.2 we lock out
>> new entries to a template DB until the copy is complete.
>>
>> It's possible that this is not a deadlock per se, but the aftermath of
>> someone having errored out without releasing the BtreeVacuumLock --- but
>> I don't entirely see how that could happen either, at least not without
>> a core dump scenario.
>>
>> Is there anything in the postmaster log when this happens?  Errors out
>> of _bt_start_vacuum would be particularly interesting...
>> 
>
> I believe Mark's postgres runs with fully verbose logging. Mark, can you 
> please
> have a look?
>   

Yes, we run with lots of logging; what can I send you that would help?
In this case I think I've probably drowned the relevant stuff in noise,
but assuming the autovacuum=off change does not stop it from happening
again, I will know what to send you when it re-occurs.

This sort of "getting stuck" has been happening pretty consistently for
me with our test suite and pg8.2. The test suite does a lot of DB
creation, hammering, tearing down and then creation again.

Mark


[BUGS] BUG #3847: plpython trigger caches table structure - doesn't see new / changed columns

2007-12-31 Thread Mark Reid

The following bug has been logged online:

Bug reference:  3847
Logged by:  Mark Reid
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2.5
Operating system:   Debian Linux
Description:plpython trigger caches table structure - doesn't see
new / changed columns
Details: 

If a column is added, dropped, then re-added (all within a transaction), a
plpython trigger function loses track of the column and throws an error when
trying to access it.  Here is the best minimal test case I could come up
with:
--- TEST 1 ---
BEGIN;
CREATE TABLE clarence (pick_id int, foo varchar, bar varchar, baz varchar);
INSERT INTO clarence (pick_id) values (1454);
INSERT INTO clarence (pick_id) values (1453);

CREATE OR REPLACE FUNCTION debug_columns()
  RETURNS "trigger" AS
$BODY$
 
if TD["event"].upper() == "UPDATE":
   plpy.notice('test4: %s' % (TD["new"]["test4"]))
 
return "OK"
 
$BODY$
  LANGUAGE 'plpythonu' VOLATILE;
 
CREATE TRIGGER clarence_debug_trg
  AFTER UPDATE
  ON clarence
  FOR EACH ROW
  EXECUTE PROCEDURE debug_columns();
 
 
-- This works
alter table clarence add column test4 varchar;
update clarence set test4=12 where pick_id=1454;
alter table clarence drop column test4;

-- This does not work
alter table clarence add column test4 varchar;
update clarence set test4=12 where pick_id=1454; -- this creates a problem... plpgsql seems to work fine.
alter table clarence drop column test4;

ROLLBACK;
-- END TEST 1 --


Here is another test case that may come in handy (it enumerates the names of
all the columns in the "new" record):

--- TEST 2 ---
BEGIN;
CREATE TABLE clarence (pick_id int, foo varchar, bar varchar, baz varchar);
INSERT INTO clarence (pick_id) values (1454);
INSERT INTO clarence (pick_id) values (1453);

CREATE OR REPLACE FUNCTION debug_columns()
  RETURNS "trigger" AS
$BODY$
 
if TD["event"].upper() == "UPDATE":
   for key, val in TD["new"].iteritems():
  plpy.notice('%s = [%s]' % (key, val))
 
return "OK"
 
$BODY$
  LANGUAGE 'plpythonu' VOLATILE;
 
CREATE TRIGGER clarence_debug_trg
  AFTER UPDATE
  ON clarence
  FOR EACH ROW
  EXECUTE PROCEDURE debug_columns();
 
 
-- This works
alter table clarence add column test4 varchar;
update clarence set test4=12 where pick_id=1454;
alter table clarence drop column test4;

-- This does not work
--alter table clarence add column test4 varchar;
--update clarence set test4=12 where pick_id=1454; -- this creates a problem... plpgsql seems to work fine.
--alter table clarence drop column test4;

-- This works
alter table clarence add column test5 varchar;
update clarence set test5=12 where pick_id=1454;
alter table clarence drop column test5;

ROLLBACK;
--- END TEST 2 ---


I would be willing to take a stab at fixing this, but would need someone
more experienced to give me some pointers as to how to go about it (i've
never looked at the PG source).

-Mark.



Re: [BUGS] BUG #3847: plpython trigger caches table structure - doesn't see new / changed columns

2008-01-01 Thread Mark Reid
The trigger function does not recognize the "test4" column the second time
it is added - the update throws an error.

On Jan 1, 2008 11:17 AM, Tom Lane <[EMAIL PROTECTED]> wrote:

> "Mark Reid" <[EMAIL PROTECTED]> writes:
> > If a column is added, dropped, then re-added (all within a transaction),
> a
> > plpython trigger function loses track of the column and throws an error
> when
> > trying to access it.  Here is the best minimal test case I could come up
> > with:
>
> The cases you are saying work and don't work are exactly the same:
>
> > -- This works
> > alter table clarence add column test4 varchar;
> > update clarence set test4=12 where pick_id=1454;
> > alter table clarence drop column test4;
>
> > -- This does not work
> > alter table clarence add column test4 varchar;
> > update clarence set test4=12 where pick_id=1454; -- this creates a
> > problem...  plpgsql seems to work fine.
> > alter table clarence drop column test4;
>
> Please be clearer.
>
>regards, tom lane
>


Re: [BUGS] BUG #3847: plpython trigger caches table structure - doesn't see new / changed columns

2008-01-02 Thread Mark Reid
Works perfectly on my test case.  Thanks!

On Jan 1, 2008 8:07 PM, Tom Lane <[EMAIL PROTECTED]> wrote:

> "Mark Reid" <[EMAIL PROTECTED]> writes:
> > The trigger function does not recognize the "test4" column the second
> time
> > it is added - the update throws an error.
>
> Try this patch:
> http://archives.postgresql.org/pgsql-committers/2008-01/msg00016.php
>
>regards, tom lane
>


Re: [BUGS] BUG #3847: plpython trigger caches table structure - doesn't see new / changed columns

2008-01-02 Thread Mark Reid
We've run our real-life test cases and they've all worked properly using
this patch too.  Thanks again.
-Mark.

On Jan 1, 2008 8:07 PM, Tom Lane <[EMAIL PROTECTED]> wrote:

> "Mark Reid" <[EMAIL PROTECTED]> writes:
> > The trigger function does not recognize the "test4" column the second
> time
> > it is added - the update throws an error.
>
> Try this patch:
> http://archives.postgresql.org/pgsql-committers/2008-01/msg00016.php
>
>regards, tom lane
>


Re: [BUGS] BUG #3833: Index remains when table is dropped

2008-01-03 Thread Mark Kirkwood
I encountered this bug recently - and thought I'd have a try at seeing 
what might fix it.


Taking an exclusive lock on the to-be-dropped table immediately (i.e. in
RemoveRel) seems to be enough to prevent the drop starting while an
index is being created in another session. So it "fixes" the issue -
possible objections that I can think of are:


1/ Not a general solution to multi session dependent drop/create of 
objects other than tables (unless we do 2/)
2/ Using this approach in all object dropping code may result in 
deadlocks (but is this worse than dangling/mangled objects?)


Now, I'm conscious that there could be other show stopper reasons for 
*not* doing this that I have not thought of, but figured I'd post in 
case the idea was useful. Thoughts?


Cheers

Mark
*** src/backend/commands/tablecmds.c.orig	Wed Jan  2 13:58:05 2008
--- src/backend/commands/tablecmds.c	Wed Jan  2 13:46:43 2008
***
*** 514,519 
--- 514,522 
  	object.objectId = relOid;
  	object.objectSubId = 0;
  
+ 	/* Try a lock here! */
+ 	LockRelationOid(relOid, ExclusiveLock);
+ 
  	performDeletion(&object, behavior);
  }
  



[BUGS] BUG #4059: Vacuum full not always cleaning empty tables

2008-03-25 Thread Mark Steben

The following bug has been logged online:

Bug reference:  4059
Logged by:  Mark Steben
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.2.5
Operating system:   Linux redhat
Description:Vacuum full not always cleaning empty tables
Details: 

I sent an email to 'pgsql-bugs@postgresql.org' entitled
 'Possible bug with VACUUM FULL' and also an addendum
But neglected to also send this bug report form.
You can refer to those emails; in a nutshell, I have found that VACUUM FULL
will not clean dead tuples out of an empty table when other queries are
accessing the database. Test plans/scenarios are included.

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] BUG #4059: Vacuum full not always cleaning empty tables

2008-03-26 Thread Mark Steben
Yes, there are transactions that predate the start time of the vacuum full.
However those transactions do not touch the table being vacuumed, they have 
nothing to do with it.  Is this the expected behavior?

Mark Steben
Senior Database Administrator
@utoRevenue™
A Dominion Enterprises Company
480 Pleasant Street
Suite B200
Lee, MA 01238
413-243-4800 Home Office 
413-243-4809 Corporate Fax
[EMAIL PROTECTED]

Visit our new website at 
www.autorevenue.com
 


-Original Message-
From: Alvaro Herrera [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 25, 2008 9:24 PM
To: Mark Steben
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #4059: Vacuum full not always cleaning empty tables

Mark Steben wrote:

> I sent an email to 'pgsql-bugs@postgresql.org' entitled
>  'Possible bug with VACUUM FULL' and also an addendum
> But neglected to also send this bug report form.
> You can refer to those emails; in a nutshell, I have found that VACUUM
FULL
> will not clean dead tuples out of an empty table when other queries are
> accessing the database. Test plans/scenarios are included.

Yes, if other sessions have open transactions whose start time predate
the start time of the transaction that deletes of tuples, those tuples
cannot be removed by VACUUM.  This is known and expected.  Is this
what's happening here?

-- 
Alvaro Herrera    http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.




Re: [BUGS] BUG #3833: Index remains when table is dropped

2008-04-28 Thread Mark Kirkwood

Bruce Momjian wrote:

The comment I have from Tom Lane on this patch is:

band-aid solution to just one aspect of problem ...

so I am afraid I am going to have to reject it.  Sorry.




No problem, thanks for passing along the feedback - I was primarily
interested in that (as I figured there was probably a reason why this
had not been tried!).

Best wishes

Mark




[BUGS] 7.3b2 initdb fails with semaphore error

2002-10-01 Thread Mark Stosberg


Hello,

I just tried installing 7.3b2 on FreeBSD 4.6.2 and received the
following error:


root@asana> su -l pgsql -c initdb
The files belonging to this database system will be owned by user "pgsql".
This user must also own the server process.

The database cluster will be initialized with locale C.

creating directory /usr/local/pgsql/data... ok
creating directory /usr/local/pgsql/data/base... ok
creating directory /usr/local/pgsql/data/global... ok
creating directory /usr/local/pgsql/data/pg_xlog... ok
creating directory /usr/local/pgsql/data/pg_clog... ok
creating template1 database in /usr/local/pgsql/data/base/1... IpcSemaphoreCreate: semget(key=1, num=17, 03600) failed: No space left on device

This error does *not* mean that you have run out of disk space.

It occurs when either the system limit for the maximum number of
semaphore sets (SEMMNI), or the system wide maximum number of
semaphores (SEMMNS), would be exceeded.  You need to raise the
respective kernel parameter.  Alternatively, reduce PostgreSQL's
consumption of semaphores by reducing its max_connections parameter
(currently 32).

The PostgreSQL Administrator's Guide contains more information about
configuring your system for PostgreSQL.


initdb failed.
###

I think the bug here is that this message advertises that I can change the
"max_connections" parameter to address this, but there does not appear to be a way
to do this.  The max_connections option is located on my system at
/usr/local/share/postgresql/postgresql.conf.sample (prior to initdb)

However, initdb appears to only copy the file and not actually
use its parameters.

If it /is/ possible to adjust max_connections prior to or during the "initdb"
stage, I think there is a documentation bug -- the location of that
documentation should be spit out along with the above error message, along
with being accessible through "initdb --help" and perhaps elsewhere.

I look forward to another great release. Thanks!

-mark

http://mark.stosberg.com/





Re: [BUGS] 7.3b2 initdb fails with semaphore error (solved)

2002-10-02 Thread Mark Stosberg

On Tue, 1 Oct 2002, Tom Lane wrote:

> Mark Stosberg <[EMAIL PROTECTED]> writes:
> > I think the bug here is that this message advertises that I can change
> > the "max_connections" parameter to address this, but there does not
> > appear to be a way to do this.  The max_connections option is located
> > on my system at /usr/local/share/postgresql/postgresql.conf.sample
> > (prior to initdb)
>
> Hmm ... actually, I don't think the max_connections parameter is used
> during standalone operation.  It looks like the code uses a hardwired
> value of "16".  We could reduce that (there's probably no good reason
> why it's not "1"), but I suspect your SEMMAX parameter is so small
> it will fail anyway :-(
>
> Would you try changing "16" to "1" in InitCommunication() in
> src/backend/utils/init/postinit.c, and see if that helps on your
> setup?

Tom,

I tried this change, and was able to successfully "initdb" after that,
and then run "psql" after that. I'm running this installation on a home
machine for light use, so I may not need 16 backends anyway.

If you're correct that there is no need to have more than 1 backend
during "initdb", then perhaps this could be turned into a patch. My
simple patch is below:

 -mark

http://mark.stosberg.com/



--- postinit.c.orig Wed Oct  2 12:56:13 2002
+++ postinit.c  Wed Oct  2 12:56:42 2002
@@ -176,7 +176,7 @@
 * postmaster.  Create private "shmem" and semaphores.  Setting
 * MaxBackends = 16 is arbitrary.
 */
-   CreateSharedMemoryAndSemaphores(true, 16, 0);
+   CreateSharedMemoryAndSemaphores(true, 1, 0);
}
 }









[BUGS] Problem with building PostgreSQL 7.4 from source

2003-11-28 Thread Mark Iszler
I am having some difficulty with building PostgreSQL 7.4 from source code.
I am not using any special options -- just running ./configure and gmake
(using GNU Make version 3.79.1).  I am using an x86 based system running
Slackware Linux 8.1 (kernel version 2.4.18).
Running the regression tests with 'gmake check' produces the following 
messages:
 == creating temporary installation==
 == initializing database system   ==
 == starting postmaster==

About 45 seconds later the following messages are displayed:
 running on port 65432 with pid 3192
 == creating database "regression" ==
 
 /pgsrc/postgresql-7.4/src/test/regress/./tmp_check/install//usr/local/pgsql/bin/createdb: relocation error: /pgsrc/postgresql-7.4/src/test/regress/./tmp_check/install//usr/local/pgsql/bin/createdb: undefined symbol: get_progname

There are two issues here:
 1. The long pause after the "starting postmaster" message.
 2. The error messages from createdb.
It appears that createdb was copied from postgresql-7.4/src/bin/scripts
for the temporary installation.  I noticed that this is now a compiled C
program instead of a shell script like in version 7.3.4.  I believe that
it was not properly linked during the build.  In fact, none of the
programs in this directory appear to be properly linked.  They all
produce the same kind of error message.

To see if I was on the right track, I modified the Makefile in the
scripts directory, explicitly adding "$(libpq_srcdir)/path.o" as a
parameter to the "$(CC)" compilation command.  After deleting all of the
executable programs from the scripts directory and running "gmake" to
rebuild them, I tried the regression tests (gmake check) again.  The
"relocation error" and "undefined symbol" errors went away.  However,
now a different error message was displayed and the regression tests
were aborted.  (I am away from that computer as I write this and I don't
remember the error message.  I will send more info on that later.)

I hope this all helps.  Let me know if you need any more info.

Mark Iszler

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[BUGS] SELECT FOR UPDATE differs inside and outside a pl/pgsql function (7.4)

2003-12-17 Thread Mark Shewmaker

Should there be a difference between the behavior of a "select for
update" typed into psql directly versus "select into variable_name for
update" done within a function?

In other words:  Is this a bug or a user misunderstanding:

1.  Run the following commands to set up a table called
mytable and a function called myfunction:

--

create table mytable (a int);
insert into mytable values (1);
insert into mytable values (2);
insert into mytable values (3);

create or replace function myfunction() returns integer as '
DECLARE
myrow mytable%ROWTYPE;
BEGIN
LOOP
  select * into myrow from mytable limit 1 for update;
  if found then exit;
  end if;
END LOOP;
return myrow.a;
end; ' language 'plpgsql';
--

2.  Then open up two psql sessions and run the following commands:

+-+--+
| psql Session A  | psql Session B   |
+-+--+
|begin transaction;   |  |
| | begin transaction;   |
|select * from mytable limit 1 for update;|  |
| | select myfunction(); |
|delete from mytable where a=1;   |  |
|commit;  |  |
+-+--+

Session B's "select myfunction();" will hang, and it will continue to
hang even after session A commits.  Is this expected behavior?

There are two ways to have session B not hang after the commit:

1.  Don't do the "delete from mytable where a=1;".

Session B's "select myfunction();" will then return after
Session A commits, and with a value of 1.

Or,

2.  Instead of running "select myfunction();" in Session B,
run two manual "select * from mytable limit 1 for update;"s.

The first manual select-for-update will hang until Session A's
transaction commits, after which the second manual select-for-update
in session B will succeed.

This one really confuses me--should a function not be able
to find a row when a manual command can?

So I guess I'm curious as to:

1.  Whether this is a bug or not.  I'm guessing yes.

(I expected the multiple select-for-update attempts in the
function to the same behavior as multiple select-for-update's
done manually.  That is, I expected both types of B's
select-for-updates selecting locked rows to hang until session A's
commit, immediately fail to find any row, and then succeed on the
next try.  It would be nice if session B's first select were to
have transparently succeeded on the a=2 row, something which I think
would be a legal thing to happen, but as it's not what the
documentation implies would happen I didn't expect that.  In any
event I did not expect the select-for-update within pgsql to
continually fail to find a row.)

2.  If there's a better way to have multiple transactions lock rows
with select-for-update without any chance of the transaction
erroring out, (such as would occur with serializable--and that
would mean I'd have to have to move more of the logic to the
application.)

(I don't know of a way to avoid even the busy-looping--though I
could call sleep functions from plperl or something to lower the
cpu load.  BTW, it would be nice to have some plpgsql-native sleep
function just to more easily test for problems like this.)

3.  If there's some really elegant solution out there, such as a
way to do a "select for update where not locked" to search for
rows no one has a conflicting lock on.  (To me this would seem
to be the best of all possible solutions.)
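
[Editor's note: the capability wished for in point 3 did eventually land.
PostgreSQL 9.5 added SKIP LOCKED to the FOR UPDATE locking clause; a sketch
of how the "claim one unclaimed row" idiom looks on 9.5 and later:

```sql
-- PostgreSQL 9.5+ only: silently pass over rows that other transactions
-- currently hold row locks on, instead of blocking on them.  Returns
-- zero rows (rather than waiting) when every row is already claimed.
SELECT *
  FROM mytable
 LIMIT 1
   FOR UPDATE SKIP LOCKED;
```
]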

I'm running version() "PostgreSQL 7.4 on i686-pc-linux-gnu, compiled
by GCC gcc (GCC) 3.2 20020903 (Red Hat Linux 8.0 3.2-7)".

-- 
Mark Shewmaker
mark at primefactor dot com






Re: [BUGS] SELECT FOR UPDATE differs inside and outside a pl/pgsql

2003-12-17 Thread Mark Shewmaker
On Wed, 2003-12-17 at 14:02, Tom Lane wrote:
> Mark Shewmaker <[EMAIL PROTECTED]> writes:
> > In other words:  Is this a bug or a user misunderstanding:
> 
> You've got the function doing
> 
> > LOOP
> >   select * into myrow from mytable limit 1 for update;
> >   if found then exit;
> >   end if;
> > END LOOP;
> 
> which means it will loop infinitely if the SELECT finds no row.
> 
> Because "LIMIT 1" executes before "FOR UPDATE" does, it is possible that
> the first row returned by the query is rejected by FOR UPDATE (in this
> case because it was deleted by an already-committed transaction), and
> then the query will not examine any more rows because of the LIMIT.
> 
> I think you could get the behavior you want with
> 
>   select * into myrow from mytable for update;
> 
> that is, just rely on the implicit LIMIT 1 associated with SELECT INTO
> rather than writing one in the query.  Then you will get the first row
> that passes the FOR UPDATE restriction, which I think is what you're
> after.

You're right:  Changing the loop in myfunction from the above to use
"select * into myrow from mytable for update;" results in the behavior I
was after.

What's more, the whole loop can be replaced with that single "select
* into myrow from mytable for update;", and it works just perfectly!

(I think you may have been implying that.)

If the first transaction holds the first available row locked, a second
transaction running this function just waits.  When the first
transaction deletes the row and then commits, the second transaction
suddenly selects the next available row and goes on its merry way.

All very excellent!
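
[Editor's note: putting Tom's suggestion together, the retry loop in the
original myfunction collapses to a single statement. A sketch of the
simplified function, assuming the same mytable setup from the first message
(7.4-era single-quoted function body):

```sql
CREATE OR REPLACE FUNCTION myfunction() RETURNS integer AS '
DECLARE
    myrow mytable%ROWTYPE;
BEGIN
    -- SELECT INTO implies a single-row fetch, so FOR UPDATE filters out
    -- locked-then-deleted rows before the one-row cutoff applies;
    -- no explicit LIMIT 1 and no retry loop needed.
    SELECT * INTO myrow FROM mytable FOR UPDATE;
    RETURN myrow.a;
END; ' LANGUAGE 'plpgsql';
```
]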

> There has been some argument about whether FOR UPDATE shouldn't execute
> before LIMIT, but IIRC there are nasty corner cases in that case too.
> So it's hard to say if this should be considered a bug or not.

Okay.  Well, I wouldn't want to suggest anything that would cause more
problems, especially given that there's a simple and effective
solution/workaround.  ("Workaround" being a bit strong if this isn't
even clearly a bug.)

However, your answer does leave some other questions open as well as
opening up more:

1.  There's still the original outstanding oddity that there was a 
*difference* in the first place between the results from a "select
limit 1 for update" command run directly within pgsql and a
"select into variable limit 1 for update" run within a plpgsql
function that's run directly within pgsql.

The original function never could lock a row, whereas the command
run directly could obtain a lock on the second try:

Infinite loop in function:  Never locks a row.
First attempt run directly:  No row locked.
Second attempt run directly:  One row successfully locked.

If a "FOR UPDATE executes before LIMIT" rule stopped the function
from ever locking a row, it's still curious why didn't it stop the
direct command from ever locking a row as well.  That still looks
troublingly inconsistent.  (Even though it's easy to get the desired
result in either case admittedly.)

2.  There's now a difference between the suggested "select * into
myrow from mytable for update;" run within a function, with its
JUST GREAT behavior, and the original "select * from mytable
limit 1 for update;":

   New command run from function, first try:  One row successfully
   locked.
   Command run directly, first try:  No row locked.
   Command run directly, second try:  One row locked successfully.

I suppose my main question here is:  Can I rely on this unexpected
(by me) and delightful feature of the suggested "select * into
myrow from mytable for update;" to not require multiple runs in
this sort of situation?

3.  If there are nasty deadlocks in FOR UPDATE running before LIMIT,
and there are these slight inconsistencies in how select-for-update
works depending on where the command is run, I suppose I wonder
if some of the inconsistencies can trigger some of these problems.

(Very vague, I know.  I don't have any problems to report or ask
about related to this, but.. figured I should bring it up in case
it causes someone to suddenly realize or think of where a real
deadlock might be.)

4.  As an aside, since another way to directly solve the problem would
be a way to only select rows that aren't locked, are there any
thoughts on having that sort of functionality in a future revision?

(From looking things up online, Oracle 8.1 and onward has something
like this called "Skip Locked".)

Re: [BUGS] SELECT FOR UPDATE differs inside and outside a pl/pgsql

2003-12-18 Thread Mark Shewmaker
On Wed, 2003-12-17 at 19:57, Tom Lane wrote:
> Mark Shewmaker <[EMAIL PROTECTED]> writes:
> > If a "FOR UPDATE executes before LIMIT" rule stopped the function
> > from ever locking a row, it's still curious why it didn't stop the
> > direct command from ever locking a row as well.
> 
> I think it would.  Did you try the test the other way around (with the
> direct command being blocked behind someone who deletes the first row)?

Yes, or at least I've done the test that I think you're asking about.
(See Session_A-with-early-delete occurring with Session B_1 below.)

To be as clear as possible as to the actual tests run:

+-+---+
|Run this in psql:| Along with one of these in a second psql session: |
+-+-+-+---+
| Session A   | Session B_1 | Session B_2 | Session B_3   |
+-+-+-+---+
| | | |   |
| # begin | | |   |
|   transaction   | | |   |
| | | |   |
| | # begin | # begin | # begin   |
| |   transaction;  |   transaction   |   transaction |
| | | |   |
| # select * from | | |   |
|   mytable limit | | |   |
|   1 for update; | | |   |
| (returns a=1)   | | |   |
| | | |   |
|(NOTE: if the| | |   |
|following delete | | |   |
|statement is | | |   |
|moved here,before| | |   |
|session B_?'s| | |   |
|selects, it has  | | |   |
|no effect on | | |   |
|session B_?'s| | |   |
|hangs & results) | | |   |
| | | |   |
| | # select * from | # select| # select  |
| |   mytable limit |   myfunction(); |   myfunction2();
| |   1 for update; | (hangs) | (hangs)   |
| | (hangs) | (hangs) | (hangs)   |
| # delete from   | (hangs) | (hangs) | (hangs)   |
|   mytable where | (hangs) | (hangs) | (hangs)   |
|   a=1;  | (hangs) | (hangs) | (hangs)   |
| (succeeds with  | (hangs) | (hangs) | (hangs)   |
| "DELETE 1") | (hangs) | (hangs) | (hangs)   |
| | (hangs) | (hangs) | (hangs)   |
| # commit;   | (hangs) | (hangs) | (hangs)   |
| (succeeds with  | (hangs) | (hangs) | (hangs)   |
| "COMMIT")   | (hangs) | (hangs) | (hangs)   |
| | (returns with   | (hangs) | (returns with |
| | no rows)| (hangs) |  a=2) |
| | | (hangs) |   |
| | # select * from | (hangs) | # delete from |
| |   mytable limit | (hangs) |   mytable |
| |   1 for update; | (hangs) |   where a=2;  |
| | (returns with   | (hangs) | (succeeds with|
| | a=2)| (hangs) | "DELETE 1")   |
| | | (hangs) |   |
| | # delete from   | (hangs) | # commit; |
| |   mytable where | (hangs) | (succeeds with|
| |   a=2;  | (hangs) | "COMMIT") |
| | (succeeds with  | (hangs) |   |
| | "DELETE 1") | (hangs) |   |
| | | (hangs) |   |
| | # commit;   | (hangs) |   |
| | (succeeds with  | (hangs) |   |
| | "COMMIT")   | (hangs) |   |
| | | 

[BUGS] ecpg c++ scope operator lost

2003-12-19 Thread Mark Pether
Ecpg pretty-prints my code, causing compile errors.

i.e., code to compile:

EXEC SQL BEGIN DECLARE SECTION;
   char bound_waiting_state =
   static_cast<char>(Processor::waiting);
EXEC SQL END DECLARE SECTION;

turns into:

/* exec sql begin declare section */
#line 90 "Schedule.pgcc"
  char  bound_waiting_state  = static_cast < char > ( Processor : : 
waiting ) ;
/* exec sql end declare section */
#line 93 "Schedule.pgcc"

The scope operator becomes space-padded and the compiler barfs.

The workaround is to initialise variables outside of the declare section.
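
A sketch of that workaround, reusing the names from the snippet above
(the surrounding file and class are assumed from the original report):

```c
/* Hypothetical sketch: declare the host variable inside the declare
 * section, but perform the C++ initialisation outside it, so ecpg
 * never sees (and never reformats) the scope operator. */
EXEC SQL BEGIN DECLARE SECTION;
    char bound_waiting_state;
EXEC SQL END DECLARE SECTION;

bound_waiting_state = static_cast<char>(Processor::waiting);
```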

Regards,
Mark Pether.


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [BUGS] ecpg c++ scope operator lost

2003-12-22 Thread Mark Pether
If that's the case, then you'd better change your documentation...

PostgreSQL 7.3 Documentation

Chapter 4. ECPG - Embedded SQL in C
This chapter describes the embedded SQL package for PostgreSQL. It works 
with C and *C++.* It was written by Linus Tolke (<[EMAIL PROTECTED] 
<mailto:[EMAIL PROTECTED]>>) and Michael Meskes (<[EMAIL PROTECTED] 
<mailto:[EMAIL PROTECTED]>>).

Regards,
Mark Pether.
Michael Meskes wrote:

On Wed, Dec 17, 2003 at 03:14:31PM -0500, Mark Pether wrote:
 

Ecpg pretty prints my code causing compile errors.
   

Please note that ecpg is a precompiler for embedded SQL in C not C++.

Michael
 





[BUGS] Bug report on Solaris 8/x86

2004-08-13 Thread Mark Round
Hi,
I am trying to compile Postgres-8.0.0beta1 on Solaris 8/x86, using Sun's 
C compiler, and get the following error which told me to contact you :-

"../../../../src/include/storage/s_lock.h", line 654: #error: PostgreSQL 
does not have native spinlock support on this platform.  To continue the 
compilation, rerun configure using --disable-spinlocks.  However, 
performance will be poor.  Please report this to [EMAIL PROTECTED]

My system is as follows :-
bash-2.03$ uname -a
SunOS daedal 5.8 Generic_117351-03 i86pc i386 i86pc
bash-2.03$ cc -V
cc: Sun C 5.5 Patch 112761-09 2004/06/08
usage: cc [ options] files.  Use 'cc -flags' for details
environment variables used :
LD_OPTIONS="-R/opt/csw/lib -L/opt/csw/lib"
CC="cc -fast -xarch=386 -I/opt/csw/include"
./configure --prefix=/opt/csw/postgres  --with-pam --with-openssl 
--enable-multibyte --with-CXX --with-includes=/opt/csw/include/openssl
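
(A sketch of the rerun the error message asks for. The flags are copied from
the invocation above, with only --disable-spinlocks added; note the message
itself warns that performance will be poor in this mode:)

```shell
# Hypothetical sketch: rerun configure with spinlocks disabled, as the
# error message suggests, keeping all flags from the original attempt.
./configure --prefix=/opt/csw/postgres --with-pam --with-openssl \
    --enable-multibyte --with-CXX \
    --with-includes=/opt/csw/include/openssl \
    --disable-spinlocks
```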

Any suggestions?
Thanks in advance,
-Mark
[EMAIL PROTECTED]


Re: [BUGS] We are not following the spec for HAVING without GROUP

2005-03-09 Thread Mark Shewmaker
On Wed, 2005-03-09 at 21:21 -0500, Tom Lane wrote: 
> Comments?  Can anyone confirm whether DB2 or other databases allow
> ungrouped column references with HAVING?

In Sybase:

1> select 2 as id, max(myfield) from mytable where 2=1
2> go
 id
 --- --
   2 NULL

(1 row affected)
1> select 2 as id, max(myfield) from mytable having 2=1
2> go
 id
 --- --

(0 rows affected)

-- 
Mark Shewmaker
[EMAIL PROTECTED]




Re: [BUGS] [HACKERS] We are not following the spec for HAVING without GROUP BY

2005-03-10 Thread Mark Shewmaker
On Thu, Mar 10, 2005 at 12:44:50PM -0500, Tom Lane wrote:
> 
> Would those of you with access to other DBMSes try this:
> 
> create table tab (col integer);
> select 1 from tab having 1=0;
> select 1 from tab having 1=1;
> insert into tab values(1);
> insert into tab values(2);
> select 1 from tab having 1=0;
> select 1 from tab having 1=1;
> 
> I claim that a SQL-conformant database will return 0, 1, 0, and 1 rows
> from the 4 selects --- that is, the contents of tab make no difference
> at all.

Sybase ASE version 12.5.2 returns 0, 0, 0, and 1 rows.

A plain "select 1 from tab" returns zero rows when tab is empty.

-- 
Mark Shewmaker
[EMAIL PROTECTED]



Re: [BUGS] [HACKERS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread Mark Kirkwood
Tom Lane wrote:
Would those of you with access to other DBMSes try this:
create table tab (col integer);
select 1 from tab having 1=0;
select 1 from tab having 1=1;
insert into tab values(1);
insert into tab values(2);
select 1 from tab having 1=0;
select 1 from tab having 1=1;
I claim that a SQL-conformant database will return 0, 1, 0, and 1 rows
from the 4 selects --- that is, the contents of tab make no difference
at all.  (MySQL returns 0, 0, 0, and 2 rows, so they are definitely
copying our mistake...)
Firebird 1.5.1 FreeBSD 5.3
Database:  test
SQL> drop table tab;
SQL> create table tab (col integer);
SQL> select 1 from tab having 1=0;
SQL> select 1 from tab having 1=1;

   1
SQL> insert into tab values(1);
SQL> insert into tab values(2);
SQL> select 1 from tab having 1=0;
SQL> select 1 from tab having 1=1;

   1
SQL>


[BUGS] BUG #1671: Long interval string representation rejected

2005-05-16 Thread Mark Dilger

The following bug has been logged online:

Bug reference:  1671
Logged by:  Mark Dilger
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.2
Operating system:   Mandrake Linux 9.1 (2.4.21-0.13mdkenterprise #1 SMP)
Description:Long interval string representation rejected
Details: 



bash-2.05b$ psql
Welcome to psql 8.0.2, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

mark=# select '4 millenniums 5 centuries 4 decades 1 year 4 months 4 days 17
minutes 31 seconds'::interval;
ERROR:  invalid input syntax for type interval: "4 millenniums 5 centuries 4
decades 1 year 4 months 4 days 17 minutes 31 seconds"
mark=# select '4 millenniums 5 centuries 4 decades 1 year 4 months 4 days 17
minutes'::interval;
 interval
---
 4541 years 4 mons 4 days 00:17:00
(1 row)



It appears that any string representation of an interval of length greater
than 76 is rejected.  (76 = 51 + 25 = MAXDATELEN + MAXDATEFIELDS).  This
appears to be a limitation enforced within function interval_in() in the
file src/backend/utils/adt/timestamp.c



Re: [BUGS] pg_dump table ordering bug [8.0.1]

2005-05-22 Thread Mark Shewmaker
On Thu, 2005-03-24 at 17:27 -0500, Tom Lane wrote:
> Andreas Lange <[EMAIL PROTECTED]> writes:
> > Our upgrade from 7.4.6 to 8.0.1 only had one small glitch. Two tables 
> > got dumped in the wrong order (before their dependecies) and had to get 
> > their contents added manually after the restore. I've atleast isolated 
> > the part where things go wrong.
> 
> I don't think this is a pg_dump bug: the problem is you are abusing
> check constraints to emulate foreign key constraints.  pg_dump has no
> way to know what those functions are doing and therefore no way to
> realize that the check constraints impose a data load ordering
> dependency.  Furthermore, the check constraints are fundamentally wrong
> anyway because they don't create a two-way relationship --- that is,
> altering the referenced tables won't raise an error if the check is now
> violated for something in the referencing table.

(Sorry for the long delay here!)

Could this be resolved simply by having pg_dump write out all constraint
statements after all insert and trigger statements?

Then no data-order-dependent constraints will be triggered when the dump
is loaded, and even constraints that aren't met when the dump is
taken won't be triggered when the data is re-loaded.

(I would say that would be a feature not a bug, since as I understand it
the point of pg_dump is to replicate a db setup, with it also being a
separate sanity checker merely a possible benefit.  And in any event, if
a few "special" rows don't meet constraints, having had to have been
entered before the constraints were put into place, those rows could
still be restored without problems.  Whether that's indicative of poor
schema design is a separate issue.)

Are there any downsides to changing the order of pg_dump output with
respect to constraints?  (Versus requiring users to alter their schema
design.)

-- 
Mark Shewmaker
[EMAIL PROTECTED]




[BUGS] pg_autovacuum: short, wide tables

2005-07-07 Thread mark reid

Hi,

I've been using pg_autovacuum for a while, and for the most part it's 
been great.  There's one case in my system where it won't run on a 
particular type of table, even though the table apparently needs it.


I have a table called "properties" that has key->value pairs.  Usually 
there are only a handful of rows, some of which are updated relatively 
frequently compared to the number of rows (hundreds or thousands of 
times per day).  The problem is that some of the rows have long strings 
for their value (on the order of a few hundred kilobytes), so if I 
understand correctly, the bulk of the data gets offloaded to a toast 
table. 

What I believe is happening is that the main table doesn't meet the 
minimum activity level for pg_autovacuum based on size / update 
frequency, but the toast table would, though it isn't specifically 
checked by pg_autovacuum.  The result is that the toast table grows 
really big before triggering autovacuum (or until I manually vacuum the 
"properties" table).  Not the end of the world, obviously, but might be 
a "gotcha" for some people with similar situations.
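
(As background, the toast table backing a given table can be looked up via
pg_class.reltoastrelid; a sketch, using the table name from this report —
the specific toast-table name in the comment matches the vacuum output below:)

```sql
-- Hypothetical sketch: find the toast table backing "properties",
-- then vacuum it directly instead of waiting for pg_autovacuum.
SELECT t.relname AS toast_table
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'properties';

-- then, for example:
-- VACUUM ANALYZE pg_toast.pg_toast_10043014;
```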


Below is a snippet of output from a run of vacuumdb --full --analyze 
--verbose that should illustrate the problem.


-Mark.

Table Def:
   Table "schema_name.properties"
Column |   Type| Modifiers
+---+---
name   | character varying |
value  | character varying |
Indexes:
   "properties_name_key" unique, btree (name)


Vacuum verbose output:

INFO:  vacuuming "schema_name.properties"
INFO:  "properties": found 1361 removable, 8 nonremovable row versions 
in 172 pages

DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 63 to 1705 bytes long.
There were 4827 unused item pointers.
Total free space (including removable row versions) is 1376288 bytes.
164 pages are or will become empty, including 0 at the end of the table.
172 pages containing 1376288 free bytes are potential move destinations.
CPU 0.00s/0.00u sec elapsed 0.06 sec.
INFO:  index "properties_name_key" now contains 8 row versions in 15 pages
DETAIL:  1361 index row versions were removed.
8 index pages have been deleted, 8 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  "properties": moved 8 row versions, truncated 172 to 1 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.02 sec.
INFO:  index "properties_name_key" now contains 8 row versions in 15 pages
DETAIL:  8 index row versions were removed.
8 index pages have been deleted, 8 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  vacuuming "pg_toast.pg_toast_10043014"
INFO:  "pg_toast_10043014": found 21052 removable, 24 nonremovable row 
versions in 21100 pages

DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 1117 to 2030 bytes long.
There were 63278 unused item pointers.
Total free space (including removable row versions) is 172044376 bytes.
21093 pages are or will become empty, including 0 at the end of the table.
21096 pages containing 172044264 free bytes are potential move destinations.
CPU 0.41s/0.06u sec elapsed 3.63 sec.
INFO:  index "pg_toast_10043014_index" now contains 24 row versions in 
321 pages

DETAIL:  21052 index row versions were removed.
317 index pages have been deleted, 317 are currently reusable.
CPU 0.00s/0.01u sec elapsed 0.40 sec.
INFO:  "pg_toast_10043014": moved 24 row versions, truncated 21100 to 6 
pages

DETAIL:  CPU 0.32s/1.04u sec elapsed 5.27 sec.
INFO:  index "pg_toast_10043014_index" now contains 24 row versions in 
321 pages

DETAIL:  24 index row versions were removed.
317 index pages have been deleted, 317 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing "schema_name.properties"
INFO:  "properties": 1 pages, 8 rows sampled, 8 estimated total rows



Re: [BUGS] pg_autovacuum: short, wide tables

2005-07-09 Thread Mark Reid

Matthew T. O'Connor wrote:


Tom Lane wrote:


I think Mark is probably on to something.  The activity in the toast
table will show as deletes *in the toast table* ... and that activity
fails to show at all in the pg_stat_activity view, because it shows
only plain relations!  So unless autovacuum is ignoring the stats views
and going directly to the underlying stats functions, it cannot see
at all that there is excessive activity in the toast table.
 



I think I'm missing something here.  If I have a table t1 with a long 
text column, and I do an update on that text column, doesn't that show 
up as an update on table t1?  And when there are enough upd/del 
autovacuum will issue a VACUUM against t1, which will clean up the 
associated toast table, right?  So I think I must be missing 
something.  Could you please explain the problem in a little more detail.


I think the issue is that a single update to the main table causes a 
whole bunch of updates to the toast table.  So in my case (with the 
vacuum output attached previously), a thousand updates to the main table 
entails tens of thousands of updates to the toast table. 
INFO:  "properties": found 1361 removable, 8 nonremovable row versions
INFO:  "pg_toast_10043014": found 21052 removable, 24 nonremovable row 
versions


Based on the default autovacuum thresholds, 21000 updates to a table 
with 24 rows should have triggered a vacuum on the toast table, which is 
why I pointed the finger that way originally.


-Mark.



Re: [BUGS] pg_autovacuum: short, wide tables

2005-07-09 Thread Mark Reid

> > I'm not proposing it for 8.1 though ...
> 
> Well, why not?  Arguably Mark's problem is a bug, and it's not too late
> to address bugs.

FWIW, I'd be satisfied with a warning in the autovacuum docs about this
corner case.

> I'm not sure about the idea of not vacuuming the toast table when we
> decide to vacuum the main table.

If you promote toast tables to autovacuum candidates that can be
vacuumed independently, I think autovacuum doesn't need to do both when
it does the main table.  This would potentially improve performance by
minimizing the amount of work that needs to be done when a vacuum is
necessary.  OTOH, you can't stop regular vacuum from including the
toast table otherwise way more people would start bugging you :)

-Mark.




[BUGS] BUG #1874: Non-Execute Privileges enforced on grant

2005-09-10 Thread Mark Diener

The following bug has been logged online:

Bug reference:  1874
Logged by:  Mark Diener
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.03
Operating system:   linux-i686
Description:Non-Execute Privileges enforced on grant
Details: 

It seems the EXECUTE privilege is not the only privilege that is being
checked during the execution of a PL/pgSQL procedural-language function.

Only a superuser can execute non-trusted languages like python thus making
the python language unusable for average user.  Only for superusers.  What
happens when you want the python stored procedures to implement a layer of
security for standard users?

Then the PL/pgSQL language enforces SELECT/UPDATE/INSERT privileges on tables.
 It would appear intuitive that only the EXECUTE privilege should be
evaluated when a stored procedure executes.  By default, all superuser and
owner privileges should be allowed except for the EXECUTE privilege.

What happens when you want the PL/pgSQL stored procedures to implement a layer
of security for standard users and you don't want general users to have
select/update/insert privileges?  It is not an option to skip the SELECT
statements within stored procedures.
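
(As an aside: the standard PostgreSQL mechanism for a function that accesses
tables its callers may not touch directly is SECURITY DEFINER, which runs the
function with its owner's privileges rather than the caller's. A minimal
sketch with hypothetical table, column, and role names:)

```sql
-- Hypothetical sketch: the function owner, who can read secret_table,
-- marks the function SECURITY DEFINER; callers then need only EXECUTE
-- on the function, not SELECT on the underlying table.
CREATE FUNCTION lookup_secret(integer) RETURNS text AS '
    SELECT val FROM secret_table WHERE key = $1;
' LANGUAGE sql SECURITY DEFINER;

REVOKE ALL ON FUNCTION lookup_secret(integer) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION lookup_secret(integer) TO some_user;
```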



[BUGS] suggestion: fix 'now' -> CURRENT_TIMESTAMP

2005-09-23 Thread Mark Stosberg
Hello,

I'm in the process of migrating dozens of databases from 7.1 to 8.0.
It's been a great opportunity to become familiar with all the things
that have changed in the meantime.

One of those things is the meaning of 'now', as documented in the 7.4
release notes:
http://www.postgresql.org/docs/8.0/interactive/release-7-4.html
( Search for 'now' on the page to find the related docs. ).

When dumping from 7.1 and restoring into 8.0, working code is being
created in the cases I'm looking at, because these constructs are put in
the dump file and then imported verbatim:

  date("timestamp"('now'::text))
  "timestamp"('now'::text)

These mean exactly the same thing as:

CURRENT_DATE
CURRENT_TIMESTAMP

( But not the same thing as a bare 'now' ).

Why not make the translation on the fly, since using 'now' and
timestamp() is not recommended practice anyway?

I have seen that PostgreSQL has already taken the liberty of rewriting
"serial" and other schema constructions when they are dumped or
imported, so I see no problem with rewriting code to equivalent, but
better, style.

For now I'm doing find & replace on the dump files as a workaround.
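
That find & replace can be scripted; a minimal sketch (the dump filename is a
stand-in, the two patterns are the ones quoted above, and the two-line sample
file here stands in for a real pg_dump output):

```shell
# Hypothetical sketch: rewrite the 7.1-era default expressions to their
# modern spellings before restoring.  "olddb.dump" is a stand-in name;
# this sample stands in for real pg_dump output.
cat > olddb.dump <<'EOF'
    d date("timestamp"('now'::text)),
    ts "timestamp"('now'::text)
EOF

# Replace the longer pattern first, since it contains the shorter one.
sed -e "s/date(\"timestamp\"('now'::text))/CURRENT_DATE/g" \
    -e "s/\"timestamp\"('now'::text)/CURRENT_TIMESTAMP/g" \
    olddb.dump > olddb.fixed.dump
cat olddb.fixed.dump
```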

Mark




Re: [BUGS] suggestion: fix 'now' -> CURRENT_TIMESTAMP

2005-09-24 Thread Mark Stosberg
On 2005-09-23, Tom Lane <[EMAIL PROTECTED]> wrote:
> Mark Stosberg <[EMAIL PROTECTED]> writes:
>> Why not make the translation on the fly, since using 'now' and
>> timestamp() are not recommended practices anyway ?
>
> Because we can't retroactively fix 7.1.

That fact hadn't escaped me. 

I was thinking that in *8*.1, the parsing of "CREATE TABLE" could be
altered to recognize the old syntax and improve it on the fly.

Since the meaning is identical, it seems like a reasonable improvement
to me.

This kind of rewriting is apparently already happening, because when I declare
a column as "serial", it's immediately translated into a different
representation. 

test=# create table t (c1 serial);
test=# \d t
   Table "public.t"
 Column |  Type   | Modifiers
+-+---
 c1 | integer | not null default nextval('public.t_c1_seq'::text)


Mark




[BUGS] BUG #1970: Existing /etc/pam.d/postgresql clobbered by RPM install

2005-10-18 Thread Mark Gibson

The following bug has been logged online:

Bug reference:  1970
Logged by:  Mark Gibson
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.0.4
Operating system:   Redhat Enterprise Linux 4
Description:Existing /etc/pam.d/postgresql clobbered by RPM install
Details: 

Hello, I noticed that when installing the 8.0.4 RPMs
it replaced our existing /etc/pam.d/postgresql - this caused our system to
break temporarily, as it was missing the following line:

account  required  pam_stack.so service=system-auth

I don't know whether this is required for all systems or is just a
peculiarity of our setup, but could the RPM be changed to not clobber this
file in the future?  I believe some RPMs install conflicting configs with
the .rpmnew extension.  Cheers.
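
(For what it's worth, the .rpmnew behaviour mentioned above is controlled by
the %config directive in the package's spec file. A sketch of the relevant
fragment — hypothetical, not the actual PGDG spec:)

```
%files
# Plain %config overwrites a locally modified file on upgrade and saves
# the old copy as .rpmsave; %config(noreplace) keeps the admin's file
# in place and writes the packaged default as .rpmnew instead.
%config(noreplace) /etc/pam.d/postgresql
```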



[BUGS] BUG #2031: Patch also required prior to ML3

2005-11-09 Thread Mark Gibson

The following bug has been logged online:

Bug reference:  2031
Logged by:  Mark Gibson
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1.0
Operating system:   AIX 5.3 (5300-01)
Description:Patch also required prior to ML3
Details: 

Hello,
I've just downloaded and compiled PostgreSQL 8.1.0
on our AIX 5.3 machine (which already has PG 8.0.4
installed).
AIX hasn't had any maintenance levels applied yet,
so oslevel -r returns: 5300-01.
_SS_MAXSIZE has been set to 1025 manually, this was
required for 8.0.4.
But with 8.1.0 the patch for dynahash.c had to be
applied before the regression tests would run.
So the problem described in FAQ_AIX appears not
to be specific to AIX 5.3 ML3.
I can supply more info if required.
(Please reply directly to my email as I'm not
subscribed to any lists)



Re: [BUGS] BUG #2031: Patch also required prior to ML3

2005-11-10 Thread Mark Gibson

Seneca Cunningham wrote:

Mark Gibson wrote:


AIX hasn't had any maintence levels applied yet,
so oslevel -r returns: 5300-01.
_SS_MAXSIZE has been set to 1025 manually, this was
required for 8.0.4.
But with 8.1.0 the patch for dynahash.c had to be
applied before the regression tests would run.
So the problem described in FAQ_AIX appears not
to be specific to AIX 5.3 ML3.



What do "instfix -ci | grep bos.rte.bind_cmds" and "instfix -ci | grep 
bos.adt.base" return?




$ instfix -ci | grep bos.rte.bind_cmds

IY58143:bos.rte.bind_cmds:5.3.0.1:5.3.0.10:+:Required fixes for AIX 5.3
IY59386:bos.rte.bind_cmds:5.3.0.10:5.3.0.10:=:ld -m does not produce any 
output
IY60158:bos.rte.bind_cmds:5.3.0.10:5.3.0.10:=:RESOLVEALL LDR_CNTRL 
variable setting
IY60696:bos.rte.bind_cmds:5.3.0.10:5.3.0.10:=:nm symbol size printing 
truncates
IY62441:bos.rte.bind_cmds:5.3.0.10:5.3.0.10:=:FVTR_SERV: dump -t +t does 
not give proper output

5300-01_AIX_ML:bos.rte.bind_cmds:5.3.0.10:5.3.0.10:=:AIX 5300-01 Update


$ instfix -ci | grep bos.adt.base

IY58689:bos.adt.base:5.3.0.10:5.3.0.10:=:cxref with -w num option fails
IY58712:bos.adt.base:5.3.0.10:5.3.0.10:=:cxref,cflow,lint fail of 
variable starts with dollar
IY59454:bos.adt.base:5.3.0.10:5.3.0.10:=:as produces renamed symbols in 
object file

IY60145:bos.adt.base:5.3.0.10:5.3.0.10:=:cflow outputs a warning msg.
IY60146:bos.adt.base:5.3.0.10:5.3.0.10:=:lint command outputs warning 
message
IY60681:bos.adt.base:5.3.0.10:5.3.0.10:=:make fails with more than one 
(.) in the source path
IY61149:bos.adt.base:5.3.0.10:5.3.0.10:=:do not invoke assembler with 
'cc -qarch=pwr4'

5300-01_AIX_ML:bos.adt.base:5.3.0.10:5.3.0.10:=:AIX 5300-01 Update


--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.

___ 
 
This email is intended for the named recipient. The information contained 
in it is confidential.  You should not copy it for any purposes, nor 
disclose its contents to any other party.  If you received this email 
in error, please notify the sender immediately via email, and delete it from
your computer. 
 
Any views or opinions presented are solely those of the author and do not 
necessarily represent those of the company. 
 
Registered Office: PO Box 14, Chartwell Dr, Wigston, Leicester. LE18 1AT 
__





[BUGS] xml data type implications of no =

2010-05-24 Thread Mark Kirkwood
Today I ran into some interesting consequences of the xml data type 
being without an "=" operator. One I thought I'd post here because it 
has a *possible* planner impact. I'm not sure it is actually a bug as 
such, but this seemed the best forum to post in initially:


test=# \d bug
  Table "public.bug"
 Column |  Type   | Modifiers
+-+---
 id | integer |
 val| xml |

test=# explain select val::text from bug;
  QUERY PLAN
--
 Seq Scan on bug  (cost=0.00..58127.78 rows=1000278 width=32)


Note the width estimate. However a more realistic estimate for width is:

test=# select 8192/(reltuples/relpages) as width from pg_class where 
relname='bug';

  width
--
 394.130431739976

So we are going to massively underestimate the "size" of such a dataset. 
Now this appears to be a consequence of no "=" operator (std_typanalyze 
in analyze.c bails if there isn't one), so the planner has no idea about 
how wide 'val' actually is. I'm wondering if it is worth having at least 
an "=" operator to enable some minimal stats to be available for xml 
columns.


regards

Mark




Re: [BUGS] BUG #5469: regexp_matches() has poor behaviour and more poor documentation

2010-05-25 Thread Mark Kirkwood

On 26/05/10 15:51, Robert Haas wrote:


> I'm not sure that it's very productive to refer to the behavior of our
> code as insane.


Not meaning to single you out Robert, but typically folk are honest with 
their impression of the code without worrying about feather ruffling too 
much e.g: searching for "brain dead" in the pg-hackers archives returns 
a sizeable collection of reading material.


Personally I think it is good to be blunt about code we consider not 
well thought out or well behaved. Obviously in some cases such comments 
may turn out to be incorrect or misleading (e.g user error or not 
reading the docs), but I don't think we should try (too hard anyway) to 
smother any strong criticism.


regards

Mark


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] xml data type implications of no =

2010-05-26 Thread Mark Kirkwood

On 25/05/10 16:43, Mark Kirkwood wrote:
Today I ran into some interesting consequences of the xml data type 
being without an "=" operator. One I thought I'd post here because it 
has a *possible* planner impact. I'm not sure it is actually a bug as 
such, but this seemed the best forum to post in initially:


test=# \d bug
  Table "public.bug"
 Column |  Type   | Modifiers
+-+---
 id | integer |
 val| xml |

test=# explain select val::text from bug;
  QUERY PLAN
--
 Seq Scan on bug  (cost=0.00..58127.78 rows=1000278 width=32)


Note the width estimate. However a more realistic estimate for width is:

test=# select 8192/(reltuples/relpages) as width from pg_class where 
relname='bug';

  width
--
 394.130431739976

So we are going to massively underestimate the "size" of such a 
dataset. Now this appears to be a consequence of no "=" operator 
(std_typanalyze in analyze.c bails if there isn't one), so the planner 
has no idea about how wide 'val' actually is. I'm wondering if it is 
worth having at least an "=" operator to enable some minimal stats to 
be available for xml columns.




Adding a minimal = op (see attached) and an analyze results in:

test=# explain select val::text from bug;
  QUERY PLAN
---
 Seq Scan on bug  (cost=0.00..62632.08 rows=108 width=385)

which gives a much better indication of dataset size.

-- Add support for a minimal = operator for xml type.
CREATE FUNCTION xmleq(xml, xml)
RETURNS bool
AS  'texteq'
LANGUAGE INTERNAL IMMUTABLE STRICT;

CREATE OPERATOR = (
leftarg = xml,
rightarg = xml,
procedure = xmleq,
commutator = =
);

-- Add class so analyze populates pg_statistic.
CREATE OPERATOR CLASS xml_ops
    DEFAULT FOR TYPE xml USING hash AS
        OPERATOR 1 =;



Re: [BUGS] Bad optimizer data for xml (WAS: xml data type implications of no =)

2010-06-03 Thread Mark Kirkwood

On 27/05/10 13:37, Mark Kirkwood wrote:

On 25/05/10 16:43, Mark Kirkwood wrote:
Today I ran into some interesting consequences of the xml data type 
being without an "=" operator. One I thought I'd post here because it 
has a *possible* planner impact. I'm not sure it is actually a bug as 
such, but this seemed the best forum to post in initially:


test=# \d bug
      Table "public.bug"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
 val    | xml     |

test=# explain select val::text from bug;
                          QUERY PLAN
---------------------------------------------------------------
 Seq Scan on bug  (cost=0.00..58127.78 rows=1000278 width=32)


Note the width estimate. However a more realistic estimate for width is:

test=# select 8192/(reltuples/relpages) as width from pg_class where 
relname='bug';

      width
------------------
 394.130431739976

So we are going to massively underestimate the "size" of such a 
dataset. Now this appears to be a consequence of no "=" operator 
(std_typanalyze in analyze.c bails if there isn't one), so the 
planner has no idea about how wide 'val' actually is. I'm wondering 
if it is worth having at least an "=" operator to enable some minimal 
stats to be available for xml columns.




Adding a minimal = op (see attached) and an analyze results in:

test=# explain select val::text from bug;
                         QUERY PLAN
------------------------------------------------------------
 Seq Scan on bug  (cost=0.00..62632.08 rows=108 width=385)

which gives a much better indication of dataset size.




Maybe I gave this guy a bad title - is it a concern that the 'width' 
estimate is so far off for xml datatypes (because of no = op)? It seemed 
to me that this could result in some bad plan choices (e.g in subqueries 
etc).


regards

Mark



Re: [BUGS] Bad optimizer data for xml (WAS: xml data type implications of no =)

2010-06-08 Thread Mark Kirkwood

On 09/06/10 15:22, Robert Haas wrote:

On Thu, Jun 3, 2010 at 7:16 PM, Mark Kirkwood wrote:

Maybe I gave this guy a bad title - is it a concern that the 'width'
estimate is so far off for xml datatypes (because of no = op)? It seemed to
me that this could result in some bad plan choices (e.g in subqueries etc).

It's possible.  I don't really see a reason not to add an = operator
for XML - does anyone else?

It would need to be done by updating src/include/catalog/pg_*.h,
rather than via SQL, of course.



Heh, sure should - I merely included the SQL stuff in case anyone else 
wanted to reproduce what I was seeing!


Cheers

Mark



Re: [BUGS] Bad optimizer data for xml (WAS: xml data type implications of no =)

2010-06-08 Thread Mark Kirkwood

On 09/06/10 17:14, Tom Lane wrote:

Robert Haas  writes:

It's possible.  I don't really see a reason not to add an = operator
for XML - does anyone else?

Yes, that was considered and rejected, IIRC.  What is your definition
of equality for xml?


Yes - but in that previous discussion the optimizer (lack of) 
information was not considered (or known I suspect), so maybe a rethink 
is worthwhile?


It seems that the nub of this issue is that there are conceptually two 
types of =, one for datatype specific comparison, and one for optimizer 
statistical information calculation. However the system allows only the 
first, so if you don't (or can't) have one then you lose some possibly 
important optimization data.


regards

Mark



Re: [BUGS] Bad optimizer data for xml (WAS: xml data type implications of no =)

2010-06-09 Thread Mark Kirkwood

On 10/06/10 02:17, Tom Lane wrote:

Mark Kirkwood  writes:

It seems that the nub of this issue is that there are conceptually two
types of =, one for datatype specific comparison, and one for optimizer
statistical information calculation. However the system allows only the
first, so if you don't (or can't) have one then you lose some possibly
important optimization data.

Nonsense.  ANALYZE and the optimizer work with the datatype's usual
notion of '=', whatever it is.



Slow down the reading Tom... and read what I was actually saying - note 
the "conceptually". Of course the code uses the datatype's defined "=".



It's possible that we should install a simplified code path in analyze.c
that can collect width data for a column even in the absence of any '='
operator.


Yeah I was thinking along the same lines.


Do you have an actual example where such data would have affected a
plan choice?




Not at the moment. I was thinking that anywhere that used such datatypes 
in a subquery or similar might be a likely case. I guess I was looking 
at this as a case of "this is an area where we have less accurate 
optimizer data than we could have", and thinking of ways to improve it.


regards

Mark





Re: [BUGS] DETAIL: Failed on request of size 8

2010-07-22 Thread Mark Kirkwood

On 23/07/10 14:34, vamsi krishna wrote:

Hi

I am running a query on postgres and got the following error:

ERROR: out of memory
DETAIL:  Failed on request of size 8




Hmm - looks like your system needs more memory to complete the query 
(ahem - it would help to see the query, plus its EXPLAIN output). For 
us to be of any more help, you need to supply more detail - see:


http://wiki.postgresql.org/wiki/Guide_to_reporting_problems






Re: [BUGS] BUG #5595: Documentation is not installs from VPATH build.

2010-08-03 Thread Mark Kirkwood

On 04/08/10 03:35, Tom Lane wrote:

"Dmtiriy Igrishin"  writes:

Description:Documentation is not installs from VPATH build.

When 'configure' is executed in a directory outside the source tree, the
documentation is not installed later, regardless of whether "gmake
install-docs" or "gmake install-world" is typed to install PostgreSQL
(I tried building with "gmake world" as well).
When 'configure' is executed inside the source tree, all is okay.

Hmm ... works for me, on a Fedora 13 box.  Maybe a bug in your copy of
gmake?




Hmm, I'm on Ubuntu 10.04 - works for me only if I do the following in 
the src tree first:


$ cd pgsql-9.0beta3
$ ./configure
$ make maintainer-clean

... and then do a VPATH configure and build somewhere else. I'm guessing 
that if I used a checked-out src tree instead of a downloaded one then 
it would work fine (I suspect Tom used a checked-out tree). Tom?


Cheers

Mark



Re: [BUGS] BUG #5595: Documentation is not installs from VPATH build.

2010-08-04 Thread Mark Kirkwood

On 04/08/10 16:55, Tom Lane wrote:


You're right, I did.  Perhaps the presence of prebuilt docs in the
source tree confuses something --- anybody wanna test?



The files that seem to be causing the confusion are:

/doc/src/sgml/html-stamp
/doc/src/sgml/man-stamp

A src tree 'maintainer-clean' removes them, but any lesser level of 
clean doesn't. Hmm - shouldn't a VPATH build look at its *own* 
doc/src/sgml/*-stamp files to see if it needs to build the docs? Note 
that it does *create* them in there after a successful build...


Cheers

Mark



[BUGS] BUG #5650: Postgres service showing as stopped when in fact it is running

2010-09-09 Thread Mark Llewellyn

The following bug has been logged online:

Bug reference:  5650
Logged by:  Mark Llewellyn
Email address:  mark_llewel...@adp.com
PostgreSQL version: 9.0 RC1
Operating system:   Windows XP
Description:Postgres service showing as stopped when in fact it is
running
Details: 

The postgresql-9.0 service is showing as *not running* in pgAdmin 3 1.12 RC1
and as *stopped* in the Windows services list, although the database is
actually running and I can connect to it and create databases, tables etc.
It also shows multiple postgresql tasks in Windows task manager.
However, because Windows XP is not aware that the postgresql-9.0 service is
running it is not possible to stop the service.



Re: [BUGS] BUG #5665: VPATH catalog/schemapg.h

2010-09-20 Thread Mark Kirkwood
That is not what is being said (or perhaps I should say not what is 
being meant)! Essentially you need to help us help you. Given that VPATH 
builds seem to work for the rest of us, you need to help us see what 
(possibly unusual) thing(s) you did that have got such a build confused.


The guys here like to fix stuff - but cannot fix your bug unless you 
help by supplying what has been asked for.


Best wishes

Mark

On 21/09/10 08:37, Graham Swallow wrote:

Noone else has missing files, in the wrong places,
Its not their problem.

All of the files on all of your machines, are in the right places.
Its not your problem.

All of the files on my machine, are (NOW) in the right places.
Its not my problem.

everybody's happy ;-)






[BUGS] src/tools/fsync/test_fsync.c does not compile

2010-09-30 Thread Mark Kirkwood
The discussion on -performance about disk caching reminded me that the 
useful fsync test utility does not seem to compile (git master on Ubuntu 
10.04):


$ make
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing 
-fwrapv -g -I../../../src/interfaces/libpq -I../../../src/include 
-D_GNU_SOURCE -c -o test_fsync.o test_fsync.c

In file included from /usr/include/fcntl.h:205,
from ../../../src/include/access/xlogdefs.h:15,
from ../../../src/include/access/xlog.h:15,
from ../../../src/include/access/xlog_internal.h:19,
from test_fsync.c:11:
In function ‘open’,
inlined from ‘main’ at test_fsync.c:66:
/usr/include/bits/fcntl2.h:45: error: call to ‘__open_too_many_args’ 
declared with attribute error: open can be called either with 2 or 3 
arguments, not more

make: *** [test_fsync.o] Error 1

The tiny change (attached) seems to fix it for me.

regards

Mark
diff --git a/src/tools/fsync/test_fsync.c b/src/tools/fsync/test_fsync.c
index 3c9c6b6..28c2119 100644
--- a/src/tools/fsync/test_fsync.c
+++ b/src/tools/fsync/test_fsync.c
@@ -63,7 +63,7 @@ main(int argc, char *argv[])
 	for (i = 0; i < XLOG_SEG_SIZE; i++)
 		full_buf[i] = random();
 
-	if ((tmpfile = open(filename, O_RDWR | O_CREAT, S_IRUSR | S_IWUSR, 0)) == -1)
+	if ((tmpfile = open(filename, O_RDWR | O_CREAT, S_IRUSR | S_IWUSR)) == -1)
 		die("Cannot open output file.");
 	if (write(tmpfile, full_buf, XLOG_SEG_SIZE) != XLOG_SEG_SIZE)
 		die("write failed");
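
For context on why the extra argument is an error: open() takes either 
two arguments, or three when O_CREAT is given (the permission mode for 
a newly created file). The same flags-plus-mode pattern can be sketched 
with Python's os module (hypothetical temp-file path, not PostgreSQL 
code):

```python
import os
import tempfile

# O_CREAT means the call may create the file, so a third argument - the
# permission mode for the new file (0o600, i.e. S_IRUSR | S_IWUSR) - is
# required; a plain O_RDWR open takes just two arguments.
path = os.path.join(tempfile.gettempdir(), "fsync_demo.tmp")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
os.write(fd, b"hello")
os.close(fd)

with open(path, "rb") as f:
    print(f.read())  # b'hello'
os.unlink(path)
```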



[BUGS] Btree index left link changed unexpectedly after bringing up 8.3.11 warm standby

2010-10-27 Thread Mark Kirkwood
I'm seeing this on a Pitrtools managed warm standby box that we 
periodically bring the db fully up on in order to test if the standby is 
good.


After the standby is up, then a db wide VACUUM produces:

2010-10-28 17:20:51 NZDT WARNING:  relation "node" page 248500 is 
uninitialized --- fixing
2010-10-28 17:20:51 NZDT WARNING:  relation "node" page 248502 is 
uninitialized --- fixing
2010-10-28 17:20:51 NZDT WARNING:  relation "node" page 248504 is 
uninitialized --- fixing
2010-10-28 17:20:54 NZDT ERROR:  left link changed unexpectedly in block 
21042 of index "content_node_node_type_id_inserted_idx"


I'm guessing the index error is due to the uninitialized table pages 
(the index "content_node_node_type_id_inserted_idx" is on the "node" 
table). Are the uninitialized pages cause for concern, or merely an 
artifact of log shipping?


This is 8.3.11 on Debian Lenny x86-64.

Thanks

Mark



Re: [BUGS] Btree index left link changed unexpectedly after bringing up 8.3.11 warm standby

2010-10-28 Thread Mark Kirkwood

On 29/10/10 04:32, Alvaro Herrera wrote:

Excerpts from Mark Kirkwood's message of jue oct 28 02:20:56 -0300 2010:

I'm guessing the index error is due to the uninitialized table pages
(the index "content_node_node_type_id_inserted_idx" is on the "node"
table).

Not necessarily ... You still have the index in that state, right?
We could try some diagnostics on it.




Doing some more digging - there was an out of memory incident on the 
master the previous day, so I guess both of these observations could 
well be caused by leftover partially completed operations.


Mark



Re: [BUGS] Btree index left link changed unexpectedly after bringing up 8.3.11 warm standby

2010-10-28 Thread Mark Kirkwood

On 29/10/10 10:27, Tom Lane wrote:


Were there similar warnings on the master?  Uninitialized-page warnings
are expected in certain error-recovery scenarios, but I'd be a little
worried if the slave appeared to be out of sync with the master.




I don't see any in the logs - however the relevant table may not have 
been vacuumed recently enough to spit out the warning. I do see lots of 
these:


2010-09-30 17:31:20 NZDT ERROR:  could not open relation with OID 1836671
2010-09-30 17:31:20 NZDT ERROR:  could not open relation with OID 1836671

and also the following has recently started appearing on the console:

kernel: [ 7341.689322] ECC/ChipKill ECC error.

So... lots of brokeness to examine here (have promoted one of our slaves 
to be the new master). I'll see if we still have the old master db 
around, it would be interesting to see what happens if we start it up 
and try a VACUUM - however the dbas may have set the box up as a slave 
again before we noticed the memory errors (so possibly deleted the old 
master).


Cheers

Mark





[BUGS] BUG #5734: autovacuum_enabled input should be validated, standardized.

2010-10-30 Thread Mark Stosberg

The following bug has been logged online:

Bug reference:  5734
Logged by:  Mark Stosberg
Email address:  m...@summersault.com
PostgreSQL version: 9.0.1
Operating system:   FreeBSD
Description:autovacuum_enabled input should be validated,
standardized.
Details: 

The "autovacuum_enabled" storage parameter claims to be a boolean type:
http://www.postgresql.org/docs/9.0/static/sql-createtable.html#SQL-CREATETAB
LE-STORAGE-PARAMETERS
... but it fails to behave as a normal boolean.

Normally, you can set a boolean with a value of false, 'off' or 'f',
but you will always get back a value of 'f'. With this storage
parameter, there is no such translation.

I think this kind of boolean should be handled like a standard
PostgreSQL boolean.

I noticed because the Slony code base has a hardcoded check for
"autovacuum_enabled=off", when a false value could also be stored as
autovacuum_enabled=f

We should be able to rely on this value always being returned as
"autovacuum_enabled='f'", just as a normal boolean would be.
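
Until the server normalizes it, a consumer such as Slony has to accept 
all the textual spellings itself. A hedged sketch of such a normalizer 
(hypothetical helper, not Slony's actual code), accepting the input 
spellings PostgreSQL's boolean type takes:

```python
def parse_pg_bool(value: str) -> bool:
    """Normalize the textual boolean spellings PostgreSQL accepts as input."""
    v = value.strip().lower()
    if v in ("t", "true", "on", "yes", "y", "1"):
        return True
    if v in ("f", "false", "off", "no", "n", "0"):
        return False
    raise ValueError(f"not a boolean: {value!r}")

# autovacuum_enabled=off and autovacuum_enabled=f then compare equal:
print(parse_pg_bool("off") == parse_pg_bool("f"))  # True
```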

Mark



Re: [BUGS] BUG #5862: Postgres dumps core upon a connection attempt

2011-02-08 Thread Mark Kirkwood

On 04/02/11 15:11, Craig Ringer wrote:

On 02/03/2011 11:15 PM, Matt Zinicola wrote:


I re-compiled with '--enable-debug' and got the symbols.  The 
pastebin is at

http://pastebin.com/xMhEHFdT


That's really interesting. It's getting a NULL path pointer when - I 
think - it tries to determine the location of the executables.


Presumably this is something bizarre in your environment - but I have 
no idea what it might be. Maybe someone else reading will have an idea.




(Coming in too late, but...)

I'd be interested to see what happens if you do:

$ export PATH=/usr/local/pgsql/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/pgsql/lib
$ initdb -D /data/postgres
$ pg_ctl -D /data/postgres start;
$ psql

I'm guessing that there are older libraries or binaries earlier in your 
various env paths, and these are tripping up postgres.


Cheers

Mark





[BUGS] Hung Vacuum in 8.3

2011-02-21 Thread Mark Kirkwood

This is 8.3.14 on Debian Lenny x86-64.

I'm seeing a hung vacuum:

postgres=# select procpid, query_start,waiting, current_query from 
pg_stat_activity where current_query like '%VACUUM%';
 procpid |          query_start          | waiting |  current_query
---------+-------------------------------+---------+-----------------
    7347 | 2011-02-22 06:02:02.400247+13 | f       | VACUUM ANALYZE;

at approx 1300 (so it has been sitting there for approx 7 hours, normal 
database vacuum time is 10 minutes). Now according to pg_stat_activity 
and pg_locks it is *not* waiting for a lock, but no vacuuming appears to 
be going on. strace says:


$ strace -p 7347

Process 7347 attached - interrupt to quit
semop(33456157, 0x7512bad0, 1

Ok, so we are waiting on a semaphore - hmm, why is it not showing up as 
waiting on a lock of some kind?


Of interest is this:

postgres=# select procpid, query_start, current_query from 
pg_stat_activity order by query_start limit 1;
 procpid |          query_start          |        current_query
---------+-------------------------------+------------------------------
   25953 | 2011-02-22 04:24:07.417138+13 | SELECT n.node_id, n.node_ -- 
long query, joining several large tables - text snipped


So this query had been running since before the vacuum started, so 
probably the vacuum wants to do lazy_truncate_heap() on one of the 
tables in the join (no gdb on this box unfortunately). I am however 
still puzzled about why no locks are being waited on.


I have canceled the vacuum, but any suggestions for getting more diag 
info for next time?


regards

Mark




Re: [BUGS] Hung Vacuum in 8.3

2011-02-21 Thread Mark Kirkwood

On 22/02/11 19:47, Heikki Linnakangas wrote:


A long query on the same table can block vacuum. Vacuum needs to take 
a so-called "cleanup lock" on each page, which means that it has to 
wait until no other backend holds a pin on the page. A long-running 
query can keep a page pinned for a long time.




Ah - this is LockBufferForCleanup? So we are waiting for an exclusive 
LockBuffer operation, i.e. an LWLock, not a (heavyweight) lock, urg... 
no wonder.


To mitigate this I'm encouraging the use of a cleanup script for 
long-running queries - since the app concerned is a web site, there is 
no call for queries that run this long (i.e. way longer than the 
timeout for the respective page rendering).
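
Such a cleanup script mostly needs a policy for which backends to 
cancel; the actual cancellation would be pg_cancel_backend(pid). A 
minimal sketch of the policy step only (hypothetical threshold and 
pids):

```python
from datetime import datetime, timedelta

def pids_to_cancel(activity, now, max_age=timedelta(minutes=5)):
    """Given (pid, query_start) rows from pg_stat_activity, pick the
    backends whose query has been running longer than max_age."""
    return [pid for pid, started in activity if now - started > max_age]

now = datetime(2011, 2, 22, 13, 0)
rows = [(25953, datetime(2011, 2, 22, 4, 24)),   # hours old -> cancel
        (31001, datetime(2011, 2, 22, 12, 59))]  # recent    -> keep
print(pids_to_cancel(rows, now))  # [25953]
```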


Thanks for the clarification (assuming I've understood correctly of 
course...).


Cheers

Mark


