[BUGS] BUG #2000: psql does not prompt for password

2005-10-26 Thread Todd

The following bug has been logged online:

Bug reference:  2000
Logged by:  Todd
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1 Beta 4
Operating system:   Windows XP Home
Description:        psql does not prompt for password
Details: 

psql -U postgres does not prompt for password and responds with...

psql: FATAL: password authentication failed for user "postgres".

Even when I pass the -W option, it doesn't prompt for a password and gives me
the error above.  I deleted and recreated my cluster and get the same
result.  I can connect to the database using pgAdmin as user postgres.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[BUGS] BUG #3291: Query tool not returning all results

2007-05-18 Thread Todd Frankson

The following bug has been logged online:

Bug reference:  3291
Logged by:  Todd Frankson
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system:   Windows 2003 Server
Description:        Query tool not returning all results
Details: 

When selecting from a text field that contains 4096 characters, the query
tool returns only a few hundred characters, and exports maybe a hundred more
characters than the result set shows.

I have already set the option to return 5000 characters in the
Options-->Query-->max. characters per column setting.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [BUGS] db growing out of proportion

2003-05-31 Thread Todd Nemanich
I have a database with similar performance constraints. Our best 
estimates put the turnover on our most active table at 350k tuples/day. 
The hardware is a 4x1.4GHz Xeon w/ a RAID 1 disk setup, and the DB 
floats around 500MB of disk space taken. Here is what we do to maintain 
operations:

1) Cron job @ 4:00AM that runs a full vacuum analyze on the DB, and 
reindex on the major tables. (Reindex is to maintain index files in SHM) 
An alerting feature pages the administrator if the job does not complete 
within a reasonable amount of time.

2) Every 15 minutes, a cron job runs a vacuum analyze on our five 
largest tables. An alert is emailed to the administrator if a second 
vacuum attempts to start before the previous completes.

3) Every week, we review the disk usage numbers from daily peaks. This 
determines if we need to increase our shmmax & shared buffers.

Additionally, you may want to take a look at your query performance. Are 
most of your queries doing sequential scans? In my system, the crucial 
columns of the primary tables are int8 and float8 fields. I have those 
indexed, and I get a serious performance boost by making sure all 
SELECT/UPDATE/DELETE queries that use those columns in the WHERE have an 
explicit ::int8 or ::float8 (Explain analyze is your friend). During 
peak usage, there is an order of magnitude difference (usually 10 to 
15x) between queries doing sequential scans on the table, and queries 
doing index scans. Might be worth investigating if your queries are 
taking 5 seconds when your DB is fresh. HTH.



Tomas Szepe wrote:
Hello everybody,

I'm facing a simple yet gravely problem with postgresql 7.3.2 on x86 Linux.
My db is used to store IP accounting statistics for about 30 C's.  There are
a couple truly trivial tables such as the one below:
CREATE TABLE stats_min
(
    ip      inet        NOT NULL,
    start   timestamp   NOT NULL default CURRENT_TIMESTAMP(0),
    intlen  int4        NOT NULL default 60,
    d_in    int8        NOT NULL,
    d_out   int8        NOT NULL,
    constraint "stats_min_pkey" PRIMARY KEY ("ip", "start")
);
CREATE INDEX stats_min_start ON stats_min (start);
A typical transaction committed on these tables looks like this:

BEGIN WORK
DELETE ...
UPDATE/INSERT ...
COMMIT WORK
Trouble is, as the rows in the tables get deleted/inserted/updated
(the frequency being a couple thousand rows per minute), the database
is growing out of proportion in size.  After about a week, I have
to redump the db by hand so as to get query times back to sensible
figures.  A transaction that takes ~50 seconds before the redump will
then complete in under 5 seconds (the corresponding data/base/ dir having
shrunk from ~2 GB to ~0.6GB).
A nightly VACUUM ANALYZE is no use.

A VACUUM FULL is no use.

A VACUUM FULL followed by REINDEX is no use.

It seems that only a full redump involving "pg_dump olddb | \
psql newdb" is capable of restoring the system to its working
glory.
Please accept my apologies if I've overlooked a relevant piece of
information in the docs.  I'm in an urgent need of getting this
problem resolved.


---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org


[BUGS] BUG #1412: binaries are linked to numerous extraneous shared

2005-01-20 Thread Todd Eigenschink

The following bug has been logged online:

Bug reference:  1412
Logged by:  Todd Eigenschink
Email address:  [EMAIL PROTECTED]
PostgreSQL version: 8 (any recent)
Operating system:   Linux
Description:        binaries are linked to numerous extraneous shared
Details: 

For a long time, Postgres binaries have been linked to numerous extraneous
shared libraries.  Since the same command line is used for all tools, it's
no surprise.

I asked for that to be cleaned up a long time ago and offered a tool to do
it, but nobody ever took me up on the suggestion.

I wrote a tool that works like this:

./configure --prefix=$PREFIX
make
make install
relink-postgres $PREFIX
make install

The relink rebuilds the binaries in the source tree based on what it finds
in $PREFIX/bin.  It just tries to remove shared libraries and relink until
it gets down to the minimal set of libs that will permit the link to
succeed.

This will probably be mangled.  I'll be glad to mail it on request.  Sample
output:

Relinking src/bin/psql/psql
Successfully removed: -lz -lcrypt -lresolv -lnsl -ldl -lm

Relinking src/bin/scripts/vacuumdb
Successfully removed: -lz -lreadline -lncurses -lcrypt -lresolv -lnsl -ldl
-lm



#!/bin/sh

minlibs_script=/tmp/minlibs.pl.$$
dir=$1

##

cat <<'EOF' > $minlibs_script
$| = 1;

my @cmd = @ARGV;

print "Successfully removed:";

while (1)
{
    my @before_cmd = @cmd;

    for (my $i = 1; $i < @cmd; $i++)
    {
        next unless $cmd[$i] =~ /^-l/;

        my @tmp = @cmd;
        splice @tmp, $i, 1;

        system(join(' ', @tmp, '>/dev/null', '2>&1'));

        if ($? == 0)
        {
            # Success.
            print " $cmd[$i]";
            @cmd = @tmp;
            last;
        }
    }

    # Bail if no changes were made in this pass.
    last if @cmd == @before_cmd;
}

if (@cmd == @ARGV)
{
    print "nothing.\n";
}
else
{
    print "\n";
}

# Execute it one last time to recreate whatever binary we might have
# killed.
system(@cmd);
EOF

##

for binary in $dir/bin/*; do

    ldd $binary 2>&1 | grep -q 'not a dynamic executable' && continue

    file=`basename $binary`
    [ "$file" = "postmaster" ] && continue

    path=`find src -type f -name $file`
    bindir=`dirname $path`

    echo "Relinking $bindir/$file"
    if [ "$bindir" = "" ]; then
        echo "No directory!"
        exit 1
    fi

    rm $path
    gcccmd=`make 2>&1 | grep -- "-o $file"`
    #echo "gcc cmd = " $gcccmd

    (cd $bindir && perl $minlibs_script $gcccmd)

    echo

done

rm $minlibs_script
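The greedy minimization performed by the Perl helper above can be sketched in a few lines of Python. The `link_ok` predicate here is a stand-in (an assumption) for actually re-running the link command; the real tool shells out to gcc instead:

```python
def minimize_libs(cmd, link_ok):
    """Greedily drop -l flags from cmd while link_ok(cmd) still succeeds.

    cmd is a list of command-line tokens; link_ok is a predicate returning
    True when a link with those tokens would succeed.  Like the Perl loop,
    each pass tries removing one -l flag and restarts the scan after every
    successful removal, until a full pass removes nothing.
    """
    removed = []
    changed = True
    while changed:
        changed = False
        for i, tok in enumerate(cmd):
            if not tok.startswith("-l"):
                continue
            trial = cmd[:i] + cmd[i + 1:]
            if link_ok(trial):          # link still succeeds without this lib
                removed.append(tok)
                cmd = trial
                changed = True
                break                   # restart the pass, as the Perl does
    return cmd, removed


# Toy demonstration: pretend the link only truly needs -lpq.
needed = {"-lpq"}
link_ok = lambda c: needed <= {t for t in c if t.startswith("-l")}
cmd, removed = minimize_libs(["gcc", "-o", "psql", "-lpq", "-lz", "-lm"], link_ok)
```

With that stub, `-lz` and `-lm` are stripped and `-lpq` survives, mirroring the "Successfully removed:" output shown earlier.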

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[BUGS] canceling statement due to user request

2010-06-02 Thread Spangler, Todd
Dear PostgreSQL,

I have been working with a web application that allows saving of reports
to a text file using PostgreSQL.  The application has worked fine for
about 6 months and now we are getting the following 2 error messages:

 

canceling statement due to statement timeout

canceling statement due to user request

 

In development, I seem to have been able to keep the timeout message box
from showing up by adding a timeout to the connection string, but the
other error I cannot seem to fix.

 

I have read that this may have something to do with the Autovacuum
feature.  We are using PostgreSQL version 8.3.  I'm testing with 8.4 on
another server and still am getting the messages above.  It seems the
errors do not happen every time.  If this is the Autovacuum feature, is
there a way that I can disable this feature and then re-enable it when I
am done with the creation of my report?  Also, when we receive these
errors, it does not save any information to the text file like it
normally would without the error message, so we do not get the report we
need when these errors occur.  Another thought would be for us to allow
the Autovacuum to be turned on only at certain times.  That would be
fine for our situation.

 

I have read that these messages are by design and I am fine with that,
but I need an easy to use workaround that will allow the reports to
work.  These reports are very important to the company. 
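A common client-side workaround for this kind of failure, sketched generically rather than for this specific application, is to retry the report query a bounded number of times when the server cancels it. The `run_query` and `is_cancel` callables below are placeholder assumptions standing in for the real driver calls (in PostgreSQL, a cancellation surfaces as SQLSTATE 57014, `query_canceled`):

```python
import time

def run_with_retries(run_query, is_cancel, max_attempts=3, backoff_s=1.0):
    """Retry run_query() when it fails with a server-side cancellation.

    run_query issues the report query; is_cancel decides whether an
    exception represents a 'canceling statement' error.  Non-cancel
    errors, and the final failed attempt, are re-raised to the caller.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return run_query()
        except Exception as exc:
            if not is_cancel(exc) or attempt == max_attempts:
                raise
            time.sleep(backoff_s * attempt)   # back off before retrying


# Toy demonstration: a query that is canceled twice, then succeeds.
class Canceled(Exception):
    pass

calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise Canceled("canceling statement due to statement timeout")
    return "report rows"

result = run_with_retries(flaky_query, lambda e: isinstance(e, Canceled),
                          max_attempts=5, backoff_s=0.0)
```

This does not address the root cause, but it keeps a long report from being lost to a single transient cancellation.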

 

We are using Windows Servers and Windows XP, C#.Net 2008  - ASP.Net, and
PostgreSQL 8.3.

 

Thank you for your time,

 

Todd Spangler

COMSYS - A Manpower Company
804-521-5288

 



[BUGS] renaming+recreating table w/default sequence causes dependency seq issue

2012-08-07 Thread Todd Kover

I saw issues around renaming tables and not renaming sequences in the TODO
list but did not see anything about this.  Apologies if I missed it.

This is with a 9.1.4 server (enterprisedb download on mac os/x lion) and also
seen on 9.1.3 built from netbsd pkgsrc.

It appears that something is amiss if you try to drop a table that has been
renamed that used to have a default mapping to a sequence:

Given this:
--
drop table IF EXISTS foo;
drop table IF EXISTS foo_v26;

create table foo (id serial not null, bar integer );
alter table foo alter column id drop default;

alter table foo rename to foo_v26;

create table foo (id integer not null, bar integer );

alter table foo alter id SET DEFAULT nextval('foo_id_seq');

drop table foo_v26;
--

everything works as expected until the final drop, which says:

jazzhands=> drop table foo_v26;
ERROR:  cannot drop table foo_v26 because other objects depend on it
DETAIL:  default for table foo column id depends on sequence foo_id_seq
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

however...

jazzhands=> \d foo;
            Table "public.foo"
 Column |  Type   |                    Modifiers
--------+---------+--------------------------------------------------
 id     | integer | not null default nextval('foo_id_seq'::regclass)
 bar    | integer |

jazzhands=> \d foo_v26;
     Table "public.foo_v26"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
 bar    | integer |

Interestingly, I can drop table foo without any complaints.

It seems like the dependency did not move (it also seems like it's
backwards, but that's probably all me).

Sadly, if I move setting the default to after I drop the old table, the
sequence goes away, so I am still digging into a workaround.

thanks,
-Todd

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] renaming+recreating table w/default sequence causes dependency seq issue

2012-08-08 Thread Todd Kover

 > I don't see any bug there.  The ALTER DROP DEFAULT command does not
 > remove the sequence's dependence on foo.id; nor for that matter
 > does ALTER SET DEFAULT create an auto-drop dependency on the new
 > table.  See ALTER SEQUENCE OWNED BY if you want to dissociate a
 > serial column's sequence from the column, or reattach it to another
 > column.
 >
 > Formally speaking, a "serial column" is shorthand for creating an
 > integer (or bigint) column, creating a sequence, associating them as
 > though by ALTER SEQUENCE OWNED BY, and setting the column's default
 > to nextval('sequence-name').

I stand corrected.  It would be nice if \ds or \ds+ or something showed
this relationship so it was evident.  That's more of a feature request
than a bug fix, though.

Apologies for the misdirection.

-Todd



Re: [BUGS] BUG #4377: casting result of timeofday() to timestamp fails in some timezones

2008-08-27 Thread Todd A. Cook

Tom Lane wrote:


In the meantime, why aren't you just using clock_timestamp()?
timeofday() is deprecated.


I am using clock_timestamp() now.  The use of timeofday() was from the era
when 8.0 was hot stuff.

BTW, the word "deprecated" does not appear on the docs page where timeofday()
is listed (http://www.postgresql.org/docs/8.3/static/functions-datetime.html),
and there doesn't seem to be anything in the context of the 3 occurrences of
"timeofday" that would imply it is deprecated.

-- todd




[BUGS] REQ: build src/backend/postgres w/o -lncurses or -lreadline

2001-05-11 Thread Todd R. Eigenschink

When Postgres is configured and decides to use libncurses and
libreadline, the backend gets linked against those two libs, too, even
though it really doesn't use them.  This is just extra wasted size and
(if they're shared libs) dependencies you don't need.

I made this ultra-cheesy change to src/backend/Makefile to take care
of that.

postgres: $(OBJS)
	$(CC) $(CFLAGS) $(LDFLAGS) $(export_dynamic) $^ $(LIBS) -o $@

to

postgres: $(OBJS)
	$(CC) $(CFLAGS) $(LDFLAGS) $(export_dynamic) $^ `echo $(LIBS) | sed -e 's/ -lncurses//' -e 's/ -lreadline//'` -o $@



I realize this is a sort of special-purpose request.  It's not a big
deal to build the whole thing, then just relink src/backend/postgres
without those libs.  It just feels dirty to have them there when they
aren't needed.


Todd


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [BUGS] REQ: build src/backend/postgres w/o -lncurses or -lreadline

2001-05-14 Thread Todd R. Eigenschink

Bruce Momjian writes:
>> I realize this is a sort of special-purpose request.  It's not a big
>> deal to build the whole thing, then just relink src/backend/postgres
>> without those libs.  It just feels dirty to have them there when they
>> aren't needed.
>> 
>
>It is my understanding that having them there causes no bloat in the
>binary.  No symbols are resolved in those libs.

Perhaps not, but the postgres binary then becomes dependent on those
libs being available (if they're shared).


Todd


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [BUGS] REQ: build src/backend/postgres w/o -lncurses or -lreadline

2001-05-16 Thread Todd R. Eigenschink

Peter Eisentraut writes:
>> I don't see much value in fixing this just for libreadline (and even
>> less in fixing it by disabling readline support entirely).  I think I
>> recall Peter E. having mentioned that there'd be a better answer in
>> autoconf 2.50 ... so I'm inclined not to touch the problem until that
>> comes out.
>
>I don't see how autoconf 2.50 ("any day now") would change anything here.
>We could make up new variables like READLINE_LIBS, but where do we stop?
>I don't have a good answer.

Short of a separate configure script, there probably isn't one.

If you feel like doing *something*, my patch to "sed" out the
libraries would be really simple.  Otherwise, it's not a big deal for
me to just rebuild src/backend/postgres with a slightly altered
command line.  Once the initial "make" finishes, it only takes about
15 seconds.

And of course, most people don't/won't care or notice.  The extra
unneeded dependencies just make me feel dirty. :-)


Todd





[BUGS] Mac OS X 10.1 build bug - paths with spaces

2001-11-19 Thread Murray Todd Williams

Operating System: Mac OS X 10.1.1 (Build 5M28)
PostgreSQL Version: CVS checkout as of Nov 17, 2001
Problem Category: Building

After running ./configure and then "make all" the build stopped with the 
message

cc -traditional-cpp -g -O2 -Wall -Wmissing-prototypes 
-Wmissing-declarations -I../../../src/include   -c -o analyze.o analyze.c
analyze.c:23: parser/parse.h: No such file or directory

I looked at the part of the build messages where parse.h is supposed to 
get built and I didn't see any problems:

make -C parser parse.h
bison -y -d  gram.y
mv -f y.tab.c ./gram.c
mv -f y.tab.h ./parse.h
prereqdir=`cd parser/ >/dev/null && pwd` && \
   cd ../../src/include/parser/ && rm -f parse.h && \
   ln -s $prereqdir/parse.h .

with no error messages.

The base directory of my build was /Volumes/SCSI Volume/devel/pgsql. I 
had a wild hunch that the space in the "SCSI Volume" part of my path was 
to blame, so I unpacked a clean copy of the source material to 
/usr/local/src/pgsql (no spaces in path name), redid the ./configure and 
make phase and all built fine.

Obviously this is hampering my development, but I figured I'd pass it 
along since you never know who's going to get hit unawares with this. 
(Especially those pesky Mac people who like spaces in their folder 
names.)
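The underlying failure mode, a build rule interpolating an unquoted path into a shell command, can be illustrated with a small sketch (the path is taken from the report above; the `ls` command is just a stand-in):

```python
import shlex

def build_command(srcdir):
    """Build a shell command both unsafely and safely for a path with spaces.

    Unquoted, '/Volumes/SCSI Volume/devel/pgsql' splits into two words and
    the command looks in the wrong place; shlex.quote() keeps the path as
    a single argument, which is what the Makefile rules would need to do.
    """
    unsafe = "ls " + srcdir                # breaks: the space splits the word
    safe = "ls " + shlex.quote(srcdir)     # stays one argument
    return unsafe, safe

unsafe, safe = build_command("/Volumes/SCSI Volume/devel/pgsql")
```

Splitting the two strings the way a shell would shows the difference: the unsafe form yields two bogus path fragments, while the quoted form round-trips the path intact.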

Cheers!

Murray Todd Williams





Re: [BUGS] BUG #8163: simultaneous nearly identical update queries execute extremely slowly

2013-05-15 Thread Todd A. Cook

On 05/15/13 13:27, tc...@blackducksoftware.com wrote:

The following bug has been logged on the website:

Bug reference:  8163
Logged by:  Todd Cook
Email address:  tc...@blackducksoftware.com
PostgreSQL version: 8.4.16
Operating system:   Fedora 14
Description:

When nearly identical update queries arrive simultaneously, the first one to
execute runs normally, but subsequent executions run _extremely_ slowly.
We've seen this behaviour in production, and the contrived test case below
reproduces the issue.


I've repeated the test below on a 9.1.9 installation, and it works fine there.
Each update finished in about 7 seconds.

-- todd



test=> select version() ;
                                  version
---------------------------------------------------------------------------
 PostgreSQL 8.4.16 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
 4.4.4 20100726 (Red Hat 4.4.4-13), 64-bit
(1 row)


To set up:

create table prof as select i as id, i::text col1, (i*2)::text col2 ,
(i*3)::text col3, i*2 col4, md5((i % 5999)::text) as hash, (i % 6000)::text
as hint, (i*4)::text col6, i*5 col7, i*6 col8 from
generate_series(1,3600) i ;
create table tree as select 'fixed16charstrng'::text as changeme, md5((i %
20)::text) as hash from generate_series(1,40) i ;
create index tree_hash_idx on tree(hash) ;

The problematic query run in isolation:

explain analyze update tree set changeme = 'changed' where hash in (select
hash from prof where hint = '2500') ;
                                  QUERY PLAN
------------------------------------------------------------------------------
  Nested Loop  (cost=198.75..1000104.44 rows=11583 width=39) (actual
time=6765.316..6871.167 rows=11998 loops=1)
->  HashAggregate  (cost=198.75..198.76 rows=1 width=33) (actual
time=6765.264..6768.259 rows=5999 loops=1)
  ->  Seq Scan on prof  (cost=0.00..184.15 rows=5840 width=33)
(actual time=1.351..6755.691 rows=6000 loops=1)
Filter: (hint = '2500'::text)
->  Index Scan using tree_hash_idx on tree  (cost=0.00..5.65 rows=2
width=39) (actual time=0.014..0.016 rows=2 loops=5999)
  Index Cond: (tree.hash = prof.hash)
  Total runtime: 7132.700 ms
(7 rows)

To exercise the problem (assuming a database named "test"):
psql -c "update tree set changeme = 'changed' where hash in (select hash
from prof where hint = '2500')" test &
psql -c "update tree set changeme = 'changed' where hash in (select hash
from prof where hint = '2501')" test &
psql -c "update tree set changeme = 'changed' where hash in (select hash
from prof where hint = '2502')" test &
psql -c "update tree set changeme = 'changed' where hash in (select hash
from prof where hint = '2503')" test &
psql -c "update tree set changeme = 'changed' where hash in (select hash
from prof where hint = '2504')" test &
psql -c "update tree set changeme = 'changed' where hash in (select hash
from prof where hint = '2505')" test &
psql -c "update tree set changeme = 'changed' where hash in (select hash
from prof where hint = '2506')" test &

One of the updates begins executing immediately, while the others block
waiting on the first (which is expected).  The first update finished in under
10 seconds, and another one started executing; however, this second one has
now been executing for 2 hours.

strace output from that backend is almost exclusively reads, with only a few
calls to lseek.  Attaching with gdb and interrupting a few times mostly gave
this backtrace:

#0  0x003b812d3490 in __read_nocancel () from /lib64/libc.so.6
#1  0x005cd0cd in FileRead ()
#2  0x005dc55d in mdread ()
#3  0x005ca315 in ReadBuffer_common ()
#4  0x005cac7f in ReadBufferExtended ()
#5  0x00460c8b in heapgetpage ()
#6  0x0046110a in heapgettup_pagemode ()
#7  0x00461b56 in heap_getnext ()
#8  0x0054ef18 in SeqNext ()
#9  0x005429ba in ExecScan ()
#10 0x0053b8a8 in ExecProcNode ()
#11 0x00547ac8 in ExecAgg ()
#12 0x0053b7b8 in ExecProcNode ()
#13 0x0054e031 in ExecNestLoop ()
#14 0x0053b818 in ExecProcNode ()
#15 0x0053827e in EvalPlanQualNext ()
#16 0x0053867b in EvalPlanQual ()
#17 0x00539afd in standard_ExecutorRun ()
#18 0x7f796347881b in pgss_ExecutorRun (queryDesc=0x1af53b0,
direction=ForwardScanDirection, count=0) at pg_stat_statements.c:516
#19 0x005e3ad1 in ProcessQuery ()
#20 0x005e3cd4 in PortalRunMulti ()
#21 0x005e4452 in Port

Re: [BUGS] BUG #8163: simultaneous nearly identical update queries execute extremely slowly

2013-05-15 Thread Todd A. Cook

On 05/15/13 16:10, Tom Lane wrote:

"Todd A. Cook"  writes:

On 05/15/13 13:27, tc...@blackducksoftware.com wrote:

When nearly identical update queries arrive simultaneously, the first one to
execute runs normally, but subsequent executions run _extremely_ slowly.
We've seen this behaviour in production, and the contrived test case below
reproduces the issue.



I've repeated the test below on a 9.1.9 installation, and it works fine there.


Given the reference to EvalPlanQual in your stack trace, I'm thinking
the explanation is this 9.0 fix:


Thanks for the explanation.  Is there any chance of that fix being backpatched
into 8.4?

-- todd



Author: Tom Lane 
Branch: master Release: REL9_0_BR [9f2ee8f28] 2009-10-26 02:26:45 +

 Re-implement EvalPlanQual processing to improve its performance and eliminate
 a lot of strange behaviors that occurred in join cases.  We now identify the
 "current" row for every joined relation in UPDATE, DELETE, and SELECT FOR
 UPDATE/SHARE queries.  If an EvalPlanQual recheck is necessary, we jam the
 appropriate row into each scan node in the rechecking plan, forcing it to emit
 only that one row.  The former behavior could rescan the whole of each joined
 relation for each recheck, which was terrible for performance, and what's much
 worse could result in duplicated output tuples.

 Also, the original implementation of EvalPlanQual could not re-use the recheck
 execution tree --- it had to go through a full executor init and shutdown for
 every row to be tested.  To avoid this overhead, I've associated a special
 runtime Param with each LockRows or ModifyTable plan node, and arranged to
 make every scan node below such a node depend on that Param.  Thus, by
 signaling a change in that Param, the EPQ machinery can just rescan the
 already-built test plan.

 This patch also adds a prohibition on set-returning functions in the
 targetlist of SELECT FOR UPDATE/SHARE.  This is needed to avoid the
 duplicate-output-tuple problem.  It seems fairly reasonable since the
 other restrictions on SELECT FOR UPDATE are meant to ensure that there
 is a unique correspondence between source tuples and result tuples,
 which an output SRF destroys as much as anything else does.

regards, tom lane





