[BUGS] bigint and indexes

2002-10-04 Thread Bill Schneider

Hello,

I've also come across the bigint/index bug where an index on
table(bigint_col) is ignored:

explain select * from table where bigint_col = 83

results in a "Seq Scan", while

explain select * from table where bigint_col = '83'
and explain select * from table where bigint_col = 83::bigint

both result in an "Index Scan."

This seems to be a well-known and documented issue.  Are there already plans
to fix this in an upcoming release?

Is the problem here in the optimizer itself, or in the parser?  I'm not an
expert on postgresql-internals, so I don't understand why type resolution
for numeric literals happens at a different stage or uses a different
process from type resolution of quoted string literals.  In other words, why
should the quoted '83' be converted to a bigint (or smallint, or whatever) to
match the other operand, while the unquoted 83 is hard-converted to an int4
and never gets promoted or demoted to match?  The behavior would be more
intuitive if the quoted '83' *didn't* match the index type and resulted in a
Seq Scan, just like the unquoted number.

-- Bill





[BUGS] 7.3B2 initdb fails

2002-10-04 Thread Nic Ferrier

I built 7.3B2 from CVS and ran initdb: it fails. Here's an example of
the console session:

demo >>>
$ initdb -D /usr/local/pgdata

The files belonging to this database system will be owned by user "nferrier".
This user must also own the server process.

The database cluster will be initialized with locale C.

creating directory /usr/local/pgdatatest... ok
creating directory /usr/local/pgdatatest/base... ok
creating directory /usr/local/pgdatatest/global... ok
creating directory /usr/local/pgdatatest/pg_xlog... ok
creating directory /usr/local/pgdatatest/pg_clog... ok
creating template1 database in /usr/local/pgdatatest/base/1... 
initdb failed.
Removing /usr/local/pgdatatest.

<<< end of console messages.


Running initdb with debugging turned on generates too large a trace to post
in full, but I think these are the relevant bits:




Running with debug mode on.

initdb variables:
  PGDATA=/usr/local/pgdatatest
  datadir=/usr/local/share/postgresql
  PGPATH=/usr/local/bin
  ENCODING=
  ENCODINGID=0
  POSTGRES_SUPERUSERNAME=nferrier
  POSTGRES_BKI=/usr/local/share/postgresql/postgres.bki
  POSTGRES_DESCR=/usr/local/share/postgresql/postgres.description
  POSTGRESQL_CONF_SAMPLE=/usr/local/share/postgresql/postgresql.conf.sample
  PG_HBA_SAMPLE=/usr/local/share/postgresql/pg_hba.conf.sample
  PG_IDENT_SAMPLE=/usr/local/share/postgresql/pg_ident.conf.sample
The files belonging to this database system will be owned by user "nferrier".
This user must also own the server process.

The database cluster will be initialized with locale C.

creating directory /usr/local/pgdatatest... ok
creating directory /usr/local/pgdatatest/base... ok
creating directory /usr/local/pgdatatest/global... ok
creating directory /usr/local/pgdatatest/pg_xlog... ok
creating directory /usr/local/pgdatatest/pg_clog... ok
creating template1 database in /usr/local/pgdatatest/base/1... DEBUG:  invoking IpcMemoryCreate(size=1179648)
LOG:  database system was shut down at 2002-10-02 23:12:53 BST
LOG:  checkpoint record is at 0/10
LOG:  redo record is at 0/10; undo record is at 0/10; shutdown TRUE
LOG:  next transaction id: 3; next oid: 16384
LOG:  database system is ready
DEBUG:  start transaction
DEBUG:  creating bootstrap relation pg_proc...
DEBUG:  column proname name
DEBUG:  column pronamespace oid
DEBUG:  column proowner int4
DEBUG:  column prolang oid
DEBUG:  column proisagg bool
DEBUG:  column prosecdef bool
DEBUG:  column proisstrict bool
DEBUG:  column proretset bool
DEBUG:  column provolatile char
DEBUG:  column pronargs int2
DEBUG:  column prorettype oid
DEBUG:  column proargtypes oidvector
DEBUG:  column prosrc text
DEBUG:  column probin bytea
DEBUG:  column proacl _aclitem
DEBUG:  commit transaction
DEBUG:  start transaction
.
.
.
DEBUG:  start transaction
DEBUG:  bootstrap relation created
DEBUG:  commit transaction
DEBUG:  start transaction
DEBUG:  inserting row with oid 1247...
DEBUG:  inserting column 0 value 'pg_type'
DEBUG:  Typ != NULL
DEBUG:   -> pg_type
DEBUG:  inserted
DEBUG:  inserting column 1 value '11'
DEBUG:  Typ != NULL
DEBUG:   -> 11
DEBUG:  inserted
DEBUG:  inserting column 2 value '71'
DEBUG:  Typ != NULL
DEBUG:   -> 71
DEBUG:  inserted
DEBUG:  inserting column 3 value '1'
DEBUG:  Typ != NULL
DEBUG:   -> 1
DEBUG:  inserted
DEBUG:  inserting column 4 value '0'
DEBUG:  Typ != NULL
DEBUG:   -> 0
DEBUG:  inserted
DEBUG:  inserting column 5 value '1247'
DEBUG:  Typ != NULL
DEBUG:   -> 1247
DEBUG:  inserted
DEBUG:  inserting column 6 value '0'
DEBUG:  Typ != NULL
DEBUG:   -> 0
DEBUG:  inserted
DEBUG:  inserting column 7 value '0'
DEBUG:  Typ != NULL
ERROR:  Bad float4 input format -- underflow
DEBUG:  proc_exit(1)
DEBUG:  shmem_exit(1)
DEBUG:  exit(1)

initdb failed.
Removing /usr/local/pgdatatest.
<<< end of debug.


Here's the standard information:
I'm using Debian GNU/Linux for PPC on an iMac.  7.2 builds and
runs initdb fine.


Nic





[BUGS] Passing character data to C function

2002-10-04 Thread Philip Compton

After days of researching this problem, all I can find is that nobody has done
this successfully and documented it where I could find it. I cannot determine
whether a bug or my own development setup is to blame.

I would like to pass character data from a PostgreSQL 7.1.3 user-defined
function into a C function.

This is my setup:

CREATE FUNCTION pc_fn(char(50), char(50), char(255)) RETURNS Boolean 
AS '/u/students/pcompton/comp442/ass4/pc_fn.so' LANGUAGE 'C'; 

// C function prototypes tried:
int pc_fn(char* a1, char* a2, char* a3) {...}
or
int pc_fn(char a1[50], char a2[50], char a3[255]) {...}

Neither works. The function _is_ called and I can _log output_ as it executes,
but no character data shows up in the arguments.
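
The closest thing I have found is that, under the old-style ("version 0") C
calling convention, a char(n) argument is passed as a pointer to a
variable-length varlena structure rather than a NUL-terminated C string, so
presumably it has to be read with VARDATA()/VARSIZE().  An untested sketch of
what I mean (the extra headers and the NOTICE are only for illustration):

#include "postgres.h"
#include "utils/palloc.h"   /* palloc (may already come in via postgres.h) */
#include "utils/elog.h"     /* elog   (ditto) */
#include <string.h>

/* Old-style calling convention: a char(n) argument arrives as a pointer to
 * a varlena structure, NOT as a NUL-terminated C string. */
bool
pc_fn(BpChar *a1, BpChar *a2, BpChar *a3)
{
    int   len  = VARSIZE(a1) - VARHDRSZ;    /* length of the data part */
    char *copy = (char *) palloc(len + 1);  /* make a NUL-terminated copy */

    memcpy(copy, VARDATA(a1), len);
    copy[len] = '\0';

    elog(NOTICE, "a1 = '%s' (%d bytes)", copy, len);

    return true;
}

Is that the right approach, or should I be using the newer PG_FUNCTION_ARGS
(version 1) interface instead?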

Thank you.
Phil Compton




[BUGS]

2002-10-04 Thread Venkatesh Phadnavis



This is with reference to the earlier email about the error while compiling
PostgreSQL; I used the following configure parameters.

Additional information

Thanks again,

-V



#!/bin/bash

./configure --enable-recode --enable-multibyte --with-CXX --with-perl \
    --enable-odbc --with-java --with-openssl --with-pam \
    --enable-syslog --enable-debug --enable-cassert --enable-depend






[BUGS] need help

2002-10-04 Thread muhammad suhail
I have installed Cygwin with PostgreSQL 7.2.1-2 on Windows 2000, with the user name muhammad.
After that I installed pgODBC and pgAdmin2.
I initialized a new database (testdb1) and then ran the postmaster; it gives me a very strange error (please find the attachment).
Yesterday it was working properly; this morning I changed some pg_hba.conf and postgresql.conf parameters.
I have already done a fresh installation, but it still gives me the same problem.
Please write me what the problem could be.
suhail
 
may you live long




[BUGS]

2002-10-04 Thread Venkatesh Phadnavis


I had two separate problems while compiling PostgreSQL on my machine; the
information is attached.  Once, Java refused to recognize the symbol
javax.transaction.RolledbackException, and the second time some kind of VM
error occurred.  I am running Red Hat Linux 7.3 (2.4.8-10), with
j2sdk1.4.1.

Please let me know.

Thanks for your efforts and help.

-V




make -C doc all
make[1]: Entering directory `/root/tgz/postgresql-7.2.2/doc'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/root/tgz/postgresql-7.2.2/doc'
make -C src all
make[1]: Entering directory `/root/tgz/postgresql-7.2.2/src'
make -C backend all
make[2]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend'
make -C access all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/access'
make -C common SUBSYS.o
make[4]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/access/common'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/access/common'
make -C gist SUBSYS.o
make[4]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/access/gist'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/access/gist'
make -C hash SUBSYS.o
make[4]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/access/hash'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/access/hash'
make -C heap SUBSYS.o
make[4]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/access/heap'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/access/heap'
make -C index SUBSYS.o
make[4]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/access/index'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/access/index'
make -C nbtree SUBSYS.o
make[4]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/access/nbtree'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/access/nbtree'
make -C rtree SUBSYS.o
make[4]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/access/rtree'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/access/rtree'
make -C transam SUBSYS.o
make[4]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/access/transam'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/access/transam'
make[3]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/access'
make -C bootstrap all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/bootstrap'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/bootstrap'
make -C catalog all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/catalog'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/catalog'
make -C parser all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/parser'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/parser'
make -C commands all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/commands'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/commands'
make -C executor all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/executor'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/executor'
make -C lib all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/lib'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/lib'
make -C libpq all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/libpq'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/libpq'
make -C main all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/main'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/main'
make -C nodes all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/nodes'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/nodes'
make -C optimizer all
make[3]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/optimizer'
make -C geqo SUBSYS.o
make[4]: Entering directory `/root/tgz/postgresql-7.2.2/src/backend/optimizer/geqo'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/root/tgz/postgresql-7.2.2/src/backend/optimizer/geqo'
make -C path SUBSYS.o
make[4]: Entering directory `/root/tgz/postgr

[BUGS] unable to build on openbsd-sparc

2002-10-04 Thread Frank Van Damme


Hello,

I have tried for days to get PostgreSQL working on OpenBSD on an old sparc32
box which I use to learn OpenBSD and PostgreSQL. I'll try to describe the
steps I went through as precisely as possible.

First, I bought a SPARCstation 2, an old box in good condition; I ran Debian
GNU/Linux on it for a while and now I'm trying OpenBSD 3.1. The machine has a
40 MHz sparc32 CPU, two SCSI disks, a cg6 framebuffer and 64 MB of
memory.

These are the packages ("ports") I installed on it:

bash-2.05a        GNU Bourne Again Shell
libiconv-1.7      character set conversion library
gettext-0.10.40   GNU gettext
vim-6.1-no_x11    vi clone, many additional features
samba-2.2.3a      SMB and CIFS client and server for UNIX
bzip2-1.0.2       block-sorting file compressor, unencumbered
squid-2.5.PRE5    WWW and FTP proxy cache and accelerator
bind-9.2.0        Berkeley Internet Name Daemon
iodbc-2.50.3      ODBC 2.x driver manager
bison-1.32        GNU parser generator
xfstt-1.1         TrueType font server for X11
gmake-3.79.1      GNU make
tcl-8.3.4         Tool Command Language
tcl-8.0.5         Tool Command Language

I didn't have any problems getting other things working, and the system is
stable, so I think a hardware failure is highly unlikely.

Then I downloaded the OpenBSD ports tree, which, as you may know, is merely a
collection of Makefiles that help you download and compile a program's source
tree, make a package out of it, and install that. OpenBSD 3.1 has a port for
7.1.3. The "make && make install" went fine and the package installed with no
errors.

Then, according to the OpenBSD README files, I added a new user "postgresql":

# useradd -c "PostgreSQL Admin User" -g =uid -m -d /var/postgresql \
-s /bin/sh postgresql

Then, upon creating a new database, the process core dumps (where? I cannot
find the core file anywhere!). I also asked it to output extra debugging info.

# su - postgresql
# initdb -D /var/postgresql/data -d 2>initdb-errors >initdb-output

These two files are attached. I also tried a pre-built binary package of
PostgreSQL 7.1.3, which gave exactly the same error.



7.1.3 failed to initialise, so I checked whether it really was the latest
version. I downloaded 7.2.2 from your website and tried to build that.

To make a long story short, the build failed. I extracted the tarball
somewhere and ran the following commands:

# ./configure --prefix=/usr/local/postgresql-7.2.2 --enable-multibyte  
--enable-odbc --with-unixodbc >configure-output 2>configure-errors 

This went OK and the errors file was empty.  Then:

# gmake >gmake-output 2>gmake-errors

This took about an hour and a half; the results are attached.
This writer has writer's block and hopes the PostgreSQL crew can finish the
story. A final note: I also tried installing OpenBSD 3.1 for x86 and using
the port (7.1.3). Building, installing and initialising went without a hitch,
so it must be something sparc32-specific.




-- 
Frank Van Damme
homepage:   www.student.kuleuven.ac.be/~m9917684
jabber (=IM):   [EMAIL PROTECTED]




errorlogs.tar.bz2
Description: application/tbz





Re: [BUGS] 7.3B2 initdb fails

2002-10-04 Thread Tom Lane

Nic Ferrier <[EMAIL PROTECTED]> writes:
> DEBUG:  inserting column 7 value '0'
> DEBUG:  Typ != NULL
> ERROR:  Bad float4 input format -- underflow

Hm.  This is evidently failing this test:

static void
CheckFloat4Val(double val)
{
    if (fabs(val) > FLOAT4_MAX)
        elog(ERROR, "Bad float4 input format -- overflow");
    if (val != 0.0 && fabs(val) < FLOAT4_MIN)
        elog(ERROR, "Bad float4 input format -- underflow");
}

which is a tad surprising for an input of '0'.  I think perhaps you
have a compiler bug there.  Want to put a breakpoint at CheckFloat4Val
and see what's being passed in?
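Something like the following should do it; everything after "run" is a
placeholder for the exact bootstrap backend command (and its stdin
redirection) that the initdb script uses, which you can copy out of the
debug trace.  print and backtrace are issued once the breakpoint fires:

$ gdb /usr/local/bin/postgres
(gdb) break CheckFloat4Val
(gdb) run <bootstrap arguments copied from the initdb -d trace>
(gdb) print val
(gdb) backtrace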

> I'm using Debian GNU-Linux for PPC on an iMac. 7.2 builds and
> initdb's ok.

FWIW, I've built recent sources on a PowerBook without problems,
under both LinuxPPC and OS X.

regards, tom lane




Re: [BUGS] bigint and indexes

2002-10-04 Thread Tom Lane

"Bill Schneider" <[EMAIL PROTECTED]> writes:
> This seems to be a well-known and documented issue.  Are there already plans
> to fix this in an upcoming release?

Yes.

> Is the problem here in the optimizer itself, or in the parser?

The problem is that the query is interpreted as "int8col int84eq int4const"
and int84eq is not one of the operators in the index's opclass.  The
planned fix is to get rid of int84eq (and most other cross-datatype
operators) so that the parser is forced to select plain int8eq and
up-convert the literal to int8.
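
(The competing operators are visible in pg_operator, if you're curious; an
illustrative query:

SELECT o.oprcode AS proc, lt.typname AS lefttype, rt.typname AS righttype
FROM pg_operator o, pg_type lt, pg_type rt
WHERE o.oprname = '='
  AND o.oprleft = lt.oid
  AND o.oprright = rt.oid
  AND lt.typname = 'int8';

int84eq shows up there next to int8eq, but only int8eq belongs to the int8
index opclass.)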

See past discussions of numeric coercion rules in pgsql-hackers archives
if you want to know more about why this is a difficult issue.

regards, tom lane




Re: [BUGS] Bug in Function-Transactions?

2002-10-04 Thread Tom Lane

"Josh Berkus" <[EMAIL PROTECTED]> writes:
> Postgres terminated my back-end connection to the server when it
> reached the VACUUM statement.

> Next, I reconnected. I was quite surprised to discover that Postgres
> had *not* rolled back the changes made by the function before it
> crashed.

Yeah.  The problem here is precisely that VACUUM does internal commits
--- so it committed your function's earlier changes too.  When you
returned from the VACUUM, the function's execution context was gone
as a byproduct of post-commit cleanup.  Oops.  VACUUM is disallowed
inside functions as of 7.3 to prevent this problem.

I don't think you need to be too worried about database corruption
as a result of this experiment, fortunately.

regards, tom lane




[BUGS] Bug in Function-Transactions?

2002-10-04 Thread Josh Berkus

Folks,

I just encountered a very interesting bug in version 7.2.1.

As an experiment, I tried running a VACUUM ANALYZE on a table inside a
function after the function had made a large number of updates to that
table.  I assumed that this wouldn't work, but I thought I'd give it a
try.
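
The function was roughly this shape (the table and column names here are made
up; the real one just did a big UPDATE and then tried to vacuum):

CREATE FUNCTION update_and_vacuum() RETURNS integer AS '
BEGIN
    UPDATE some_table SET updated_at = now();
    VACUUM ANALYZE some_table;
    RETURN 1;
END;
' LANGUAGE 'plpgsql';

SELECT update_and_vacuum();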

It did not work.  What it did was interesting.

Postgres terminated my back-end connection to the server when it
reached the VACUUM statement.

Next, I reconnected. I was quite surprised to discover that Postgres
had *not* rolled back the changes made by the function before it
crashed.   

I'm testing to see if I can reproduce this issue on 7.3b1.   I'll
e-mail you with a test database if I can.

-Josh Berkus




Re: [BUGS] Bug in Function-Transactions?

2002-10-04 Thread Bruce Momjian

Josh Berkus wrote:
> Folks,
> 
> I just encountered a very interesting bug in ver 7.2.1.
> 
> As an experiment, I tried running a VACUUM ANALYZE on a table inside a
> function after the function had made a large number of updates to that
> table.   I assumed that this wouldn't work, but I thought I'd give it a
> try.
> 
> It did not work.  What it did was interesting.
> 
> Postgres terminated my back-end connection to the server when it
> reached the VACUUM statement.
> 
> Next, I reconnected. I was quite surprised to discover that Postgres
> had *not* rolled back the changes made by the function before it
> crashed.   
> 
> I'm testing to see if I can reproduce this issue on 7.3b1.   I'll
> e-mail you with a test database if I can.

You will not be able to reproduce it.  7.3 disabled VACUUM inside
functions for that exact reason.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [BUGS] Bug #789: Transaction Archival Logging -- Hot Backups

2002-10-04 Thread Bruce Momjian


We will have point-in-time recovery in 7.4.  We are just releasing the
7.3 beta now.
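
Until then, pg_dump is the way to get a consistent backup of a live database;
a minimal sketch (the database name and paths are made up):

# take a consistent dump without blocking readers or writers
pg_dump mydb > /backup/mydb-20021004.sql

# restore into a freshly created database
createdb mydb
psql mydb < /backup/mydb-20021004.sql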

---

Jon Watte wrote:
> 
> Again, thank you for your reply. I am copying the bugs list in the
> hope that some ray of insight will hit.
> 
> I looked at this a little more, and it seems pg_dump does not actually
> do what I need. If the database goes down and I lose my main data store,
> then I will lose all transactions back to the time I did the pg_dump.
> 
> Other databases (i.e. Oracle) solve this by retaining their transaction
> journal for some predetermined time (at least as long as the interval
> between data store backups). Typically, you will put this journal on
> some physically separate storage with a physically separate controller
> (maybe even on tape, or on a remote site). Then, when you lose your
> data store, you can restore the data store from back-up, and then re-
> play your archive log, and avoid losing any committed transactions. If
> you lose your archive log store, the database is still intact, and you
> should immediately failover to a new archive store and start a full
> data store backup. If you lose both, then you HAVE to accept the fact
> that you will lose previously committed transactions, but the likelihood
> of this actually happening with the right physical set-up is very very
> slim (as opposed to the likelihood of just one part going down, which
> is almost inevitable).
> 
> For reference, this one lacking feature is preventing the company I work
> at from using PostgreSQL, because we have operational requirements that
> need this "fast path" recovery in the common case. Unfortunately, we'd
> rather pay Oracle lots of money than lose time having to implement it in
> the PostgreSQL code :-(
> 
> Cheers,
> 
>   / h+
> 
> 
> > -----Original Message-----
> > From: Bruce Momjian [mailto:[EMAIL PROTECTED]]
> > Sent: Saturday, September 28, 2002 10:19 PM
> > To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> > Subject: Re: [BUGS] Bug #789: Transaction Archival Logging -- Hot
> > Backups
> >
> >
> >
> > I see in the pg_dump manual page:
> >
> >pg_dump  makes  consistent backups even if the database is
> >being used concurrently.  pg_dump  does  not  block  other
> >users accessing the database (readers or writers).
> >
> >
> > --
> > -
> >
> > [EMAIL PROTECTED] wrote:
> > > Jon Watte ([EMAIL PROTECTED]) reports a bug
> > > with a severity of 2 The lower the number the more severe it
> > > is.
> > >
> > > Short Description Transaction Archival Logging -- Hot Backups
> > >
> > > Long Description I see no mention of transaction archival logging
> > > in the documentation.
> > >
> > > This means that, even though you support correct transaction
> > > rollback semantics, to back up the database in a consistent
> > > manner, I have to take it offline and backup all the files.
> > >
> > > Either I'm missing something (and I did a documentation, FAQ
> > > and Todo search) or it's not currently possibly to actually put
> > > Postgres into a 24/7 production environment?
> > >
> > > Sample Code
> > >
> > >
> > > No file was uploaded with this report
> > >
> > >
> >
> > --
> >   Bruce Momjian|  http://candle.pha.pa.us
> >   [EMAIL PROTECTED]   |  (610) 359-1001
> >   +  If your life is a hard drive, |  13 Roberts Road
> >   +  Christ can be your backup.|  Newtown Square,
> > Pennsylvania 19073
> >
> 
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
