pg_dumpall is failing with this error:
pg_dump: query returned more than one (2) pg_database entry for database
"pedcard"
pg_dumpall: pg_dump failed on database "pedcard", exiting
This is 8.0.1 on OS X; where do I start on straightening this out? (There is
only 1 postmaster running, and it seems
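As a starting point (this is only a sketch of the obvious catalog check, not something from the thread), the duplicate that pg_dump is complaining about should be visible directly in pg_database:

-- there should be exactly one row per database name
SELECT oid, datname, datdba, dattablespace
FROM pg_database
WHERE datname = 'pedcard';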
On Mon, Mar 28, 2005 at 12:55:52AM -0600, Joseph M. Day wrote:
> > From the "Database Physical Storage" chapter in the 8.0 documentation:
> >
> > When a table or index exceeds 1 GB, it is divided into gigabyte-sized
> > segments. The first segment's file name is the same as the
> > filenode; subsequent segments are named filenode.1, filenode.2, etc.
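To illustrate the segmenting described above (the table name below is made up), the filenode can be looked up in pg_class and matched against the files under the data directory:

-- on-disk file name for a given table
SELECT relfilenode FROM pg_class WHERE relname = 'bigtable';
-- segments then appear as <filenode>, <filenode>.1, <filenode>.2, ...
-- under $PGDATA/base/<database oid>/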
On Mon, Mar 28, 2005 at 12:29:13AM -0600, Joseph M. Day wrote:
> Can anyone recommend a filesystem to use for Postgres. I currently have
> one table that has 80 mil rows, and will take roughly 8GB of space
> without indexing. Obviously EXT3 will die for a file size this large.
From the "Database
Can anyone recommend
a filesystem to use for Postgres. I currently have one table that has 80 mil
rows, and will take roughly 8GB of space without indexing. Obviously EXT3 will
die for a file size this large. Any suggestions will be
helpful.
Thanks,
Joe
--
"Mike Mascari" writes
> "Consider parallel processing a single query" should be moved out from
> under Miscellaneous on the TODO list and re-categorized as the formerly
> existent URGENT feature...
>
Yes, inter-/intra-operation parallelism in PQO could be an obvious winner in some
situations. For example, in
PostgreSQL has made substantial progress over the years and is
approaching enterprise-quality feature sets. However, one of the major
stopping points for enterprise deployment is lack of parallel query
support. DB2, Oracle, even SQL Server Enterprise Edition all have
parallel query support. A r
On Fri, Mar 18, 2005 at 10:12:05PM -0700, Michael Fuhr wrote:
>
> I just submitted a small patch to convert CRLF => LF, CR => LF.
This patch is in 8.0.2beta1, so PL/Python users might want to test
it before 8.0.2 is released. See the recent "8.0.2 Beta Available"
announcement:
http://archives.p
Tom Lane wrote:
=# select distinct prolang from pg_proc;
 prolang
---------
      12
      13
      14
   17813
   63209
   63212
   63213
   63214
(8 rows)
That looks fine ...
=# select * from pg_language ;
Try "select oid,lanname from pg_language".
regards, tom lane
Sorry, I see that I forg
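A sketch of the cross-check being asked for here (plain catalog queries, nothing specific to this database): any prolang value in pg_proc with no matching pg_language row points at the broken language entry.

SELECT oid, lanname FROM pg_language;

SELECT DISTINCT p.prolang
FROM pg_proc p
LEFT JOIN pg_language l ON l.oid = p.prolang
WHERE l.oid IS NULL;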
"Stephan Szabo" <[EMAIL PROTECTED]> writes
>
> Well, that's not the foreign key necessarily. I don't have a machine to
> test on at the moment (machine currently dead), but I think the same
> happens without a foreign key constraint due to the unique/primary key
> constraint on a.i.
I see. That's
On Sun, Mar 27, 2005 at 07:28:21PM -0500, Kyrill Alyoshin wrote:
>
> I cannot get AFTER INSERT (or UPDATE for that matter) triggers to work.
> The same code works perfectly fine for BEFORE triggers.
You're trying to modify the record but it's too late in an AFTER
trigger. See the "Triggers" ch
Kyrill Alyoshin <[EMAIL PROTECTED]> writes:
> 1. MY FUNCTIONS
> CREATE OR REPLACE FUNCTION insert_stamp() RETURNS TRIGGER AS
> $audit_insert$
> BEGIN
> NEW.created_ts := 'now';
> NEW.updated_ts := 'now';
> RETURN NEW;
> END;
> $audit_insert$
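For reference, a complete version of this pattern written as a BEFORE trigger, which is what the earlier reply points to (the table and trigger names are invented; only a BEFORE trigger can change NEW before the row is stored):

CREATE OR REPLACE FUNCTION insert_stamp() RETURNS trigger AS $audit_insert$
BEGIN
    -- stamp the row before it is stored
    NEW.created_ts := now();
    NEW.updated_ts := now();
    RETURN NEW;
END;
$audit_insert$ LANGUAGE plpgsql;

CREATE TRIGGER stamp_rows
    BEFORE INSERT ON some_table
    FOR EACH ROW EXECUTE PROCEDURE insert_stamp();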
Bricklen Anderson <[EMAIL PROTECTED]> writes:
> =# select distinct prolang from pg_proc;
>  prolang
> ---------
>       12
>       13
>       14
>    17813
>    63209
>    63212
>    63213
>    63214
> (8 rows)
That looks fine ...
> =# select * from pg_language ;
Try "select oid,lanname
On Sun, Mar 27, 2005 at 06:02:25PM -0600, Guy Rouillier wrote:
> With the current implementation, it appears I need to either (1) always
> commit after every inserted row, or (2) single thread my entire insert
> logic. Neither of these two alternatives is very desirable.
I think a usual workarou
I'm trying to move over 50 tables (several over 500MB each) from a 7.4.5
database to 8.0.1 on a regular basis during system testing. (The 8.0.1
system will become the production system soon, probably next month.)
I'd like to have the data table and its indexes built in separate tablespaces
on
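For what it's worth, a sketch of the 8.0 tablespace syntax involved (directories and names below are invented; the directories must already exist and be owned by the postgres user):

CREATE TABLESPACE data_ts  LOCATION '/mnt/disk1/pg_data';
CREATE TABLESPACE index_ts LOCATION '/mnt/disk2/pg_index';

CREATE TABLE big_table (id integer, payload text) TABLESPACE data_ts;
CREATE INDEX big_table_id_idx ON big_table (id) TABLESPACE index_ts;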
Hi guys,
I cannot get AFTER INSERT (or UPDATE for that matter) triggers to work. The same code works perfectly fine for BEFORE triggers.
I am almost ready to think that this is a bug. Just want to run it by you, guys. OK, here it is:
1. MY FUNCTIONS
CREATE OR REPLACE FUNCTION insert_stamp() RE
Michael Fuhr wrote:
> On Sun, Mar 27, 2005 at 12:54:28AM -0600, Guy Rouillier wrote:
>> I'm getting the following in the server log:
>>
>> 2005-03-27 06:04:21 GMT estat DETAIL: Process 20928 waits for
>> ShareLock on transaction 7751823; blocked by process 20929.
>> Process 20929 waits for S
Tom Lane <[EMAIL PROTECTED]> wrote:
> Bill Moran <[EMAIL PROTECTED]> writes:
> > Let's take the following fictional scenario:
>
> > BEGIN;
> > INSERT INTO table1 VALUES ('somestring');
> > INSERT INTO table1 VALUES ('anotherstring');
> > SELECT user_defined_function();
> > COMMIT;
>
> > In this
Mark Greenbank wrote:
> Hi,
>
> I'm interested in deploying PostgreSQL in a production application
> but I'd like to know if the following features are available:
> - dblinks
Yes, see the dblink contrib module.
> - partition tables
Not declaratively. This topic is discussed very frequently; s
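A sketch of the dblink usage being referred to (the connection string and column list are made up; contrib/dblink has to be installed into the database first):

SELECT *
FROM dblink('dbname=otherdb host=otherhost',
            'SELECT id, name FROM remote_table')
     AS t(id integer, name text);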
Tom Lane wrote:
Bricklen Anderson <[EMAIL PROTECTED]> writes:
Once I recompile the function, I no longer get that message. Is there
anything else that I can check or do to make this stop happening? Or is
this a sign of things to come (possible corruption, etc?)
Well, the original error sounds li
Hi,
I'm interested in deploying PostgreSQL in a production application but
I'd like to know if the following features are available:
- dblinks
- partition tables
If they are not, are they planned for a (near) future release?
Thanks,
Mark
> $DB->{AutoCommit} = 0 || die...
As someone pointed out, this always dies. In general, you don't
need (or want) to test the results of setting a variable. Also be
aware that in this case you probably want "or" and not "||" - the former
tests the res
On Sun, Mar 27, 2005 at 06:59:06PM +0100, Julian Scarfe wrote:
> I've got a database (7.4) whose system tables have been long neglected.
> Instead of the 100 or so pages I'd expect for 4000 rows after VACUUM, I've
> got 24,000 pages and a mere 1.4 million unused item pointers.
>
> If it were an
I've got a database (7.4) whose system tables have been long neglected.
Instead of the 100 or so pages I'd expect for 4000 rows after VACUUM, I've
got 24,000 pages and a mere 1.4 million unused item pointers.
If it were an ordinary table, I'd CLUSTER it, as from experience it would be
vastly q
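To put numbers on the catalog bloat and clean it up, something like the following is the usual approach (generic queries, nothing specific to Julian's database; since CLUSTER isn't an option for the catalogs, VACUUM FULL plus REINDEX is the usual route, and VACUUM FULL takes an exclusive lock):

SELECT relname, relpages, reltuples
FROM pg_class
WHERE relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'pg_catalog')
  AND relkind = 'r'
ORDER BY relpages DESC
LIMIT 10;

VACUUM FULL VERBOSE pg_attribute;
-- indexes may need a REINDEX as well; see the REINDEX reference page
-- for the caveats about system catalogs
REINDEX TABLE pg_attribute;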
I'm trying to get a handle on how an app I'm looking to roll out is
going to impact the server I'm connecting to, and what sort of
capacity planning is going to be needed to make it all work relatively
well.
I'm looking at around 250-300 simultaneous users, nearly all of them
doing interactive w
On Wed, Mar 23, 2005 at 20:47:36 +0200,
Andrus <[EMAIL PROTECTED]> wrote:
>
> I thought about this.
>
> 1. It seems that user prefer to see separate numbers for each sequence.
>
> First invoice has number 1, second invoice has number 2
This suggests that invoices for different categories ca
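A sketch of the one-sequence-per-category scheme being discussed (names invented; note that sequences can still leave gaps when a transaction rolls back):

CREATE SEQUENCE invoice_seq_domestic;
CREATE SEQUENCE invoice_seq_export;

CREATE TABLE invoice (category text, invoice_no bigint, amount numeric);

INSERT INTO invoice (category, invoice_no, amount)
VALUES ('domestic', nextval('invoice_seq_domestic'), 100.00);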
On Mon, Mar 21, 2005 at 08:04:01 +0100,
Szmutku Zoltán <[EMAIL PROTECTED]> wrote:
> Hi everybody ,
>
> I tried using Postgres, but I have some problems.
> I created a constraint (R1 >= 0), and afterwards connected to the server from
> VFP via ODBC.
> In the client program I turn on transactions. (
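The constraint described reads like a plain column CHECK; a generic form, with guessed table and column names:

ALTER TABLE some_table
    ADD CONSTRAINT r1_nonnegative CHECK (r1 >= 0);

-- note that once a statement fails inside a transaction, PostgreSQL aborts
-- the whole transaction: it must be rolled back before further statements
-- are accepted, which often surprises ODBC clients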
I only wondered because I had a situation recently where I had to create
a trigger based on an event in an application (don't want to initiate
the trigger processing until other stuff has happened in the
environment), and if I'd had the CREATE OR REPLACE I could have avoided
the step of checking if
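For context, 8.0 has CREATE OR REPLACE FUNCTION but nothing equivalent for triggers, so the existence check mentioned above usually looks something like this (trigger and table names are illustrative):

SELECT 1
FROM pg_trigger t
JOIN pg_class c ON c.oid = t.tgrelid
WHERE t.tgname = 'stamp_rows' AND c.relname = 'some_table';

-- or simply drop and recreate inside one transaction
-- (DROP TRIGGER errors out if the trigger does not exist)
BEGIN;
DROP TRIGGER stamp_rows ON some_table;
CREATE TRIGGER stamp_rows BEFORE INSERT ON some_table
    FOR EACH ROW EXECUTE PROCEDURE insert_stamp();
COMMIT;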
it's odd but the list is delaying messages occasionally for up to 4 hours.
message id: [EMAIL PROTECTED]
arrived 4:30 a.m. pacific time
message id : [EMAIL PROTECTED]
arrived at 2:30 a.m. pacific time.
yet the second message is a reply to the first message, and is timestamped
almost 2 hours later in s
On Sun, 2005-03-27 at 00:31 -0500, Madison Kelly wrote:
>What I thought would work was:
>
> $DB->begin_work() || die...
> # a lot of transactions
> $DB->commit() || die...
>
maybe a more complete testcase would be in order.
[EMAIL PROTECTED]:~/test $ cat trans.pl
use DBI;
our $dbh = DBI->c
On Sun, 27 Mar 2005, Qingqing Zhou wrote:
>
> "Michael Fuhr" <[EMAIL PROTECTED]> writes
> > To make sure the referenced key can't change until the transaction
> > completes and the referencing row becomes visible to other transactions
> > (or is rolled back) -- otherwise other transactions could c
"Michael Fuhr" <[EMAIL PROTECTED]> writes
> To make sure the referenced key can't change until the transaction
> completes and the referencing row becomes visible to other transactions
> (or is rolled back) -- otherwise other transactions could change
> or delete the referenced key and not know th
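A minimal two-session illustration of the blocking being described (table names invented; in 8.0 the foreign-key check takes a row lock on the referenced row until the inserting transaction ends):

CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child  (parent_id integer REFERENCES parent);
INSERT INTO parent VALUES (1);

-- session 1
BEGIN;
INSERT INTO child VALUES (1);        -- locks parent row 1, not yet committed

-- session 2: blocks until session 1 commits or rolls back
UPDATE parent SET id = 2 WHERE id = 1;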
On Sun, Mar 27, 2005 at 12:54:28AM -0600, Guy Rouillier wrote:
> I'm getting the following in the server log:
>
> 2005-03-27 06:04:21 GMT estat DETAIL: Process 20928 waits for ShareLock
> on transaction 7751823; blocked by process 20929.
> Process 20929 waits for ShareLock on transaction 77