Gary Fu <[EMAIL PROTECTED]> writes:
> My question now is why those temporary schemas won't be cleaned
> after I restart the db?
Just leave them alone and you'll be fine. These tools actually have
had most of the bugs worked out of them ;-) ... if you think pg_dump is
omitting something, you are
Gary Fu wrote:
I tried to use pg_dump to restore (sync) a database, but I noticed that
the system table pg_namespace was not synced.
If you restore a database, entries in pg_namespace will be created if
the dump contains any CREATE SCHEMA statements, i.e. if there are
schemas in your original
I have a table of organizations that has a many-to-many relationship
with itself via another table called relationships. The relationships
table has a serial id primary key and parent_id and child_id integer
fields. The organizations table has a couple thousand records and the
maximum depth is
Hi Tim,
Off the top of my head, from somewhat left field: what about using
filesystems to manage this sort of effect?
Would "real" tables in a tablespace defined on a ramdisk meet this need? So the
functionality/accessibility of a
physical table is provided, along with the performance of a filesystem act
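That idea can be sketched like this (a rough sketch only; the mount point and names are made up, and anything on the ramdisk vanishes on reboot, so it only suits disposable data):

```sql
-- Create a tablespace on an already-mounted ramdisk (path is a placeholder).
CREATE TABLESPACE ram_space LOCATION '/mnt/ramdisk/pgdata';

-- An ordinary table stored there behaves like any real table,
-- but its storage lives on the RAM-backed filesystem.
CREATE TABLE scratch_data (
    id      serial PRIMARY KEY,
    payload text
) TABLESPACE ram_space;
```

Unlike TEMP tables, such a table is visible to all sessions, which was the original requirement.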
On Fri, Jun 6, 2008 at 7:58 AM, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> On Tue, May 27, 2008 at 9:24 AM, A B <[EMAIL PROTECTED]> wrote:
> > Whenever I use copy-paste to run code in a terminal window that is
> > running psql, and the code contains a row like
> >
> > IF FOUND THEN
> >
> > then I
Ken Winter wrote:
I understand from
http://www.postgresql.org/docs/8.0/static/datatype-money.html that the
“money” data type is deprecated.
Money is no longer deprecated in newer releases (specifically 8.3),
although I do think it would be wise to push it to numeric.
I think the way to do
I understand from
http://www.postgresql.org/docs/8.0/static/datatype-money.html that the
"money" data type is deprecated.
So I want to convert the data from my existing "money" columns into new
un-deprecated columns, e.g. with type "decimal(10,2)". But every SQL
command I try tells me I can'
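One hedged way around that (a sketch only; the table and column names are placeholders, it assumes 8.1 or later for regexp_replace, and a locale that formats money like '$1,234.56') is to convert via the text form of the value, stripping the currency formatting:

```sql
-- Convert a money column in place via its text representation.
ALTER TABLE accounts
    ALTER COLUMN amount TYPE numeric(10,2)
    USING regexp_replace(amount::text, '[$,]', '', 'g')::numeric;
```

ALTER TABLE ... TYPE ... USING rewrites the column in one step; spot-check the result against a few known rows before dropping any backup copy.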
On Tue, May 27, 2008 at 9:24 AM, A B <[EMAIL PROTECTED]> wrote:
> Whenever I use copy-paste to run code in a terminal window that is
> running psql, and the code contains a row like
>
> IF FOUND THEN
>
> then I get the words
>
> ABORT CHECKPOINT COMMIT DECLARE EXECUTE
[...]
As
Using postgresql 8.3 on windows 2003 server. I keep seeing this message in
my system log. Checking the times, it seems to coincide with a log
rollover each time, almost as though the db were trying to log something at
precisely the same time as it is closing access to the old file, and before
op
Dan Joo wrote:
db=pg.connect('aqdev','localhost',5432,None,None,'postgres',None)
From the commandline the connection works great, but from a
cgi-script it barfs with the following message:
*InternalError*: could not create socket: Permission denied
My (obvious, granted) guess is that you're r
Just solved it.
For others, here is the solution.
setsebool -P httpd_can_network_connect_db 1
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dan Joo
Sent: Thursday, June 05, 2008 4:18 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] postgres connection problem
Hi all,
I have a problem connecting to postgres via the python pg module ONLY
from the cgi-scripts.
The command is:
db=pg.connect('aqdev','localhost',5432,None,None,'postgres',None)
From the commandline the connection works great, but from a cgi-script
it barfs with the following m
Thanks for the advice. I will keep playing with it. Can someone here
comment on EnterpriseDB or another company's paid support? I may
consider this to quickly improve my performance.
Scott Marlowe wrote:
Have you run analyze on the tables? bumped up default stats and re-run analyze?
Best way
is there a tentative release date (week ... month) for postgres-8.3.2 ?
Thanks!
Vlad
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Oh, another point of attack: always test your queries under just
\timing. You can wrap it up like this:
\timing
select count(*) from (subselect goes here) as sub;
I've been on a few machines where the cost of explain analyze itself
threw the timing way off.
Have you run analyze on the tables? bumped up default stats and re-run analyze?
Best way to send query plans is as attachments btw.
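Those two steps look roughly like this (a sketch; the table and column names are placeholders):

```sql
-- Refresh planner statistics for the table.
ANALYZE orders;

-- Raise the per-column statistics target (default is 10 in 8.x)
-- and re-analyze so the planner gets finer-grained histograms.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
```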
Tom Lane wrote:
> "A B" <[EMAIL PROTECTED]> writes:
> > Whenever I use copy-paste to run code in a terminal window that is
> > running psql, and the code contains a row like
> > [...]
> Either avoid copying/pasting tabs, or turn off readline
> (-n option to psql, I think, but check the manual).
Can't you just try to add the column and catch the error? If you're
in a transaction, use a user-defined function to run it and catch the
exception in pl/pgsql.
On Thu, Jun 5, 2008 at 12:15 PM, Michael P. Soulier
<[EMAIL PROTECTED]> wrote:
> I'm using some simple migration code to execute individua
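A sketch of that catch-the-exception approach (the table and column names are made up; since there is no anonymous-block syntax in this era, a throwaway function supplies the plpgsql exception handler):

```sql
-- Add column foo to table bar only if it is not already there,
-- by trapping the duplicate_column error ALTER TABLE raises.
CREATE OR REPLACE FUNCTION add_foo_column() RETURNS void AS $$
BEGIN
    ALTER TABLE bar ADD COLUMN foo integer;
EXCEPTION
    WHEN duplicate_column THEN
        RAISE NOTICE 'column foo already exists, skipping';
END;
$$ LANGUAGE plpgsql;

SELECT add_foo_column();
DROP FUNCTION add_foo_column();
```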
On Thu, Jun 5, 2008 at 5:36 PM, Tim Tassonis <[EMAIL PROTECTED]> wrote:
> Is there a way to create temporary tables in another way, so they are
> visible between sessions, or do I need to create real tables for my purpose?
> And is the perfomance penalty big for real tables, as they have been writ
On Thu, 2008-06-05 at 14:10 -0400, Francisco Reyes wrote:
> Don't see any activity in the project since 2006. Is that project dead?
>
I think greenplum would be a better place to ask, but from what I can
tell, it's dead.
Joshua D. Drake
>
Tino Wildenhain wrote:
Hi,
Tim Tassonis wrote:
Hi all
I assume this is not an uncommon problem, but so far, I haven't been
able to find a good answer to it.
I've got a table that holds log entries and fills up very fast during
the day, it gets approx. 25 million rows per day. I'm now build
I'm using some simple migration code to execute individual fragments of
SQL code based on the version of the schema. Is there a way to perform
an ALTER TABLE conditionally?
Example:
I want to add column foo to table bar, but only if column foo does not
exist already.
I'm trying to avoid suc
Don't see any activity in the project since 2006. Is that project dead?
At work I am creating a standard postgresql benchmark suite based on the
queries and operations that we commonly do.
A couple of questions:
+ Should I shutdown/restart the DB between runs?
+ How much bigger than memory should my tables be to have a good benchmark?
One issue to keep in mind is that
In response to Tim Tassonis <[EMAIL PROTECTED]>:
>
> Bill Moran wrote:
> > In response to Tim Tassonis <[EMAIL PROTECTED]>:
> >
> >>
> >> Now, with apache/php in a mpm environment, I have no guarantee that a
> >> user will get the same postgresql session for a subsequent request, thus
> >> he w
Hi,
Tim Tassonis wrote:
Hi all
I assume this is not an uncommon problem, but so far, I haven't been
able to find a good answer to it.
I've got a table that holds log entries and fills up very fast during
the day, it gets approx. 25 million rows per day. I'm now building a web
application u
In response to Tim Tassonis <[EMAIL PROTECTED]>:
> Hi all
>
> I assume this is not an uncommon problem, but so far, I haven't been
> able to find a good answer to it.
>
> I've got a table that holds log entries and fills up very fast during
> the day, it gets approx. 25 million rows per day. I
Hi all
I assume this is not an uncommon problem, but so far, I haven't been
able to find a good answer to it.
I've got a table that holds log entries and fills up very fast during
the day, it gets approx. 25 million rows per day. I'm now building a web
application using apache/mod_php where
On Thu, 05 Jun 2008 15:28:55 +0200
Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Bjørn T Johansen wrote:
> > On Thu, 05 Jun 2008 11:06:36 +0100
> > Raymond O'Donnell <[EMAIL PROTECTED]> wrote:
> >
> >> On 05/06/2008 10:52, Bjørn T Johansen wrote:
> >>> If I already have a running database
Hi all - I am wondering if I can get a consensus on what to do about
this minor issue. I have looked through the archives and can't find a
definitive answer.
So I have a new 8.1 install on Linux (have not yet been able to
upgrade to 8.3). The documentation says that autovacuum is enabled by
defau
On Thu, 5 Jun 2008, "James B. Byrne" <[EMAIL PROTECTED]> writes:
> The link http://openssi.org redirects to
> http://openssi.org/cgi-bin/view?page=openssi.html and the most recent
> (pre-)release is discussed here:
> http://sourceforge.net/forum/forum.php?forum_id=768341
Hrm... It didn't 3-4 days
In-Reply-To: <[EMAIL PROTECTED]>
On Thu, 05 Jun 2008 09:03:14 +0300, Volkan YAZICI <[EMAIL PROTECTED]> wrote:
> BTW, can you comment on the activity of the OpenSSI project? A project
> with a dead main page (see http://openssi.org) doesn't smell good to
> me. Is there any live support in the
Hi,
Bjørn T Johansen wrote:
On Thu, 05 Jun 2008 11:06:36 +0100
Raymond O'Donnell <[EMAIL PROTECTED]> wrote:
On 05/06/2008 10:52, Bjørn T Johansen wrote:
If I already have a running database, how can I compare the tables in
the database with the sql script to discover the differences?
You can
Apart from concurrency issues, it is possible that you
have sequence generation problems. Depending on how
you inserted the original rows into the 'purchases' table, it is possible
that the nextval number has not kept up and is lagging behind.
You need to ensure that 'purchases_purchase_id_se
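Re-syncing the sequence could look like this (the sequence and column names are cut off above, so the ones below are guesses for illustration; adjust to the real names):

```sql
-- Bump the sequence to the table's current maximum key so that
-- nextval() resumes above the existing rows.
SELECT setval('purchases_purchase_id_seq',
              (SELECT max(purchase_id) FROM purchases));
```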
On Thu, Jun 5, 2008 at 2:15 AM, Craig Ringer
<[EMAIL PROTECTED]> wrote:
> sudo apt-get build-dep postgresql
Thanks, this works perfectly now!
--
Regards,
Richard Broersma Jr.
Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug
On Thu, 05 Jun 2008 11:06:36 +0100
Raymond O'Donnell <[EMAIL PROTECTED]> wrote:
> On 05/06/2008 10:52, Bjørn T Johansen wrote:
> > If I already have a running database, how can I compare the tables in
> > the database with the sql script to discover the differences?
>
> You can use pg_dump with t
On 05/06/2008 10:52, Bjørn T Johansen wrote:
If I already have a running database, how can I compare the tables in
the database with the sql script to discover the differences?
You can use pg_dump with the -s option to dump the schema of the
database, and run it through the diff tool of your choice.
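Concretely (the database name and script path are placeholders):

```shell
# Dump just the schema of the live database...
pg_dump -s mydb > live_schema.sql

# ...and compare it with the SQL script.
diff -u live_schema.sql create_tables.sql
```

diff's output then shows exactly which objects differ between the running database and the script.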
If I already have a running database, how can I compare the tables in the
database with the sql script to discover the differences?
Regards,
BTJ
--
---
Bjørn T Johansen
[EMAIL PROTECTED]
-
Tom Lane wrote:
"Richard Broersma" <[EMAIL PROTECTED]> writes:
Would anyone be able to give any direction on what I need to do to get
past this error?
/usr/bin/ld: crt1.o: No such file: No such file or directory
Seems you've got an incomplete installation. On my Fedora machine,
crt1.o is
On Thursday, June 5, 2008, Joshua D. Drake wrote:
> You don't have any build tools installed. Try:
> apt-get install binutils gcc autoconf flex
Or even better:
apt-get build-dep postgresql-8.3
--
dim
I have a query that takes 2 sec if I run it from a freshly restored
dump. If I run a full vacuum on the database it then takes 30 seconds.
Would someone please comment as to why I would see a 15x slow down by
only vacuuming the DB?
I am using 8.3.1
I have a query that takes 2.5 sec if I run it from a freshly restored
dump. If I run a full vacuum on the database it then takes 30 seconds.
Would someone please comment as to why I would see over a 10x slow down
by only vacuuming the DB?
I am using 8.3.1
Hi,
I tried to use pg_dump to restore (sync) a database, but I noticed that
the system table pg_namespace was not synced.
I tried the following pg_dump command to just restore that table without
success either.
Does pg_dump support the system tables, or is there something I missed?
Is there anothe
On 3 Jun 2008, at 16:06, Scott Marlowe wrote:
On Tue, Jun 3, 2008 at 7:41 AM, Henrik <[EMAIL PROTECTED]> wrote:
To be able to handle versions we always insert new folders even though
nothing has changed, but it seemed like the best way to do it.
E.g
First run:
tbl_file 500k new files.
On 3 Jun 2008, at 23:31, Joris Dobbelsteen wrote:
Henrik wrote:
Hi list,
I'm having a table with a lot of file names in it (approx 3
million) in an 8.3.1 db.
Doing this simple query shows that the statistics are way off, but I
can't get them right even when I raise the statistics target to 1000.
db=# alt