> The advice I've received upon asking this question in the past is
> clients first, then server. We are currently running many 9.1 clients
> against some servers awaiting upgrade - the most ancient of which is
> 7.4. The only problem (annoyance, really) is that in interactive
> psql sessions
Hi PostgreSQL users,
I'm having difficulty migrating a postgres 8.4.11 database to postgres
9.1.2: neither of the included pg_dumpall tools appears to honour the -o or
--oids options, and both fail to dump the table OIDs from the old database as we
require.
I've tried various combinations and orders of
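For reference, a minimal sketch of the invocations under discussion ("mydatabase" is a placeholder). Note that -o / --oids only adds the optional per-row OID column to the dumped data; it does not preserve the pg_class OIDs that identify the tables themselves, which is where this thread eventually ends up:

# dump the whole cluster, including per-row OIDs where tables have them
pg_dumpall --oids > full_cluster.sql

# dump a single database the same way
pg_dump --oids mydatabase > mydatabase.sql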
On 1 May 2012 11:12, Matthew Churcher wrote:
> Hi PostgreSQL users,
>
> I'm having difficulty migrating a postgres 8.4.11 database to postgres
> 9.1.2, neither of the included pg_dumpall tools appear to honour the -o or
> --oids options and fail to dump the table oids from the old database as we
>
Great stuff!
Was that already in it?
I'd plead for adding recognition of gzipped data too..
cheers,
WBL
On Wed, Apr 25, 2012 at 11:05 PM, Guillaume Lelarge wrote:
> On Wed, 2012-04-25 at 10:40 +0200, Willy-Bas Loos wrote:
> > On Wed, Apr 25, 2012 at 9:51 AM, Magnus Hagander wrote:
> >
> > > We
Thanks Thom, that's really useful to know; however, I've been unable to get
it working with pg_dump either. Are you able to offer any insight there?
What command line options are you using?
I get the same result with:
pg_dump -o mydatabase
pg_dump mydatabase
Thanks again, Matt
On 1 May 2012 11:55, Matthew Churcher wrote:
> Thanks Thom, that's really useful to know however I've been unable to get
> it working with pg_dump either. Are you able to offer any insight there?
> What command line options are you using?
>
> I get the same result with:
> pg_dump -o mydatabase
>
OK, I think I've worked out what's going on. I've got my wires crossed
between table column OIDs (deprecated) and the OID which uniquely identifies
each table (always enabled?).
We're not using an OID for each column, only to reference the tables themselves,
as that's how the triggers are referring to them.
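A minimal sketch of looking those table OIDs up, in case it helps a later reader ("mytable" and the numeric OID are placeholders). The OID comes from pg_class and is assigned at creation time, so it is not preserved across a dump/restore:

-- name -> OID of the table
SELECT 'mytable'::regclass::oid;

-- OID -> table name (the number here is only an example)
SELECT 16384::regclass;

-- the underlying catalog row
SELECT oid, relname FROM pg_class WHERE relname = 'mytable';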
On 1 May 2012 11:22, Thom Brown wrote:
> On 1 May 2012 11:12, Matthew Churcher wrote:
>> Hi PostgreSQL users,
>>
>> I'm having difficulty migrating a postgres 8.4.11 database to postgres
>> 9.1.2, neither of the included pg_dumpall tools appear to honour the -o or
>> --oids options and fail to du
On 1 May 2012 12:37, Matthew Churcher wrote:
> OK, I think I've worked out what's going on. I've got my wires crossed
> between table column OIDS (deprecated) and the OID which uniquely identifies
> each table (?always enabled?).
>
> We're not using OID for each column, only to reference the table
Thanks Joe and John,
You are right. All I needed was that dev package. On Ubuntu 4.4.3 I
ran: sudo apt-get install postgresql-server-dev-9.1
and then ran through the PLR install with no problem.
Thanks so much for the help.
Daniel
---
The triggers are being used to track changes to the tables. The developers
are concerned that using string references for the table names in this case
would create too much overhead as this is a frequent operation and is
performance critical.
Sounds like we have the choice of using string names or
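A rough sketch (not the poster's actual trigger) of how an audit trigger can record the firing table both ways without any extra lookup at run time, since PL/pgSQL exposes the table's OID and name as TG_RELID and TG_TABLE_NAME. "mytable" and the audit_log layout are made up for illustration:

CREATE TABLE audit_log (
    table_oid  oid,
    table_name text,
    operation  text,
    changed_at timestamptz DEFAULT now()
);

CREATE OR REPLACE FUNCTION audit_change() RETURNS trigger AS $$
BEGIN
    -- TG_RELID is the OID of the table that fired the trigger,
    -- TG_TABLE_NAME its name; both are supplied automatically.
    INSERT INTO audit_log (table_oid, table_name, operation)
    VALUES (TG_RELID, TG_TABLE_NAME, TG_OP);
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_audit
AFTER INSERT OR UPDATE OR DELETE ON mytable
FOR EACH ROW EXECUTE PROCEDURE audit_change();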
Hi there,
I have mission-critical data running on PostgreSQL 8.3. My database got
corrupted a few days ago as I ran out of disk space.
I had to run pg_resetxlog to get the database started again. I am now
experiencing the following errors:
1. ERROR: t_xmin is uncommitted in tuple to be updated
We recently installed the GreenPlum massively Parallel Appliance which
is using Postgres version 8.2.15 and I have been asked to provide a
security report showing all user and group roles and the access they
have to the user data and views. In the Postgres catalog tables I have
located the tables w
"Matthew Churcher" writes:
> The triggers are being used to track changes to the tables. The developers
> are concerned that using string references for the table names in this case
> would create too much overhead as this is a frequent operation and is
> performance critical.
"Premature optimiza
On 05/01/2012 05:06 AM, Matthew Churcher wrote:
The triggers are being used to track changes to the tables. The developers
are concerned that using string references for the table names in this case
would create too much overhead as this is a frequent operation and is
performance critical.
Sound
Merlin Moncure wrote:
On Mon, Apr 30, 2012 at 1:47 AM, Alexander Reichstadt wrote:
Hi,
From the documentation I was able to build a trigger firing upon deletion of a
record, calling a function that delivers tablename_operation as a notification one
needs to subscribe to. So in terminal I can say LISTEN
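A minimal sketch of that kind of setup, assuming 9.0+ for pg_notify and an "id" column on the table; all names here are placeholders, not the original poster's code:

CREATE OR REPLACE FUNCTION notify_change() RETURNS trigger AS $$
BEGIN
    -- channel name becomes e.g. "mytable_delete"; lower() keeps it matching
    -- an unquoted LISTEN identifier
    PERFORM pg_notify(lower(TG_TABLE_NAME || '_' || TG_OP), OLD.id::text);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_notify_delete
AFTER DELETE ON mytable
FOR EACH ROW EXECUTE PROCEDURE notify_change();

-- then, in a psql session (or via PQnotifies in libpq):
LISTEN mytable_delete;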
Hi, take a look at the pg_class table, column relacl
http://www.postgresql.org/docs/8.2/static/catalog-pg-class.html
The opposite way (does a user have privileges to...) is a set of built-in
functions
http://www.postgresql.org/docs/8.2/static/functions-info.html#FUNCTIONS-INFO-ACCESS-TABLE
hope this helps
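A small sketch of both directions that works on 8.2 (the schema filter, role and table names are examples only):

-- raw ACLs per table/view, straight from the catalog
SELECT n.nspname AS schema, c.relname, c.relkind, c.relacl
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'v')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');

-- the opposite question, via the access-privilege functions
SELECT has_table_privilege('some_role', 'public.some_table', 'SELECT');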
2012/4/26 Tom Lane
>
> I've applied a patch for this. Thanks for the report!
>
>regards, tom lane
>
Thanks for Your time :)
Regards,
Bartek
I would like to prevent overlapping date ranges for col1 + col2 from
being inserted into my test table.
Existing Data:
1, FOO, 2012-04-04, 2012-04-06
Insert Attempts:
1, FOO, 2012-04-05, 2012-04-08 <-- BAD, overlaps w/ above!
1, BAR, 2012-04-04, 2012-04-06 <-- OK, no conflict!
2, FOO, 2012-04-04
On Tue, May 1, 2012 at 10:15 AM, bradford wrote:
> I'm trying to use what I learned in
> http://www.depesz.com/2010/01/03/waiting-for-8-5-exclusion-constraints/,
> but I cannot figure out how to apply this exclusion constraint to col1
> (integer) + col2 (varchar).
Take a look at a btree_gist index
Thanks, Richard, but mostly through just guessing. I need to research
what GIST is and how the addition of col1 and col2 to that is making
this work.
With
psql -d mytest -c "CREATE EXTENSION btree_gist;"
This seems to work now:
CREATE TABLE test (
id INTEGER NOT NULL DEFAULT nextval('test_id
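For later readers, a complete sketch of roughly what that definition ends up looking like, written with the 9.2+ range types mentioned further down this thread (column names follow the sample data and are otherwise made up; on 9.1 the period/temporal extension plays the same role):

CREATE EXTENSION btree_gist;  -- provides = on plain columns inside a GiST index

CREATE TABLE test (
    id         serial PRIMARY KEY,
    col1       integer NOT NULL,
    col2       varchar NOT NULL,
    start_date date    NOT NULL,
    end_date   date    NOT NULL,
    EXCLUDE USING gist (
        col1 WITH =,
        col2 WITH =,
        daterange(start_date, end_date, '[]') WITH &&
    )
);

INSERT INTO test (col1, col2, start_date, end_date)
VALUES (1, 'FOO', '2012-04-04', '2012-04-06');
-- (1, 'FOO', 2012-04-05 .. 2012-04-08) is now rejected with a conflict,
-- while (1, 'BAR', 2012-04-04 .. 2012-04-06) is still accepted.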
Hi,
I played with this problem a few months ago and found out that a
multidimensional cube could be a solution (
http://www.postgresql.org/docs/9.1/static/cube.html).
If you have col1 and date1, date2 then your cube is a simple line in 2-
dimensional space - axes: col1, date (a line between points X, Y1
Hi
I think for an overlapping exclusion constraint you need the period extension
or the range datatypes in 9.2
Kind Regards,
Misa
Sent from my Windows Phone
From: bradford
Sent: 01/05/2012 19:16
To: pgsql-general@postgresql.org
Subject: [GENERAL] How do I setup this Exclusion Constraint?
I would like to pre
Hi,
I want to install madlib into a PostgreSQL 9.1.3 installation.
I am trying to:
a) make python2.7
b) configure postgresql to point at the python2.7 working directory, and
c) install postgresql9.1.3 (using python2.7)
no success to date.
to date my steps have been:
1. configure/make python2.7 (
The framework I am using is PGSQLKit for Mac OS X, for which the source was
available and which I downloaded and have altered since. It actually gives you
access down to the libpq calls.
The connection being closed was one of the issues. I had added convenience
classes with class methods to the frame
Some of my functions are running much slower than doing the same query
"inline" and I'd like to know if there's a way to fix that.
I have a number of tables that store data valid at different times. For
each logical entity there may be multiple rows, valid at different times
(sometimes overlap
On Wed, May 2, 2012 at 12:43 PM, Evan Martin wrote:
> Some of my functions are running much slower than doing the same query
> "inline" and I'd like to know if there's a way to fix that.
> [chomp analysis and examples]
Is there any possibility that you could recode your functions as
views? The op
Evan Martin writes:
> Some of my functions are running much slower than doing the same query
> "inline" and I'd like to know if there's a way to fix that. ...
> This is quite slow, especially when I have a WHERE clause that narrows
> down the set of rows from 100,000 to 10 or so. I suspect it's
On tis, 2012-05-01 at 12:56 -0700, Mark Rostron wrote:
> hi
> I want to install madlib into a postgresql9.1.3 installation.
>
>
> i am trying to:
> a) make python2.7
> b) configure postgresql to point at the python2.7 working directory,
> and
> c) install postgresql9.1.3 (using python2.7)
cd Pyt
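The rest of that reply is cut off in the archive; the usual recipe is roughly the following (a sketch only - the prefixes and version numbers are examples). PostgreSQL's configure picks up the interpreter from the PYTHON environment variable:

# build Python 2.7 into its own prefix
cd Python-2.7.3
./configure --prefix=/opt/python2.7
make && make install

# build PostgreSQL with PL/Python against that interpreter
cd ../postgresql-9.1.3
PYTHON=/opt/python2.7/bin/python2.7 ./configure --with-python --prefix=/opt/pgsql-9.1.3
make && make install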
Hi,
I got a very inefficient plan for a simple query.
PostgreSQL 9.0.7 on FreeBSD, default_statistics_target=1000
Table:
Game2=# \d sb_messages
Table "public.sb_messages"
 Column | Type | Modifiers
--------+------+-----------
Maxim Boguk writes:
> I got very inefficient plan for a simple query.
It looks like the problem is with the estimate of the antijoin size:
> -> Nested Loop Anti Join (cost=0.00..24576.82 rows=1 width=206)
> (actual time=0.043..436.386 rows=20761 loops=1)
that is, only about 20% of th
On Wed, May 2, 2012 at 2:50 PM, Tom Lane wrote:
> Maxim Boguk writes:
> > I got very inefficient plan for a simple query.
>
> It looks like the problem is with the estimate of the antijoin size:
>
> > -> Nested Loop Anti Join (cost=0.00..24576.82 rows=1
> width=206)
> > (actual time=0
Thanks, Tom (and Chris). Yes, the EXPLAIN output showed a function scan:
SELECT * FROM thing_asof('2012-04-01') WHERE timeslice_id = 1234
Function Scan on thing_asof (cost=0.25..12.75 rows=5 width=353)
Filter: ((timeslice_id)::integer = 12345)
I replaced the OVERLAPS with < and <= comparison
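One related note for the archives: if the function body can be reduced to a single SELECT in LANGUAGE sql (and marked STABLE rather than VOLATILE), the planner can inline it into the calling query, so a filter like timeslice_id = 1234 is pushed into the underlying scans instead of being applied after a Function Scan. A rough sketch with invented table and column names:

-- hypothetical layout; the point is the single-statement SQL body
CREATE OR REPLACE FUNCTION thing_asof(p_asof date)
RETURNS SETOF thing
LANGUAGE sql STABLE AS $$
    SELECT *
    FROM thing
    WHERE valid_from <= $1   -- $1 is p_asof (named references need 9.2+)
      AND $1 < valid_to;
$$;

-- SELECT * FROM thing_asof('2012-04-01') WHERE timeslice_id = 1234;
-- should now show the filter inside the plan rather than after a Function Scan.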
Hello all,
Any thoughts on this one?
Cheers.
On Tue, May 1, 2012 at 9:51 PM, Chitra Creta wrote:
> Hi there,
>
> I have mission-critical data running on PostgreSQL 8.3. My database got
> corrupted a few days ago as I ran out of disk space.
>
> I had to run pg_resetxlog to get the database sta
Hello
2012/5/2 Chitra Creta :
> Hello all,
>
> Any thoughts on this one?
>
If you can, use a backup. If you can't, then you have to delete the broken
rows or broken pages and later rebuild all indexes;
see http://blog.endpoint.com/2010/06/tracking-down-database-corruption-with.html
Regards
Pavel Stehule
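For completeness, the kind of cleanup being described usually looks something like the sketch below ("mytable" and the ctid value are placeholders). Take a filesystem-level copy of the data directory first; every step here discards data:

-- let reads skip unreadable pages so a dump can complete
-- (superuser only; whatever was on those pages is lost)
SET zero_damaged_pages = on;

-- remove an individual broken row by its physical address, then rebuild indexes
DELETE FROM mytable WHERE ctid = '(4710,23)';
REINDEX TABLE mytable;

-- finally dump, restore into a fresh cluster, and validate the result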