When trying to move a database from 8.3 to 9.2.5, the plain text pg_dump
did not restore without errors. (I used pg_dump from 9.2.5) The problem
seems to relate to references to other schemas and the schema search paths.
First, here's the error message:
psql:/tmp/spc_test_1005_dump_with_pg_9_2_
(2013/10/02 18:57), Michael Paquier wrote:
wrote:
Who is the pgFoundry administrator or a board member now? I would like to send
them e-mail. At least, there is no information or support page on the pgFoundry
homepage.
Why don't you consider github as a potential solution?
It is because github
On Sat, 2013-10-19 at 02:17 +0900, Amit Langote wrote:
> Hi,
>
> In 9.2 docs, the first link (i18ngurus) in the further reading section here:
>
> http://www.postgresql.org/docs/9.2/static/multibyte.html
>
> seems to be broken. Should it be updated/removed?
> (I see it's removed in 9.3 docs)
I h
Anson Abraham wrote:
> I'm on Debian Wheezy running postgres 9.3
>
> both boxes are identical.
>
> I see in log file on slaves:
> LOG: could not receive data from client: Connection reset by peer
That looks to me like a client that is connected to the slave
is dying.
Do you have hot standby t
On 10/21/2013 12:50 AM, Ken Tanzer wrote:
When trying to move a database from 8.3 to 9.2.5, the plain text pg_dump
did not restore without errors. (I used pg_dump from 9.2.5) The
problem seems to relate to references to other schemas and the schema
search paths.
First, here's the error message
Ken Tanzer writes:
> When trying to move a database from 8.3 to 9.2.5, the plain text pg_dump
> did not restore without errors. (I used pg_dump from 9.2.5) The problem
> seems to relate to references to other schemas and the schema search paths.
What this looks like to me is that you have a SQL
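A sketch of the usual failure mode (schema and object names here are hypothetical, not taken from the thread): pg_dump restores each object with search_path set to that object's own schema, so a function default or constraint that references an object in another schema without qualifying it fails at restore time even though it worked in the original database:

```sql
-- Suppose the dump contains a function in schema util and a table in
-- schema app whose default calls it without qualification:
CREATE SCHEMA util;
CREATE FUNCTION util.next_code() RETURNS integer
    LANGUAGE sql AS $$ SELECT 42 $$;

CREATE SCHEMA app;
-- Fails at restore time if search_path does not include util:
--   CREATE TABLE app.t (code integer DEFAULT next_code());
-- Schema-qualifying the reference lets the dump restore cleanly:
CREATE TABLE app.t (code integer DEFAULT util.next_code());
```

Hunting down unqualified cross-schema references before dumping, or editing the plain-text dump, are the usual workarounds.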
We've been using pg_dump and pg_restore for many years now and they have always
worked well for us. However, we are currently undertaking a major db
architecture change to partition our tenant data into separate Postgres schemas
instead of storing all data in the public schema. When attempting to perform
a p
Brian,
* Brian Crowell (br...@fluggo.com) wrote:
> On Sun, Oct 20, 2013 at 4:24 PM, Tom Lane wrote:
> > No, and it's very unlikely that there ever will be, because it's
> > completely against the system structure at a number of levels. However,
> > there's more than one way to skin this cat. Ma
* Ben (ben.fy...@champsoftware.com) wrote:
> is killed off (6GB+ used by a single postmaster process). Here are the
[...]
> Total number of relations across all schemas: 53,154
[...]
> I should also mention that when performing these dumps there is absolutely
> no other DB activity occurring. Do y
2013/10/22 Stephen Frost :
>> You also mentioned an external CMS. Any suggestions?
>
> I'm a big fan of git, but if you really want to keep things in-database
> and track dependencies, etc, it occurs to me that you might be able to
> use an actual table in the database to store the raw form of you
When you say self-contained test case, what is it exactly that you're
looking for? A script that builds out a DB with hundreds of
schemas/relations, a pg_basebackup or something else?
Thanks!
Ben
Ben,
* Ben (ben.fy...@champsoftware.com) wrote:
> When you say self-contained test case, what is it exactly that you're
> looking for? A script that builds out a DB with hundreds of
> schemas/relations, a pg_basebackup or something else?
Ideally, an SQL script that builds the DB and then a pg_dum
Hi,
I noticed that with synchronous replication I often see locks like this:
-[ RECORD 10 ]-+-------
locktype      | object
database      | 0
relation      |
page          |
tuple         |
virtualxid    |
transactionid |
classid       | 12
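Records like the one above come from the pg_locks system view; a sketch of how to pull out the object-type locks in question (the filter is illustrative, the column names are standard pg_locks, and `\x` in psql gives the RECORD-style display shown above):

```sql
-- Show object-type locks, which is the locktype in the record above.
SELECT locktype, database, classid, objid, mode, granted, pid
FROM pg_locks
WHERE locktype = 'object'
ORDER BY pid;
```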
On Fri, Oct 18, 2013 at 5:08 PM, David Johnston wrote:
> this thread hasn't really provided a compelling use-case for making a
> change: the example
> provided is too contrived.
>
It seems contrived because I distilled it down from what it originally was.
There is an actual use case in which thi
On Mon, Oct 21, 2013 at 5:27 AM, KONDO Mitsumasa
wrote:
> (2013/10/02 18:57), Michael Paquier wrote:
>>
>> wrote:
>>>
>>> Who is the pgFoundry administrator or a board member now? I would like to
>>> send them
>>> e-mail. At least, there is no information or support page on the
>>> pgFoundry homepa
On Fri, Oct 18, 2013 at 8:37 PM, David Johnston wrote:
> Merlin Moncure-2 wrote
>> Regardless, the point at hand is whether specific plan semantics down
>> the chain can control whether or not volatile expressions should run.
>> Clearly, at least to me, they should not.
>
> Put differently ideally
Hi all, I am relatively new to Postgres and need to upgrade PostgreSQL 9.1.1 to
9.1.10. As I understand from the documentation, "Upgrading to a minor
release does not require a dump and restore; merely stop the database
server, install the updated binaries, and restart the server." Is there any
st
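For a minor release the sequence really is just stop, replace binaries, start. On a Red Hat style system with the PGDG packages it looks roughly like the following (the service and package names are assumptions that depend on your distro and packaging, so check what is actually installed first):

```shell
# Stop the server, update the binaries, start again.
# Service/package names vary by packaging; these are illustrative.
service postgresql-9.1 stop
yum update postgresql91 postgresql91-server postgresql91-libs
service postgresql-9.1 start
```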
I started getting this error after upgrading from 9.2.4 to 9.3.1:
ERROR: tuple to be updated was already modified by an operation triggered
by the current command
HINT: Consider using an AFTER trigger instead of a BEFORE trigger to
propagate changes to other rows.
STATEMENT: DELETE FROM "channe
Merlin Moncure-2 wrote
> Any reliance on that behavior would be wrong because it's expressly
> contraindicated by the documentation.
That makes no practical difference, since the decision to make the function
volatile is not a conscious one (volatile is the default); and the current
behavior hides the
On 10/21/2013 10:57 AM, ramistuni wrote:
Hi all, I am relatively new to Postgres and need to upgrade PostgreSQL
9.1.1 to 9.1.10. As I understand from the documentation,
"Upgrading to a minor release does not require a dump and restore;
merely stop the database server, install the updated bina
ramistuni wrote
> We are currently running "PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu,
> compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-51), 64-bit"
This is a packaging (thus Linux distro) concern and not that of PostgreSQL
proper. As I use Ubuntu and you appear to be using a Red Hat varia
Moshe Jacobson writes:
> I am of the belief that if the function in a CTE is volatile, that it
> should be executed unconditionally.
[ shrug... ] Consider
SELECT volatile_function(i) FROM generate_series(1, 10) i LIMIT 1;
How many times should the volatile function get executed? If yo
Torsten Förtsch writes:
> I noticed that with synchronous replication I often see locks like this:
> [ AccessExclusiveLock on "database 0" ]
You did not say what PG version you're talking about, but if it's recent
then this must be coming from PreCommit_Notify, which takes such
Thanks so much David and John for your quick responses.
These databases were recently transitioned to me from a team that is not
part of our organization any more.
Please bear with my limited knowledge, but is there any way to find out whether
the past installations were done using YUM?
We have been given VMs
On 10/21/2013 12:00 PM, ramistuni wrote:
Please bear with my limited knowledge, but is there any way to find out whether
the past installations were done using YUM?
This is all standard redhat installation/administration stuff...
# rpm -qf /usr/pgsql-9.2/bin/postgres
postgresql92-server-9.2.5-1PGDG.rhel6
On 10/21/2013 12:15 PM, John R Pierce wrote:
# rpm -qf /etc/yum.repos.d/pgdg-9*
pgdg-centos90-9.0-5.noarch
pgdg-centos92-9.2-6.noarch
you should see the 91 version.
oh, to clarify something. The version of that pgdg-.noarch
repository file is NOT directly related to the s
On Mon, Oct 21, 2013 at 2:53 PM, Tom Lane wrote:
> Consider
>
> SELECT volatile_function(i) FROM generate_series(1, 10) i LIMIT 1;
>
> How many times should the volatile function get executed? If your answer
> is not "10", how is this different from the CTE case? This LIMIT clause
> is
Moshe Jacobson writes:
> On Mon, Oct 21, 2013 at 2:53 PM, Tom Lane wrote:
>> Consider
>>
>> SELECT volatile_function(i) FROM generate_series(1, 10) i LIMIT 1;
>>
>> How many times should the volatile function get executed? If your answer
>> is not "10", how is this different from the CTE case?
Tom Lane-2 wrote
> Moshe Jacobson <
> moshe@
> > writes:
>> I am of the belief that if the function in a CTE is volatile, that it
>> should be executed unconditionally.
>
> [ shrug... ] Consider
>
> SELECT volatile_function(i) FROM generate_series(1, 10) i LIMIT 1;
>
> How many times sh
David Johnston writes:
> The two comparable queries are:
> A) WITH vf AS ( SELECT volatile_function(x) FROM generate_series(1,10) gs (x) )
> SELECT * FROM vf LIMIT 1
> B) SELECT volatile_function(x) FROM generate_series(1,10) gs (x) LIMIT 1
> In (A) the relation "vf" - which is a 10-row table with the re
On Mon, Oct 21, 2013 at 2:51 PM, Tom Lane wrote:
> Moshe Jacobson writes:
>> On Mon, Oct 21, 2013 at 2:53 PM, Tom Lane wrote:
>>> How many times should the volatile function get executed? If your answer
>>> is not "10", how is this different from the CTE case? This LIMIT clause
>>> is restrict
Tom Lane-2 wrote
> try a set-returning function in the
> select list to see that this is true.
Random thoughts...
Noted - though there appear to be various optimizations at play here...
[somewhat dated 9.0.X version]
SELECT x, generate_series(x, 5) AS y FROM generate_series(1,3) gs (
On 21/10/2013 20:40, Moshe Jacobson wrote:
I don't think your example above is analogous, because in your
example, you are asking /how many times/ to execute the function,
whereas in my example, the question is /whether/ to execute the query
at all.
If the outer statement of the CTE doesn't nee
I've only skimmed this thread, but clearly, this is why using functions
with side effects in the middle of complex queries is a bad idea. =)
Something like SELECT func_with_side_effect(1); is probably fine, but
beyond that, put the function in the middle of a DO block or something and
actually code
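The advice above can be sketched like this (the function name is hypothetical): instead of burying a side-effecting call inside a larger query and leaving execution up to the planner, drive it explicitly from a DO block so it runs exactly once per intended row:

```sql
-- Each call is made explicitly by the loop, so the planner cannot
-- optimize any of them away.
DO $$
DECLARE
    i integer;
BEGIN
    FOR i IN 1..10 LOOP
        PERFORM func_with_side_effect(i);
    END LOOP;
END;
$$;
```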
On 10/21/2013 3:52 PM, BladeOfLight16 wrote:
I've only skimmed this thread, but clearly, this is why using
functions with side effects in the middle of complex queries is a bad
idea. =) Something like SELECT func_with_side_effect(1); is probably
fine, but beyond that, put the function in the mi
Hi,
Is there a way, in a PostgreSQL C function, to get the connected session
pointer (Archive *AH)
and use it for further execution?
Thanks.
Thanks.
On Mon, Oct 21, 2013 at 6:52 PM, BladeOfLight16 wrote:
> In my opinion, the simplest and most correct way to handle this is to
> document that there are no guarantees about what will happen with volatile
> functions in these strange cases. PostgreSQL shouldn't have to make
> guarantees about wheth
BladeOfLight16 wrote
> Regarding UPSERT in particular, are you working with a single row or a set
> of rows? If a single row, is there a reason you can't perform a SELECT
> before hand to see if the PK is already there and then INSERT or UPDATE
> accordingly? If multiple rows, is there a reason you
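The single-row pattern suggested above, as a sketch (table and column names are hypothetical; note this is racy under concurrency without a lock or retry loop, and 9.3 has no native upsert statement):

```sql
-- Naive single-row upsert: check for the PK, then update or insert.
-- Subject to a race between concurrent sessions unless serialized
-- with a lock or wrapped in a retry loop.
DO $$
BEGIN
    IF EXISTS (SELECT 1 FROM widgets WHERE id = 1) THEN
        UPDATE widgets SET name = 'foo' WHERE id = 1;
    ELSE
        INSERT INTO widgets (id, name) VALUES (1, 'foo');
    END IF;
END;
$$;
```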
I have a table of event_id, event_time. Many times, several events
happen in a row. I'd like a query which replaces all of those events
with a single record, showing the count.
Eg: Take A,A,A,B,C,A,D,A,A,D,D,B,C,C and return: A,3; B,1; C,1; A,1;
D,1; A,2; D,2; B,1; C,2
How can I do that?
--
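A common answer to the question above (a sketch, assuming a table events with columns event_id and event_time) is the "gaps and islands" trick: the difference of two row_number() sequences is constant within each run of identical consecutive events, so grouping on it collapses each run into one row with a count:

```sql
SELECT event_id, count(*) AS run_length
FROM (
    SELECT event_id, event_time,
           row_number() OVER (ORDER BY event_time)
         - row_number() OVER (PARTITION BY event_id
                              ORDER BY event_time) AS grp
    FROM events
) s
GROUP BY event_id, grp
ORDER BY min(event_time);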
Robert James wrote
> I have a table of event_id, event_time. Many times, several events
> happen in a row. I'd like a query which replaces all of those events
> with a single record, showing the count.
>
> Eg: Take A,A,A,B,C,A,D,A,A,D,D,B,C,C and return: A,3; B,1; C,1; A,1;
> D,1; A,2; D,2; B,1;
On Mon, Oct 21, 2013 at 6:55 AM, Adrian Klaver wrote:
> What was the pg_dump command you used to dump the database?
>
pg_dump -p -h localhost -F p -U spc_ag spc_test_1005 >
~/spc_test_1005_dump_with_pg_9_2_5.sql
> So to be clear, housing_project_from_unit was not restored at all unless
>
On Mon, Oct 21, 2013 at 7:47 AM, Tom Lane wrote:
> Ken Tanzer writes:
> > When trying to move a database from 8.3 to 9.2.5, the plain text pg_dump
> > did not restore without errors. (I used pg_dump from 9.2.5) The problem
> > seems to relate to references to other schemas and the schema searc
Hello,
I need to trigger a database dump from a SQL script (or function, but I
think that is even less likely).
I know I can do:
\! pg_dumpall > /mydir/myfile
Which is fine, but I need to use a variable to set the name.
\set myfile 'filename'
\! pg_dumpall > /mydir/:myfile
Doesn't seem to wor
On Tue, Oct 22, 2013 at 8:07 AM, James Sewell wrote:
> Hello,
>
> I need to trigger a database dump from a SQL script (or function, but I
> think that is even less likely).
>
> I know I can do:
>
> \! pg_dumpall > /mydir/myfile
>
> Which is fine, but I need to use a variable to set the name.
>
> \
That looks great, but it doesn't really help with my problem unless I'm
missing something (very possible!)
I need a way to back up either from SQL in psql (possibly \!) or from a
PL/pgSQL function to a file with a name set from a :variable.
This would be triggered by a certain action in the databa
On Tue, Oct 22, 2013 at 1:20 AM, James Sewell wrote:
> That looks great, but it doesn't really help with my problem unless I'm
> missing something (very possible!)
>
> I need a way to back up either from SQL in psql (possibly \!) or from a
> PL/pgSQL function to a file with a name set from a :varia
On Tue, Oct 22, 2013 at 7:20 AM, James Sewell wrote:
> I need a way to back up either from SQL in psql (possibly \!) or from a
> PL/pgSQL function to a file with a name set from a :variable.
Could it be something like this:
# \setenv myFile 'filename'
# \! pg_dump > $myFile
Unfortunately there
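One way to bridge a psql :variable into the \! shell (a sketch; it assumes psql 9.2+'s \setenv and psql's variable interpolation in meta-command arguments, and the shell quoting is illustrative):

```psql
-- Set a psql variable, export it into the environment that \! commands
-- inherit, then let the shell expand it:
\set myfile /tmp/backup_20131022.sql
\setenv MYFILE :myfile
\! pg_dumpall > "$MYFILE"
```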