thank you.
It listens to the local network, which is just my two other MacBooks. I reported
the bug to DaVinci. Hopefully they have a fix; then I can put it back on md5.
thank you for your script.
will try it.
best
Robert
> On 13 Nov 2016, at 23:27, John R Pierce wrote:
>
> On 11/13/2016 1:01 PM,
Thank you, I understand. Nobody else can access the database.
Once the database backup button in DaVinci started to work, I could see that
DaVinci is only making a pg_dump of the database. The restore works fine
from any DaVinci installation to another one; I tested this. I still can make a
au
On 11/13/2016 1:01 PM, aws backup wrote:
thank you so much.
With your help I could solve all my problems.
DaVinci has a bug somewhere with the database configuration.
I installed everything new and set all auth methods to trust instead of md5,
which was the default from DaVinci.
trust is not very se
On 11/13/2016 01:01 PM, aws backup wrote:
Hi,
thank you so much.
With your help I could solve all my problems.
DaVinci has a bug somewhere with the database configuration.
I installed everything new and set all auth methods to trust instead of md5,
which was the default from DaVinci.
Just remember
Hi,
thank you so much.
With your help I could solve all my problems.
DaVinci has a bug somewhere with the database configuration.
I installed everything new and set all auth methods to trust instead of md5,
which was the default from DaVinci.
Now everything is working as expected. No errors anymore wit
On 11/13/2016 9:13 AM, Adrian Klaver wrote:
For example: How can I restart the SQL server? Oh sorry we can't help
you with this … ?
https://www.postgresql.org/docs/9.5/static/app-pg-ctl.html
the best way of doing this depends on the way postgres was installed,
and varies by OS/distribution
On 11/13/2016 09:04 AM, aws backup wrote:
Hi Adrian,
thank you for the explanation.
I will look into your links.
I am doing this because I want to make backups from the
database. Ideally automatic backups every night.
The Blackmagic support says they can't help me with this. The Backup and
Resto
On 11/13/2016 08:51 AM, aws backup wrote:
Hi Adrian,
I assume that the postgres database password is the one which is shown
in the DaVinci connect database window.
user: postgres
password: DaVinci
But if I work in the Terminal and change the user "sudo su - postgres" I
have to use my admin pas
On 11/13/2016 8:51 AM, aws backup wrote:
I assume that the postgres database password is the one which is shown
in the DaVinci connect database window.
user: postgres
password: DaVinci
There are no database passwords; that's the password for the postgres
database role/user.
But if I work
Hi Adrian,
thank you for the explanation.
I will look into your links.
I am doing this because I want to make backups from the database. Ideally
automatic backups every night.
The Blackmagic support says they can't help me with this. The Backup and
Restore button in DaVinci does not work.
Every
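The nightly schedule asked about here is usually done with cron; a sketch of such a crontab entry follows (the paths, user, and time are assumptions, not from this thread; note that `%` must be escaped in crontab lines):

```
# m h dom mon dow  command
0 2 * * * /usr/local/bin/pg_dumpall -U postgres > /backups/nightly_$(date +\%F).sql
```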
Hi Adrian,
I assume that the postgres database password is the one which is shown in the
DaVinci connect database window.
user: postgres
password: DaVinci
But if I work in the Terminal and change the user "sudo su - postgres" I have
to use my admin password.
From the DaVinci manual:
>
On 11/13/2016 05:51 AM, aws backup wrote:
Hi,
now I have another problem. Sorry I am an absolute beginner.
When I restore the dumpall backup with
"psql -f infile postgres"
I get a lot of errors ("already exists") and the database is not restored
to the point of the backup.
Yes, because it seem
On 11/13/2016 05:51 AM, aws backup wrote:
Hi Adrian,
thank you for the answer.
There is one password for the postgres database and one for the postgres user.
How are you determining this?
More to the point are you talking about the application(DaVinci Resolve)
or the database itself?
Both
Hi,
now I have another problem. Sorry I am an absolute beginner.
When I restore the dumpall backup with
"psql -f infile postgres"
I get a lot of errors ("already exists") and the database is not restored to the
point of the backup.
I mean after I made the backup I changed something and I expecte
Hi Adrian,
thank you for the answer.
There is one password for the postgres database and one for the postgres user.
Neither of them works, somehow. Is there a way to look up the passwords? I saw in
the documentation that there is a .pgpass file. But I can't find it.
I changed the auth method to tr
On Sat, Nov 12, 2016 at 2:31 PM, Adrian Klaver
wrote:
> On 11/12/2016 01:20 PM, aws backup wrote:
>>
>> Hi,
>>
>> I try to make pg_dumpall backups from a PostgreSQL 9.5 server which is
>> part of the DaVinci Resolve 12.5.3 App on a Mac OS X 10.11.6 system.
>>
>> Unfortunately I get following failu
On 11/12/2016 01:20 PM, aws backup wrote:
Hi,
I try to make pg_dumpall backups from a PostgreSQL 9.5 server which is part of
the DaVinci Resolve 12.5.3 App on a Mac OS X 10.11.6 system.
Unfortunately I get following failure message:
pg_dumpall: could not connect to database "template1": FATAL
On Tue, Oct 22, 2013 at 10:50 AM, James Sewell wrote:
> That looks great, but it doesn't really help with my problem unless I'm
> missing something (very possible!)
>
> I need a way to backup either from SQL in PSQL (possibly \!) or from a
> PG/PLSQL function to a file with a name set from a :vari
Hi James, I want to share a script that I use; I scheduled it with
crontab. It generates a backup for each database. After that, another
scheduled script copies the backups to another server.
#! /bin/bash
TIEMPO=$(date '+%d%m%Y_%H%M%S')
UBICACION="/var/lib/pgsql/9.3/backups_a
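The script above is cut off; a minimal sketch of the same idea (one timestamped dump file per database) might look like the following. The directory and database list are stand-ins so the sketch runs without a server; the real psql/pg_dump commands are shown in comments.

```shell
#!/bin/bash
# Timestamp in the same style as the script above, e.g. 13112016_020000
STAMP=$(date '+%d%m%Y_%H%M%S')
DEST="${DEST:-/tmp/pg_backups}"     # hypothetical backup directory
mkdir -p "$DEST"
# A real run would list databases from the server, e.g.:
#   DBS=$(psql -At -U postgres -c "SELECT datname FROM pg_database WHERE NOT datistemplate")
DBS="${DBS:-db1 db2}"               # stand-in list so this runs anywhere
for DB in $DBS; do
    # Real command: pg_dump -U postgres -Fc "$DB" > "$DEST/${DB}_${STAMP}.dump"
    : > "$DEST/${DB}_${STAMP}.dump" # placeholder write standing in for pg_dump
done
```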
You could write a plperlu function that runs a shell script to back up
your database, you can even pass it parameters and put a call to that in a
trigger.
BUT, this could result in multiple backups running at the same time and
become a performance drag.
--
Mike Nolan
On Tue, Oct 22, 2013 at 9:1
Oh I missed that, I skimmed and thought it was the same as \set
Turns out it's not and it's exactly what I want!
Thanks!
James Sewell,
PostgreSQL Team Lead / Solutions Architect
__
Level 2, 50 Queen St, Melbourne VIC 3000
*P *(+61) 3 8370 8000 * **W* www.
On 10/22/2013 03:41 PM, James Sewell wrote:
Hello All,
Thanks for the replies. Sorry I must have been a bit unclear, I realise I
*could* do this from the shell level, but can I do it from a PSQL
session somehow?
Lucas' \setenv method won't work for you?
Cheers,
James Sewell,
PostgreSQL Tea
Hello All,
Thanks for the replies. Sorry I must have been a bit unclear, I realise I
*could* do this from the shell level, but can I do it from a PSQL session
somehow?
I think the answer is no (I assume unless I write my own C function or
similar).
It seems there is no way of passing a :variable
On 10/21/2013 10:20 PM, James Sewell wrote:
That looks great, but it doesn't really help with my problem unless
I'm missing something (very possible!)
I need a way to backup either from SQL in PSQL (possibly \!) or from a
PG/PLSQL function to a file with a name set from a :variable.
This wou
On 10/21/2013 11:41 PM, Luca Ferrari wrote:
On Tue, Oct 22, 2013 at 7:20 AM, James Sewell wrote:
I need a way to backup either from SQL in PSQL (possibly \!) or from a PG/PLSQL
function to a file with a name set from a :variable.
Could it be something like this:
# \setenv myFile 'filename'
On Tue, Oct 22, 2013 at 7:20 AM, James Sewell wrote:
> I need a way to backup either from SQL in PSQL (possibly \!) or from a
> PG/PLSQL function to a file with a name set from a :variable.
Could it be something like this:
# \setenv myFile 'filename'
# \! pg_dump > $myFile
Unfortunately there
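Luca's suggestion, spelled out as a psql session sketch (psql 9.2+ has \setenv, which exports a variable into the environment of shells started with \!; the names here are illustrative, and this assumes a reachable server):

```
\setenv MYFILE backup_today.sql
\! pg_dump mydb > "$MYFILE"
```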
On Tue, Oct 22, 2013 at 1:20 AM, James Sewell wrote:
> That looks great, but it doesn't really help with my problem unless I'm
> missing something (very possible!)
>
> I need a way to backup either from SQL in PSQL (possibly \!) or from a
> PG/PLSQL function to a file with a name set from a :varia
That looks great, but it doesn't really help with my problem unless I'm
missing something (very possible!)
I need a way to backup either from SQL in PSQL (possibly \!) or from a
PG/PLSQL function to a file with a name set from a :variable.
This would be triggered by a certain action in the databa
On Tue, Oct 22, 2013 at 8:07 AM, James Sewell wrote:
> Hello,
>
> I need to trigger a database dump from a SQL script (or function, but I
> think that is even less likely).
>
> I know I can do:
>
> \! pg_dumpall > /mydir/myfile
>
> Which is fine, but I need to use a variable to set the name.
>
> \
On Wed, 2011-06-22 at 16:17 +0100, Thom Brown wrote:
> On 13 May 2010 18:28, Guillaume Lelarge wrote:
> > Le 13/05/2010 19:24, Thom Brown a écrit :
> >> On 13 May 2010 17:49, Guillaume Lelarge wrote:
> >>
> >>> Le 13/05/2010 14:12, Thom Brown a écrit :
> On 5 March 2009 12:08, Thom Brown wr
On 13 May 2010 18:28, Guillaume Lelarge wrote:
> Le 13/05/2010 19:24, Thom Brown a écrit :
>> On 13 May 2010 17:49, Guillaume Lelarge wrote:
>>
>>> Le 13/05/2010 14:12, Thom Brown a écrit :
On 5 March 2009 12:08, Thom Brown wrote:
> [...]
I'm bringing this thread back to life to se
oops! sorry for the noise
it was an innocuous 'NOTICE' not ERROR as claimed.
thanks for your time.
regds
mallah.
On Fri, May 13, 2011 at 7:59 PM, Tom Lane wrote:
> Rajesh Kumar Mallah writes:
>> We migrated to 9.1beta1 from 9.0.x in our development environment.
>> we shall report if any sign
Rajesh Kumar Mallah writes:
> We migrated to 9.1beta1 from 9.0.x in our development environment.
> we shall report if any significant problems or issue arises out of this.
> one small thing that we observed right now regarding pg_dumpall output is that
> the ALTER USER is referring to some db obj
>> FATAL: Ident authentication failed for user "engineering"
> that says it right there. your LOCAL authentication type is set to
> IDENT in your pg_hba.conf file.
That was it. I have updated my install documentation. Thank you, Margaret
On 01/21/11 11:07 AM, margaretgil...@chromalloy.com wrote:
After a year I have restarted work on a project using Postgresql. The
database I created was in Postgresql 8.2.5. I updated my database
server from Ubuntu 7.10 to 10.4 (lucid) and installed Postgresql
9.0.2 The Postgresql install went
Le 17/05/2010 14:28, Thom Brown a écrit :
> On 13 May 2010 18:28, Guillaume Lelarge wrote:
>
>> Le 13/05/2010 19:24, Thom Brown a écrit :
>>> On 13 May 2010 17:49, Guillaume Lelarge wrote:
>>>
Le 13/05/2010 14:12, Thom Brown a écrit :
> On 5 March 2009 12:08, Thom Brown wrote:
>> [
On 13 May 2010 18:28, Guillaume Lelarge wrote:
> Le 13/05/2010 19:24, Thom Brown a écrit :
> > On 13 May 2010 17:49, Guillaume Lelarge wrote:
> >
> >> Le 13/05/2010 14:12, Thom Brown a écrit :
> >>> On 5 March 2009 12:08, Thom Brown wrote:
> [...]
> >>> I'm bringing this thread back to lif
On 16/05/2010 16:59, Adrian von Bidder wrote:
> * pint in time recovery
This is what happens *after* work - this side of the pond, anyway :-)
Ray.
--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes
On Thursday 13 May 2010 17.50:31 Wang, Mary Y wrote:
> http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-A
> LL (I only found the documentation for 8.3.10), and it looks like
> pg_dumpall > outfile is the best choice.
pg_dumpall or by-database pg_dump has the advantage that it
On Thu, May 13, 2010 at 8:16 PM, Scott Marlowe wrote:
> On Thu, May 13, 2010 at 9:50 AM, Wang, Mary Y
> wrote:
> > Hi,
> >
> > I'm running on Postgres 8.3.8. My system admin is ready to set up a cron
> job for a daily database backup.
> > By reading the documentation over here:
> http://www.post
On Thu, 2010-05-13 at 09:15 -0700, Wang, Mary Y wrote:
> By reading the documentation over here:
> http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-ALL (I
> only found the documentation for 8.3.10),
If you replace 8.3 with current, you will get the current docs.
> and it l
On Thu, May 13, 2010 at 9:50 AM, Wang, Mary Y wrote:
> Hi,
>
> I'm running on Postgres 8.3.8. My system admin is ready to set up a cron job
> for a daily database backup.
> By reading the documentation over here:
> http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-ALL (I
>
On Thu, May 13, 2010 at 11:50 AM, Wang, Mary Y wrote:
> it looks like pg_dumpall > outfile is the best choice. I'd like to ask the
> community to reconfirm.
We do an individual pg_dump on each DB separately, using the '-Fc'
format. The only thing we miss from pg_dumpall we get via "pg_dumpall
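The scheme described here, one custom-format dump per database plus the cluster-wide objects that per-database dumps skip, can be sketched as follows (paths and user are assumptions):

```
pg_dump -Fc -U postgres mydb > /backups/mydb.dump     # custom format, restorable with pg_restore
pg_dumpall -g -U postgres    > /backups/globals.sql   # roles and tablespaces only
```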
Le 13/05/2010 19:24, Thom Brown a écrit :
> On 13 May 2010 17:49, Guillaume Lelarge wrote:
>
>> Le 13/05/2010 14:12, Thom Brown a écrit :
>>> On 5 March 2009 12:08, Thom Brown wrote:
[...]
>>> I'm bringing this thread back to life to see if there are any further
>>> thoughts on this. It wo
On 13 May 2010 17:49, Guillaume Lelarge wrote:
> Le 13/05/2010 14:12, Thom Brown a écrit :
> > On 5 March 2009 12:08, Thom Brown wrote:
> >> [...]
> > I'm bringing this thread back to life to see if there are any further
> > thoughts on this. It would be nice to have a complete backup of a
> da
On Thu, 2010-05-13 at 09:15 -0700, Wang, Mary Y wrote:
> Hi,
>
> I'm running on Postgres 8.3.8. My system admin is ready to set up a cron job
> for a daily database backup.
> By reading the documentation over here:
> http://www.postgresql.org/docs/8.3/static/backup-dump.html#BACKUP-DUMP-ALL (I
Le 13/05/2010 14:12, Thom Brown a écrit :
> On 5 March 2009 12:08, Thom Brown wrote:
>> [...]
> I'm bringing this thread back to life to see if there are any further
> thoughts on this. It would be nice to have a complete backup of a database
> cluster in custom format, and use pg_restore to rest
On 5 March 2009 12:08, Thom Brown wrote:
> Thanks for the suggestion Marc. It would still be nice to have the custom
> format included as an option though with pg_restore supporting it, just for
> consistency.
>
> I will, however, follow your recommendation.
>
> Thom
>
> 2009/3/5 Marc Mamin
>
>
> What version are you running? IIRC it should remember the password
> between databases.
8.4.0 on Linux/x86_64. It does not, and the man page clearly says:
"pg_dumpall needs to connect several times to the
PostgreSQL server (once per database). If you use password
authentication it will
Krzysztof Barlik wrote:
> Hi,
>
> is it possible to force pg_dumpall to ask for
> password only once (connecting as superuser 'postgres') ?
>
> Entering it for every database is a bit annoying.
What version are you running? IIRC it should remember the password
between databases.
--
Alvaro He
On Tue, 2009-10-06 at 22:59 +0200, Krzysztof Barlik wrote:
> Hi,
>
> is it possible to force pg_dumpall to ask for
> password only once (connecting as superuser 'postgres') ?
>
> Entering it for every database is a bit annoying.
take a look at .pgpass documentation
>
> Thanks,
> Krzysztof
>
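The .pgpass file mentioned above lives at ~/.pgpass (it must be mode 0600 or libpq ignores it) and holds one colon-separated rule per line; a sketch matching this question, with an assumed password, would be:

```
# hostname:port:database:username:password   (* matches anything)
localhost:5432:*:postgres:secret
```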
2009/10/6 Krzysztof Barlik
> Hi,
>
> is it possible to force pg_dumpall to ask for
> password only once (connecting as superuser 'postgres') ?
>
> Entering it for every database is a bit annoying.
>
Kind of, you'll either use an environment variable:
http://www.postgresql.org/docs/8.4/interact
Hello,
you can first pipe the output of pg_dump to a compression tool:
pg_dump ... | gzip > file.gz
Instead of gzip, I'm using pigz, which is faster thanks to multithreading.
HTH,
Marc Mamin
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-o
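Marc's pipeline, made concrete as a runnable sketch: printf stands in for pg_dump (which needs a live server), and on the restore side the output would be piped into psql instead of stdout.

```shell
# Dump side: compress on the fly instead of writing a large plain file.
# Real command: pg_dump mydb | gzip > /tmp/mydb.sql.gz
printf 'CREATE TABLE t (id int);\n' | gzip > /tmp/mydb.sql.gz

# Restore side: decompress straight into the client, no intermediate file.
# Real command: gunzip -c /tmp/mydb.sql.gz | psql mydb
gunzip -c /tmp/mydb.sql.gz
```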
Thanks for the suggestion Marc. It would still be nice to have the custom
format included as an option though with pg_restore supporting it, just for
consistency.
I will, however, follow your recommendation.
Thom
2009/3/5 Marc Mamin
> Hello,
>
> you can first pipe the output of pg_dump to a
[EMAIL PROTECTED] wrote:
> I've run into a problem while migrating an existing 8.2.7 data base to a
> new server running 8.3.3 (although I think the version numbers may not
> matter -- I think I've seen this problem in the past and just lived with
> it since so much of Postgresql is so great!).
>
Tom Lane wrote:
> [EMAIL PROTECTED] writes:
>> Thus, when piping the output (from legacy host 192.168.2.2) to
>> populate the newly initialized cluster, by way of running (on the new
>> host 192.168.2.3)
>> pg_dumpall -h 192.168.2.2|psql
>> an error occurs in that first section when the s
Tom Lane wrote:
> [EMAIL PROTECTED] writes:
>> Thus, when piping the output (from legacy host 192.168.2.2) to
>> populate the newly initialized cluster, by way of running (on
the new
>> host 192.168.2.3)
>> pg_dumpall -h 192.168.2.2|psql
>> an error occurs in that first sectio
[EMAIL PROTECTED] writes:
> Thus, when piping the output (from legacy host 192.168.2.2) to
> populate the newly initialized cluster, by way of running (on the new
> host 192.168.2.3)
> pg_dumpall -h 192.168.2.2|psql
> an error occurs in that first section when the script attempts to
> se
Tom Lane schrieb:
> Michael Enke <[EMAIL PROTECTED]> writes:
>> I updated pg_conversion to set the condefault=false where I need it.
>
> Why are you needing to replace an existing default conversion?
> Do we have a bug in it?
No, not a bug. But I need CP852 encoding conversion. Maybe you can take
On Wed, May 07, 2008 at 05:38:12PM -0400, Tom Lane wrote:
> Michael Enke <[EMAIL PROTECTED]> writes:
> > I updated pg_conversion to set the condefault=false where I need it.
>
> Why are you needing to replace an existing default conversion?
> Do we have a bug in it?
He doesn't really. See his mess
Michael Enke <[EMAIL PROTECTED]> writes:
> I updated pg_conversion to set the condefault=false where I need it.
Why are you needing to replace an existing default conversion?
Do we have a bug in it?
> pg_dumpall does NOT write that table out and therefore it is not possible to
> restore, this upd
On Friday 18 January 2008 14:38, Steve Clark wrote:
> Thanks for everyone that replied to my query about pg_dumpall.
>
>
> Now another question/issue - anytime I use createdb the resulting db
> ends up
> with UTF-8 encoding unless I use the -E switch. Is there a way to make
> the
> default be sql_a
Erik Jones wrote:
On Jan 17, 2008, at 1:08 PM, Greg Smith wrote:
On Thu, 17 Jan 2008, Tom Lane wrote:
There isn't any good way to guarantee time coherence of dumps
across two databases.
Whether there's a good way depends on what you're already doing.
If you're going to the trouble of
On Jan 17, 2008, at 1:08 PM, Greg Smith wrote:
On Thu, 17 Jan 2008, Tom Lane wrote:
There isn't any good way to guarantee time coherence of dumps
across two databases.
Whether there's a good way depends on what you're already doing.
If you're going to the trouble of making a backup usin
On Thu, Jan 17, 2008 at 11:14:22AM -0800, Glyn Astill wrote:
> > begin;
> > set transaction isolation level serializable;
> >
> > --- begin dumping stuff;
> >
>
> Wouldn't that just lock everything so nothing could be updated? Or
> just the table it is outputting?
PostgreSQL uses MVCC, which me
>Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Glyn Astill wrote:
> > Out of interest, how does pg_dump manage to do a snapshot of a
> > database at an instant in time?
> >
> > My mental picture of pg_dump was just a series of queries dumping
> out
> > the tables...
>
> begin;
> set transaction iso
On Thu, 17 Jan 2008, Tom Lane wrote:
There isn't any good way to guarantee time coherence of dumps across two
databases.
Whether there's a good way depends on what you're already doing. If
you're going to the trouble of making a backup using PITR anyway, it's not
hard to stop applying new l
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> That's intentional, because it doesn't. What you get is a pg_dump
>> snapshot of each database in sequence; those snapshots don't all
>> correspond to the same time instant. There isn't any good way to
>> guarantee time coherence of
Glyn Astill wrote:
> Out of interest, how does pg_dump manage to do a snapshot of a
> database at an instant in time?
>
> My mental picture of pg_dump was just a series of queries dumping out
> the tables...
begin;
set transaction isolation level serializable;
--- begin dumping stuff;
--
Alva
Out of interest, how does pg_dump manage to do a snapshot of a
database at an instant in time?
My mental picture of pg_dump was just a series of queries dumping out
the tables...
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Steve Clark <[EMAIL PROTECTED]> writes:
> > does pg_dumpall make consistent
Tom Lane wrote:
> Steve Clark <[EMAIL PROTECTED]> writes:
> > does pg_dumpall make consistent backups if the database is being used
> > concurrently?
> > Even though the man page doesn't say it does.
>
> That's intentional, because it doesn't. What you get is a pg_dump
> snapshot of each databas
Steve Clark <[EMAIL PROTECTED]> writes:
> does pg_dumpall make consistent backups if the database is being used
> concurrently?
> Even though the man page doesn't say it does.
That's intentional, because it doesn't. What you get is a pg_dump
snapshot of each database in sequence; those snapshots
Steve Atkins wrote:
On Nov 9, 2007, at 8:52 AM, Tom Hart wrote:
I'm sure you guys have heard this about 100 times, and I've done some
research on Google and found out some things, but I still have a
couple questions.
As I'm sure you may have guessed from the subject, I'm trying to
schedule
On Nov 9, 2007, at 8:52 AM, Tom Hart wrote:
I'm sure you guys have heard this about 100 times, and I've done
some research on Google and found out some things, but I still have
a couple questions.
As I'm sure you may have guessed from the subject, I'm trying to
schedule (under windows) p
2007/8/12, Ketema <[EMAIL PROTECTED]>:
> Hello. I performed a pg_dumpall of my 8.1 db cluster and upgraded to
> 8.2.4. the dump went fine and the script file appears to be fine. I
> have a mixture of UTF8 and SQL_ASCII databases in the dump. However I
> am not able to restore my data after the
Tony Caduto <[EMAIL PROTECTED]> writes:
> I don't think the whole --output-version thing would be a good idea
> either, but it would be a simple matter to add some logic for the
> GRANT ON SEQUENCE, which seems to be the only thing that is really
> causing problems at least between 8.1 and 8.2.
Joshua D. Drake wrote:
other deficiencies let's not add to complexity by having an
--output-version.
If you want that, create a wrapper program that calls to different
statically compiled versions of pg_dump.
Joshua D. Drake
I am afraid that I have to agree with Tom here. Pg_dump has plent
Dave Page wrote:
>
>> --- Original Message ---
>> From: "Joshua D. Drake" <[EMAIL PROTECTED]>
>> To: Dave Page <[EMAIL PROTECTED]>
>> Sent: 15/03/07, 23:51:18
>> Subject: Re: [GENERAL] pg_dumpall and version confusion
>>
Jorge Godoy wrote:
> "Dave Page" <[EMAIL PROTECTED]> writes:
>
>>> --- Original Message ---
>>> From: "Joshua D. Drake" <[EMAIL PROTECTED]>
>>> To: Dave Page <[EMAIL PROTECTED]>
>>> Sent: 15/03/07, 23:51:18
"Dave Page" <[EMAIL PROTECTED]> writes:
>> --- Original Message ---
>> From: "Joshua D. Drake" <[EMAIL PROTECTED]>
>> To: Dave Page <[EMAIL PROTECTED]>
>> Sent: 15/03/07, 23:51:18
>> Subject: Re: [GENERAL] pg_dumpall and
> --- Original Message ---
> From: "Joshua D. Drake" <[EMAIL PROTECTED]>
> To: Dave Page <[EMAIL PROTECTED]>
> Sent: 15/03/07, 23:51:18
> Subject: Re: [GENERAL] pg_dumpall and version confusion
>
> If you want that, create a wrapper program th
Dave Page wrote:
> Tom Lane wrote:
>> I have *no* interest in trying to get it to support an --output-version
>> switch...
>
> There speaks a man who has the luxury of not having to worry about
> multi-version admin tools :-(
I am afraid that I have to agree with Tom here. Pg_dump has plenty of
o
Porting it to Windows could be tricky though, given its use of symlinks
and wrapper scripts.
FWIW, as of Vista and Server Longhorn, Windows now supports Symlinks. I
don't know about wrapper scripts though.
--
Paul Lambert
Database Administrator
AutoLedgers
---(end
On Thu, Mar 15, 2007 at 09:19:31PM +, Dave Page wrote:
> Even if we went through the insane pain (for me at least) of maintaining
> multiple branches of the Windows build of PostgreSQL, *and* custom
> builds of the utilities built against a single libpq, we still couldn't
> ship a single set
Alvaro Herrera wrote:
Dave Page wrote:
Tom Lane wrote:
I have *no* interest in trying to get it to support an --output-version
switch...
There speaks a man who has the luxury of not having to worry about
multi-version admin tools :-(
Is it more difficult for you to ship a pg_dump-8.0, pg_dum
Dave Page wrote:
> Tom Lane wrote:
> >I have *no* interest in trying to get it to support an --output-version
> >switch...
>
> There speaks a man who has the luxury of not having to worry about
> multi-version admin tools :-(
Is it more difficult for you to ship a pg_dump-8.0, pg_dump-8.1, etc
a
Tom Lane wrote:
I have *no* interest in trying to get it to support an --output-version
switch...
There speaks a man who has the luxury of not having to worry about
multi-version admin tools :-(
/D
---(end of broadcast)---
TIP 4: Have you sear
Tony Caduto <[EMAIL PROTECTED]> writes:
> Alvaro Herrera wrote:
>> What's wrong with that? 8.2 will understand the GRANT ON SEQUENCE
>> without a problem.
>>
> Yes, but 8.1 and earlier wont!
You're missing the point entirely. The versioning contract for pg_dump
is that version M.N can dump from
On Thu, 2007-03-15 at 11:53, Tony Caduto wrote:
> Dave Page wrote:
> > This is a problem I've been thinking about on and off recently - and I
> > am starting to come to the conclusion that shipping each version of
> > the utilities is the only way things are likely to work unless someone
> > put
Dave Page wrote:
This is a problem I've been thinking about on and off recently - and I
am starting to come to the conclusion that shipping each version of
the utilities is the only way things are likely to work unless someone
puts some really significant effort into adding backwards
compatibi
Tony Caduto wrote:
Alvaro Herrera wrote:
What's wrong with that? 8.2 will understand the GRANT ON SEQUENCE
without a problem.
Yes, but 8.1 and earlier wont!
If you dump a 8.1 database (with 8.2 pg_dump) and then use that dump to
restore to another 8.1 server, any permissions on sequenc
Alvaro Herrera wrote:
What's wrong with that? 8.2 will understand the GRANT ON SEQUENCE
without a problem.
Yes, but 8.1 and earlier wont!
If you dump a 8.1 database (with 8.2 pg_dump) and then use that dump to
restore to another 8.1 server, any permissions on sequences will fail
to res
Tony Caduto wrote:
> Tom Lane wrote:
> >We try to make pg_dump
> >cope with prior server releases, but since we lack a time machine we
> >can't go back and teach old versions about subsequent server changes...
> >
> Tom,
> How come version 8.2 of pg_dump uses GRANT ON SEQUENCE when dumping a
> 8
Tom Lane wrote:
We try to make pg_dump
cope with prior server releases, but since we lack a time machine we
can't go back and teach old versions about subsequent server changes...
Tom,
How come version 8.2 of pg_dump uses GRANT ON SEQUENCE when dumping a
8.1 or older database?
Was it just a
Alan J Batsford <[EMAIL PROTECTED]> writes:
> When I search my system for anything postgres I find a boatload of 8.0.4
> source, and a boatload of 7.4.8 documentation.
"rpm -qa | grep postgres" would probably prove enlightening. It sounds
like you have postgresql-server at 8.1.4 and the other sub
> Richard Broersma Jr <[EMAIL PROTECTED]> writes:
> >> Are you interested in dumping out the page of pg_index that
> >> contains this record, using pg_filedump?
>
> > I've attached the results.log to the email.
>
> Well, there's our smoking gun:
> 1bfc: 74746e6f 1c00 ttno
Richard Broersma Jr <[EMAIL PROTECTED]> writes:
>> Are you interested in dumping out the page of pg_index that
>> contains this record, using pg_filedump?
> I've attached the results.log to the email.
Well, there's our smoking gun:
Item 11 -- Length: 398 Offset: 6844 (0x1abc) Flags: USED
> >> Are you interested in dumping out the page of pg_index that
> >> contains this record, using pg_filedump?
>
> > Sure, only how do I use pg_filedump?
>
> Find out the ctid of the busted pg_index record; the first part of it is
> the page number. Then
>
> pg_filedump -i -f -R page_numb
Richard Broersma Jr <[EMAIL PROTECTED]> writes:
>> Are you interested in dumping out the page of pg_index that
>> contains this record, using pg_filedump?
> Sure, only how do I use pg_filedump?
Find out the ctid of the busted pg_index record; the first part of it is
the page number. Then
> Are you interested in dumping out the page of pg_index that
> contains this record, using pg_filedump?
Sure, only how do I use pg_filedump?
> In any case, my advice for recovering from the problem once you get
> tired of investigating is to drop and recreate this index. (If you're
> not sure