On Wed, 1 Sep 2004, Richard Huxton wrote:
> Date: Wed, 01 Sep 2004 09:45:56 +0100
>
> Serguei A. Mokhov wrote:
> > Hello,
> >
> > Just poking around to see if anyone is working on resurrecting the concept
> > of pg_upgrade after all these years?
>
> You probably want to join the (very recent) thread subject = "version
> upgrade" started by Andrew Rawnsley.
> --
> Richard Huxton
> Archonet Ltd
--
Serguei A. Mokhov wrote:
Hello,
Just poking around to see if anyone is working on resurrecting the concept
of pg_upgrade after all these years? If not, I think I have nearly ready a
high-level design of how the whole upgrade procedure will work, but I just
need to clear out some ideas and put them together in a more-or-le[...]
Jan Wieck <[EMAIL PROTECTED]> writes:
> If you want to prevent "accidential" access, start postmaster on a
> non-standard port.
... thus making pg_upgrade subject to all sorts of interesting questions
about whether particular ports get filtered by kernel iptables rules?
This doesn't seem like a r[...]
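The non-standard-port idea under discussion can be sketched as follows; the port number, $PGDATA, and the exact pg_ctl invocation are illustrative assumptions, not from the thread:

```shell
# Start the cluster on a non-standard port so clients aimed at the
# standard port (5432) don't connect mid-upgrade. Port 54321 is an
# illustrative assumption.
pg_ctl start -D "$PGDATA" -o "-p 54321" -w

# ... perform the upgrade work against port 54321 ...

pg_ctl stop -D "$PGDATA" -m fast
```

As Tom notes above, this only hides the server; it does not stop a determined local client, and firewall rules may filter the chosen port.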
On Tue, 16 Dec 2003, Jan Wieck wrote:
> If you want to prevent "accidential" access, start postmaster on a
> non-standard port.
That seems like an unfriendly thing to do. You'd have to check to see what
port is "standard" for this particular installation, and pick something
else. You may choos[...]
Hi,
> Alternative thought: just recommend that if possible, people
> take a filesystem dump of their old PGDATA directory after
> stopping the old postmaster. This would be sufficient for
> retreating to the prior version if needed. It might or might
> not be slower than copying all the file[...]
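Tom's filesystem-dump suggestion might look like this in practice; both paths are illustrative assumptions:

```shell
# With the old postmaster stopped, archive the entire old PGDATA
# directory so the prior version can be restored if the upgrade fails.
# Both paths are illustrative assumptions.
PGDATA=${PGDATA:-/var/lib/pgsql/data}
BACKUP=${BACKUP:-/tmp/pgdata-pre-upgrade.tar.gz}
tar -czf "$BACKUP" -C "$(dirname "$PGDATA")" "$(basename "$PGDATA")"
```

Restoring is the reverse: stop the new postmaster, unpack the tarball over the data directory, and restart the old binaries.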
Tom Lane wrote:
> Do you mean using pg_dump with the '-b' option?
Yeah. Don't you think that should preserve comments on large objects,
now that such comments are alleged to be a supported facility?
> How does pg_dump dump the blobs?
It dumps them, reloads them (which causes them to be assigned new OIDs)
and then runs around and tries to fix up references to th[...]
"Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> On Sun, 2003-12-14 at 18:02, Tom Lane wrote:
>> How large N will be in practice remains to be seen, of course, but I'd
>> expect something on the order of 4 or 5.
> Ok, this is what I was looking for. If we are serious about this, would
> it mak[...]
Christopher Kings-Lynne wrote:
>> How do you mean? pg_dump never writes out the COMMENT ON commands...
> Oh, okay, it doesn't work.
> Care to think about how to fix that?
I think you're going to have to explain the exact problem to me - I
don't quite get what you mean?
Do you mean using pg_dump with the '-b' option?
How does pg_dump dump the blobs?
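For reference, the '-b' switch being discussed includes blobs (large objects) in the dump, and it needs one of the non-plain-text archive formats. The database names here are illustrative assumptions:

```shell
# Dump mydb including large objects (-b) in custom format (-Fc), then
# restore into newdb. On reload the large objects get new OIDs, as Tom
# describes above.
pg_dump -Fc -b -f mydb.dump mydb
pg_restore -d newdb mydb.dump
```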
> No. Large object OIDs are preserved in the given proposal.
> (Note to self: I wonder whether the recently-added COMMENT ON LARGE
> OBJECT facility works at all over dump/reload...)
How do you mean? pg_dump never writes out the COMMENT ON commands...
Chris
Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> Doesn't matter. Catalog entries are dumped and reloaded; there is no
>> carry-forward of OIDs.
> Large objects included?
No. Large object OIDs are preserved in the given proposal.
(Note to self: I wonder whether the recently-added COMMENT ON LARGE
OBJECT facility works at all over dump/reload...)
Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
> What about cached OIDs in view and function definitions, etc...?
Doesn't matter. Catalog entries are dumped and reloaded; there is no
carry-forward of OIDs.
I suppose if someone were storing OIDs of tables or functions or views
in user tables[...]
> No. The proposed pg_upgrade procedure doesn't try to reproduce OIDs of
> catalog entries (other than toast-table OIDs, which are never
> preassigned anyway), so there's no issue.
> Good point though --- thanks for thinking about it.
What about cached OIDs in view and function definitions, etc...?
Like[...]
Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
> Will we now have to be careful to NEVER re-use OIDs in the system catalogs.
No. The proposed pg_upgrade procedure doesn't try to reproduce OIDs of
catalog entries (other than toast-table OIDs, which are never
preassigned anyway), so there's no issue.
Good point though --- thanks for thinking about it.
Per prior discussion, we will enforce some sort of limit on how often
the representation of user tables/indexes can be changed. The idea will
be to "batch" such changes so that you only have to do a dump/reload
every N major releases instead of every one. In other words, pg_upgrade
will work for[...]
"Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
>> [ pg_upgrade won't be able to change user table representation ]
> How limiting is the above? Does this mean that pg_upgrade will be
> rendered invalid if there is an on-disk representation change? Do we
> think we will make it from 7.4 -> 7.5 without on-disk [...]
On Fri, 2003-12-12 at 14:00, Tom Lane wrote:
> Currently the no-table-contents-changes restriction keeps us from
> upgrading from versions older than 7.4 anyway (since type NUMERIC had its
> on-disk representation changed in 7.4). We could possibly upgrade 7.3
> databases that contain no NUMERIC c[...]
Dave Smith <[EMAIL PROTECTED]> writes:
> Why not go the other way.
> 1) Dump the schemas.
> 2) Initdb with the new schemas in a tmp PGDATA
> 3) backup the schemas in the current PGDATA
> 4) move the new schemas from the new db into the current one.
This seems like approximately the same thing exc[...]
Why not go the other way.
1) Dump the schemas.
2) Initdb with the new schemas in a tmp PGDATA
3) backup the schemas in the current PGDATA
4) move the new schemas from the new db into the current one.
This means that when doing an update you would only have to have space for
the system catalogs, not t[...]
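Steps 1 and 2 of the proposal above might be sketched like this; the database name and the scratch directory are illustrative assumptions:

```shell
# Step 1: dump only the schemas (DDL), not the table data.
pg_dump --schema-only -f mydb-schema.sql mydb

# Step 2: initdb a fresh, temporary PGDATA containing the new version's
# system catalogs.
initdb -D /tmp/pgdata.new
```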
On Fri, 2003-12-12 at 15:42, Tom Lane wrote:
> Alternative thought: just recommend that if possible, people take
> a filesystem dump of their old PGDATA directory after stopping
> the old postmaster. This would be sufficient for retreating to
> the prior version if needed. It might or might not b[...]
"Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> On Fri, 2003-12-12 at 14:51, Andrew Dunstan wrote:
>> Maybe use an option which you would disable on Windows to copy the files
>> instead of hardlinking them.
> I think this would be a good feature even without hard link problems.
> If I am a p[...]
Matthew T. O'Connor wrote:
On Fri, 2003-12-12 at 14:51, Andrew Dunstan wrote:
> re Windows: pipes, yes, hard links, no (and no sane symlinks either)
Actually, NTFS does support hard links, there is just no support for it
in any MS file management GUI.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnfile[...]
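A minimal demonstration of what hard-linking data files buys here (two names for the same file, no copy); the paths imitate PostgreSQL's layout but are stand-ins, and on NTFS the equivalent is the CreateHardLink() API referenced above:

```shell
# Create a stand-in "relation file" and hard-link it into a new data
# directory: both names refer to the same inode, so no data is copied.
mkdir -p old_pgdata/base/1 new_pgdata/base/1
echo 'stand-in relation file' > old_pgdata/base/1/16384
ln old_pgdata/base/1/16384 new_pgdata/base/1/16384
ls -li old_pgdata/base/1/16384 new_pgdata/base/1/16384  # same inode number
```

A copy-instead-of-link mode, as Andrew suggests, would just swap the `ln` for a `cp` at the cost of time and disk space.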
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> Then again, in the case of pg_upgrade, wouldn't just disabling access from
> anywhere except localhost prevent others from getting in?
Not if your normal operating mode includes connections from clients
running locally. I really don't see any clean[...]
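Marc's localhost-only idea corresponds to a pg_hba.conf along these lines (an illustrative sketch in the 7.3-era column layout, not from the thread); as Tom points out, it still admits any client running on the box itself:

```
# pg_hba.conf sketch -- illustrative, not from the thread.
# TYPE  DATABASE  USER  IP-ADDRESS  IP-MASK          METHOD
local   all       all                                trust
host    all       all   127.0.0.1   255.255.255.255  trust
# (no other 'host' lines: remote connections are refused)
```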
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Instead, all operations should be done through a standalone backend.
> This would also be a nice solution for people who want a standalone,
> server-less database system. But for the purpose of pg_upgrade it
> seems like a lot o[...]
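The standalone-backend approach Tom favors looks roughly like this (the modern spelling of the switch is shown; the database name and $PGDATA are illustrative assumptions):

```shell
# With the postmaster shut down, talk to one database through a
# single-user backend; no other client can connect meanwhile.
echo 'SELECT version();' | postgres --single -D "$PGDATA" mydb
```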
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> re Windows: pipes, yes, hard links, no (and no sane symlinks either) -
> also of course no (sane) shell - is this going to be a script or a C
> program?
C, certainly.
> Maybe use an option which you would disable on Windows to copy the files
> instead of hardlinking them.
[...]
re Windows: pipes, yes, hard links, no (and no sane symlinks either) -
also of course no (sane) shell - is this going to be a script or a C
program?
Maybe use an option which you would disable on Windows to copy the files
instead of hardlinking them. Yes it would take lots more time and space, [...]
Tom Lane wrote:
> I think it's important to be able to run pg_upgrade with the
> postmaster shut down. Otherwise there is too much risk that some
> other user will change the database while we are working. The
> original pg_upgrade script left it to the DBA to ensure this wouldn't
> happen, but t[...]
I am planning to solve the ancient problem of updating to a new major
version without dump/reload, by means of writing a new, more bulletproof
implementation of Bruce's old pg_upgrade script. Here are some design
notes --- please comment.
The upgrade scenario
I envision the[...]