One thing I'd really like to see in this common object info catalog is the DDL
which created or altered the referenced object.
If we could additionally make it possible to have ordinary triggers on this
catalog, it would solve most logical DDL replication problems.
Hannu
Sent from Samsung Galaxy N
On Tue, 2013-01-08 at 17:17 -0500, Stephen Frost wrote:
> Seriously tho, the argument for not putting these things into the
> various individual catalogs is that they'd create bloat and these items
> don't need to be performant. I would think that the kind of
> timestamps that we're talking ab
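A rough sketch of the kind of DDL capture being asked for above, using the ordinary event-trigger machinery rather than triggers on a catalog; the ddl_history table and log_ddl() function are invented names, for illustration only:

-- Illustration only: logs every DDL command's tag and text into a user table.
CREATE TABLE ddl_history (
    logged_at   timestamptz NOT NULL DEFAULT now(),
    command_tag text        NOT NULL,
    query_text  text
);

CREATE FUNCTION log_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- TG_TAG is the command tag (e.g. CREATE TABLE); current_query() returns
    -- the statement that fired the trigger.
    INSERT INTO ddl_history (command_tag, query_text)
    VALUES (TG_TAG, current_query());
END;
$$;

CREATE EVENT TRIGGER track_ddl
    ON ddl_command_end
    EXECUTE PROCEDURE log_ddl();

A logical replication consumer could then read ddl_history and replay the statements, which is roughly the gap the catalog-with-triggers idea is meant to close.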
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> 2013/1/8 Peter Eisentraut :
> > On 1/5/13 11:04 AM, Stephen Frost wrote:
> > Yeah, actually, the other day I was thinking we should get rid of all
> > the system catalogs and use a big EAV-like schema instead. We're not
> > getting any relational-
2013/1/8 Peter Eisentraut :
> On 1/5/13 11:04 AM, Stephen Frost wrote:
>> Creating a separate catalog (or two) every time we want to track XYZ for
>> all objects is rather overkill... Thinking about this a bit more, and
>> noting that pg_description/shdescription more-or-less already exist as a
>>
On 1/5/13 11:04 AM, Stephen Frost wrote:
> Creating a separate catalog (or two) every time we want to track XYZ for
> all objects is rather overkill... Thinking about this a bit more, and
> noting that pg_description/shdescription more-or-less already exist as a
> framework for tracking 'something
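To make the suggested framework concrete, here is a hypothetical layout modeled on pg_description's (objoid, classoid, objsubid) key; the name pg_objecttimes and the exact columns are invented for illustration, not taken from the thread:

-- Hypothetical sketch only: no such catalog exists.
CREATE TABLE pg_objecttimes (
    objoid      oid         NOT NULL,  -- OID of the object itself
    classoid    oid         NOT NULL,  -- OID of the system catalog the object lives in
    objsubid    int4        NOT NULL,  -- column number, or 0 for the object as a whole
    createtime  timestamptz NOT NULL,  -- set when the object is created
    altertime   timestamptz,           -- bumped when the object is altered
    description text                   -- the existing comment, NULL if none; variable-width field kept last
);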
On Sat, Jan 5, 2013 at 11:04 AM, Stephen Frost wrote:
> * Fabrízio de Royes Mello (fabriziome...@gmail.com) wrote:
>> * also we discussed creating two new catalogs, one local and another
>> shared (like pg_description and pg_shdescription) to track creation times
>> of all database objects.
>
>
On Fri, Jan 4, 2013 at 1:07 PM, Peter Eisentraut wrote:
> On 1/3/13 3:26 PM, Robert Haas wrote:
>> It's true, as we've often
>> said here, that leveraging the OS facilities means that we get the
>> benefit of improving OS facilities "for free" - but it also means that
>> we never exceed what the O
* Fabrízio de Royes Mello (fabriziome...@gmail.com) wrote:
> Understood... a "COMMENT" is a database object, so if we add a creation
> time column to the pg_description/shdescription tables, how do we track its
> creation time?
When it's NULL it "doesn't exist", in this case, when it transitions
from N
* Stephen Frost wrote:
>
> Yes, and have the actual 'description' field (as it's variable) at the
> end of the catalog.
>
> Regarding the semantics of it- I was thinking about how directories and
> unix files work. Basically, adding or removing a sub-object would
> update the alter time on the ob
* Fabrízio de Royes Mello (fabriziome...@gmail.com) wrote:
> But those tables are filled only when we execute a COMMENT ON statement...
> so your idea is to create a 'null' comment every time we create a single
> object... is it?
Yes, and have the actual 'description' field (as it's variable) at the
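Under the hypothetical pg_objecttimes sketch above, with a (possibly NULL-description) row inserted for every object at creation time, reading the timestamps back would be a plain catalog query; 'public.mytable' is a placeholder:

-- Hypothetical: assumes the sketched pg_objecttimes catalog is populated on CREATE.
SELECT createtime, altertime
  FROM pg_objecttimes
 WHERE classoid = 'pg_class'::regclass    -- the catalog the object lives in
   AND objoid   = 'public.mytable'::regclass
   AND objsubid = 0;                      -- 0 = the table itself, not a column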
* Stephen Frost wrote:
>
> * Fabrízio de Royes Mello (fabriziome...@gmail.com) wrote:
> > * also we discussed creating two new catalogs, one local and another
> > shared (like pg_description and pg_shdescription) to track creation times
> > of all database objects.
>
> Creating a separate catalo
* Fabrízio de Royes Mello (fabriziome...@gmail.com) wrote:
> * also we discussed creating two new catalogs, one local and another
> shared (like pg_description and pg_shdescription) to track creation times
> of all database objects.
Creating a separate catalog (or two) every time we want to trac
On Fri, Jan 4, 2013 at 4:07 PM, Peter Eisentraut wrote:
> On 1/3/13 3:26 PM, Robert Haas wrote:
> > It's true, as we've often
> > said here, that leveraging the OS facilities means that we get the
> > benefit of improving OS facilities "for free" - but it also means that
> > we never exceed what
On 1/3/13 3:26 PM, Robert Haas wrote:
> It's true, as we've often
> said here, that leveraging the OS facilities means that we get the
> benefit of improving OS facilities "for free" - but it also means that
> we never exceed what the OS facilities are able to provide.
And that should be the decid
On 01/03/2013 02:30 PM, Kevin Grittner wrote:
> Andrew Dunstan wrote:
>> I don't especially have a horse in the race, but ISTM that if you want
>> the information you want it to be able to persist across dump/restore,
>> at least optionally. If you can happily lose it when you're forced to
>> recover using
Andrew Dunstan wrote:
> I don't especially have a horse in the race, but ISTM that if you want
> the information you want it to be able to persist across dump/restore,
> at least optionally. If you can happily lose it when you're forced to
> recover using a logical dump then it's not that impor
On 01/03/2013 04:51 PM, Kevin Grittner wrote:
> Robert Haas wrote:
>> Christopher Browne wrote:
>>> these timestamps Should Not be captured or carried forward by
>>> pg_dump.
>>> If we put a creation time into pg_database or pg_class, then
>>> streaming replication will, as a "physical" replication
>>> mechanism, car
Robert Haas wrote:
> Christopher Browne wrote:
>> these timestamps Should Not be captured or carried forward by
>> pg_dump.
>> If we put a creation time into pg_database or pg_class, then
>> streaming replication will, as a "physical" replication
>> mechanism, carry the timestamp forward into re
On Thu, Jan 3, 2013 at 12:54 PM, Christopher Browne wrote:
> Yep, and I think that the behaviour of tar pretty nicely characterizes
> what's troublesome here. It is quite likely that a tar run will *capture*
> the creation time of a file, but if you pull data from a tar archive, it is
> by no mea
On Thu, Jan 3, 2013 at 12:27 PM, Robert Haas wrote:
> On Thu, Jan 3, 2013 at 11:15 AM, Hannu Krosing wrote:
>> This is what I did with my sample pl/python function ;)
>
> Yeah, except that the "c" in "ctime" does not stand for create, and
> therefore the function isn't necessarily reliable. The
On Thu, Jan 3, 2013 at 11:15 AM, Hannu Krosing wrote:
> This is what I did with my sample pl/python function ;)
Yeah, except that the "c" in "ctime" does not stand for create, and
therefore the function isn't necessarily reliable. The problem is
even worse for tables, where a rewrite may remove
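The filesystem timestamps under discussion can also be inspected from SQL; this is not Hannu's pl/python sample, just an equivalent illustration using pg_stat_file (superuser-only), with 'mytable' as a placeholder:

-- Illustration only: the kernel's timestamps for the table's current data file.
-- "change" is Unix ctime (inode-change time), not creation time, and a rewrite
-- (CLUSTER, VACUUM FULL, some ALTER TABLE forms) gives the table a new file,
-- so none of these reliably say when the table was created.
SELECT access, modification, change
  FROM pg_stat_file(pg_relation_filepath('mytable'));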
On 01/03/2013 03:09 PM, Robert Haas wrote:
> On Thu, Jan 3, 2013 at 8:46 AM, Hannu Krosing wrote:
>> How is "what does database creation date mean?" a different question ?
>> It is the same question as :
>> what is the creation date of a db when I create a replica of my database from
>> backup?
>> does it depend o
On 1/3/13 6:34 AM, Hannu Krosing wrote:
>>> If what you want is something close to current unix file time semantics
>>> (ctime, mtime, atime) then why not just create a function to look up
>>> these
>>> attributes on database directory and/or database files ?
>> Because too many things change those
On Thu, Jan 3, 2013 at 8:46 AM, Hannu Krosing wrote:
> How is "what does database creation date mean?" a different question ?
>
> It is the same question as :
>
> what is the creation date of a db when I create a replica of my database from
> backup?
>
> does it depend on how I restore my replica ?
>
>
On 01/03/2013 02:42 PM, Stephen Frost wrote:
> * Hannu Krosing (ha...@krosing.net) wrote:
>> But then some customer comes and wants it to mean "when was this
>> replica database created" ?
> That's an entirely different question, imv, than what we're talking
> about.
> I'm not saying that it won't be asked,
* Hannu Krosing (ha...@krosing.net) wrote:
> But then some customer comes and wants it to mean "when was this
> replica database created" ?
That's an entirely different question, imv, than what we're talking
about.
I'm not saying that it won't be asked, but as it's a different question,
we can lo
On 01/03/2013 02:17 PM, Stephen Frost wrote:
> * Hannu Krosing (ha...@krosing.net) wrote:
>> Can't we actually fix these to preserve file creation date like tar
>> does and still keep
>> unix file semantics ?
> I'm not sure that I really see the advantage to trying to use the
> filesystem to keep this informa
* Hannu Krosing (ha...@krosing.net) wrote:
> Can't we actually fix these to preserve file creation date like tar
> does and still keep
> unix file semantics ?
I'm not sure that I really see the advantage to trying to use the
filesystem to keep this information for us..?
> So it is as about agreei
On 01/03/2013 11:18 AM, Andres Freund wrote:
> On 2013-01-03 11:03:17 +0100, Hannu Krosing wrote:
>> On 12/28/2012 03:14 AM, Stephen Frost wrote:
>> ...
>>> I agree that what I was suggesting would be possible to implement with
>>> event triggers, but I see that as a rather advanced feature that most
>>> users are
On 2013-01-03 11:03:17 +0100, Hannu Krosing wrote:
> On 12/28/2012 03:14 AM, Stephen Frost wrote:
> ...
> >I agree that what I was suggesting would be possible to implement with
> >event triggers, but I see that as a rather advanced feature that most
> >users aren't going to understand or implement
* Robert Haas (robertmh...@gmail.com) wrote:
> On Sat, Dec 29, 2012 at 10:26 AM, Andres Freund
> wrote:
> > A shared table for event triggers sounds like it would be the far easier
> > solution (9.4+ that is).
>
> The problem is that the event trigger table is just a pointer to a
> function, a
On Sat, Dec 29, 2012 at 10:26 AM, Andres Freund wrote:
> A shared table for event triggers sounds like it would be the far easier
> solution (9.4+ that is).
The problem is that the event trigger table is just a pointer to a
function, and there's no procedure OID to store in that shared catalog
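For context, the per-database pg_event_trigger catalog already ties each event trigger to a function OID, which is exactly what a shared catalog could not carry, since pg_proc is database-local:

-- Plain query against the existing (local) pg_event_trigger catalog.
SELECT evtname, evtevent, evtfoid::regproc AS trigger_function
  FROM pg_event_trigger;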
* Andres Freund (and...@2ndquadrant.com) wrote:
> I don't think autonomous transactions are the biggest worry
> here. Transactions essentially already span multiple databases, so that's
> not really a problem in this context. Making it possible to change
> catalogs while still being active in anothe
On 2012-12-29 09:59:49 -0500, Stephen Frost wrote:
> * Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
> > It sounds to me like either an autonomous transaction with the capability
> > to run the independent transaction in another database, or some dblink
> > creative use case. Another approach would