Regarding: Optimizer TODO item: Have EXPLAIN ANALYZE issue NOTICE messages
when the estimated and actual row counts differ by a specified percentage.
Hi,
After going through the thread related to the above-mentioned TODO item, my
understanding is that:
Issuing notices when a problematic no
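As a rough illustration of the mismatch this TODO item is about (the table,
data, and plan numbers below are invented for the example):

CREATE TABLE t (id int, val int);
INSERT INTO t SELECT g, g % 10 FROM generate_series(1, 100000) g;

-- With missing or stale statistics the planner's estimate can be far off:
EXPLAIN ANALYZE SELECT * FROM t WHERE val = 1;
--   Seq Scan on t  (cost=0.00..1693.00 rows=500 width=8)
--                  (actual time=0.020..15.2 rows=10000 loops=1)
-- The idea is to emit a NOTICE automatically whenever the estimated and
-- actual row counts differ by more than a configured percentage.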
Tom Lane wrote:
I notice that the recent patch to enforce finding a tclsh broke a couple
of buildfarm machines. On reflection, I remember that recent Tcl
versions don't automatically create a 'tclsh' symlink, only a
version-numbered program such as 'tclsh8.3'. I suggest that maybe we
ought to allow that without
Dean Rasheed <[EMAIL PROTECTED]> writes:
> This new version fixes that and also includes a little patch to psql so
> that it ignores any backend notices during tab-completion, which
> otherwise just get in the way. Trace during tab-completion still goes to
> the server log, if enabled, since
Zdenek Kotala <[EMAIL PROTECTED]> writes:
> I performed a review and prepared my own patch which contains only the
> probes without any issues. I suggest committing this patch, because the
> rest of the patch is independent and can be committed in the next commit
> fest after rework.
I looked at this patch a little bi
On Thu, 24 Jul 2008, Greg Sabino Mullane wrote:
Bite the bullet and start showing the buffer settings as a pure number of bytes
everywhere, and get rid of the confusing '8kB' unit in pg_settings?
There are already some changes needed in this area to execute the
full GUC cleanup/wizard pl
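For reference, this is the kind of display being complained about; the
setting value shown here is just an example:

SELECT name, setting, unit FROM pg_settings WHERE name = 'shared_buffers';
--       name       | setting | unit
--  ----------------+---------+------
--   shared_buffers | 4096    | 8kB     (4096 pages of 8kB, i.e. 32MB)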
Here's a patch implementing the TODO item "Add a separate TRUNCATE
permission". Hopefully I found all the bits that needed to be
modified to make this work.
Any feedback appreciated.
Thanks,
...Robert
Index: doc/src/sgml/user-manag.sgml
==
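A sketch of how the proposed privilege would presumably be used once
committed (the table and role names are made up):

GRANT TRUNCATE ON TABLE accounts TO batch_loader;
-- batch_loader can now do:
TRUNCATE accounts;
-- but still cannot DELETE individual rows unless DELETE is also granted.
REVOKE TRUNCATE ON TABLE accounts FROM batch_loader;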
At David's request I've been looking through this patch.
Regarding documentation: if it would help, I can write some; I have
already made a start on writing down what is going on internally in
order to understand it myself.
I've found three more bugs so far:
1)
create view v2(id) as values (1);
Joshua D. Drake wrote:
It seems to me that quite a bit of pg_dump's functionality could be
pushed into PostgreSQL as functions. This would end up defining an API
on its own.
pg_dump the executable would just be a shell that calls the functions
in appropriate order.
[snip]
There could be a
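Purely as a sketch of the idea (none of these server-side functions exist;
the names are invented here, although the existing pg_get_viewdef() and
pg_get_indexdef() functions already work along these lines):

-- Hypothetical: enumerate dumpable objects in a schema ...
SELECT objid, objname, objtype
  FROM pg_dump_list_objects('public');
-- ... and fetch the DDL for one of them.
SELECT pg_dump_object_ddl('public.my_table'::regclass);
-- pg_dump the executable would then only order these calls and write the
-- resulting script out.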
Andrew Gierth <[EMAIL PROTECTED]> writes:
> However, tracing through the code suggests that neither ExecInsert nor
> intorel_receive will modify a passed raw tuple - ExecInsert calls
> ExecMaterializeSlot before heap_insert, and intorel_receive calls
> ExecCopySlotTuple before heap_insert.
> So is
Stephen Frost wrote:
* David Fetter ([EMAIL PROTECTED]) wrote:
This subject keeps coming up, then back down, etc.
That also got me to thinking about the "pgscript" type of idea, and
about wildcards for commands, and being able to loop through objects in
a scriptable way that's not a really ug
David Fetter <[EMAIL PROTECTED]> writes:
> What would a libpgdump API look like?
Hmm. Start with requirements:
* Ability to enumerate the objects in a database
* Ability to fetch the "properties" of individual objects
(SQL definition is only one property, eg. pg_dump considers
owner, schema, AC
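In terms of what exists today, the first two requirements roughly
correspond to catalog queries of the following sort (simplified;
'some_view' is a placeholder name):

-- Enumerate user objects and one of their properties (the owner):
SELECT n.nspname, c.relname, c.relkind,
       pg_get_userbyid(c.relowner) AS owner
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');

-- Fetch the SQL definition "property" of a single object:
SELECT pg_get_viewdef('some_view'::regclass);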
Hi Ryan,
I agree, I have had applications use uint types to avoid using
a larger data type. I have actually had to patch an application
developed for MySQL uint8 to signed int8 on PostgreSQL. In that
case, the only operations that were performed were assignment
and lookup. If we need to use the n
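One workaround of the sort being discussed, just as an illustration (the
domain name is arbitrary): emulate an unsigned 32-bit column with a domain
over a wider signed type plus a range check.

CREATE DOMAIN uint4 AS bigint
  CHECK (VALUE >= 0 AND VALUE <= 4294967295);

CREATE TABLE counters (hits uint4 NOT NULL DEFAULT 0);
INSERT INTO counters (hits) VALUES (3000000000);  -- fits; signed int4 would overflow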
While trying to construct testcases for a patch, I ran into this:
execAmi.c has a function ExecMayReturnRawTuples which indicates whether
a given plan might return tuples that come straight from a table rather
than having been projected.
InitPlan() uses this to force the addition of a junk filter
* David Fetter ([EMAIL PROTECTED]) wrote:
> This subject keeps coming up, then back down, etc.
>
> What would a libpgdump API look like?
Honestly, when I was thinking about the "-w" command and whatnot, my
first reaction was "gee, it'd be nice to be able to dump the schema
using a \copy schema or
Folks,
This subject keeps coming up, then back down, etc.
What would a libpgdump API look like?
Cheers,
David.
--
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: [EMAIL PROTECTED]
Remember to vote!
Consi
Simon Riggs wrote:
If we wish to protect pg_dump's role, then let's have another utility or
some packaging that can be used for its other hidden roles. That sounds
like we might all agree on that. pg_dev_dump? How should it look?
Actually, if we libraryise pg_dump and add some correspondin
On Sat, 2008-07-26 at 09:08 -0400, Andrew Dunstan wrote:
> So, IMNSHO, making a full database backup is still pg_dump's principal
> function.
Making copies for development databases is also a common use case; if not
more common than backups, it is at least not far behind. This was my
stated use c
Simon Riggs wrote:
In a world
where PITR exists the role and importance of pg_dump has waned
considerably. What *is* its principal function? Does it have just one?
I think that's probably a rather narrow perspective.
PITR doesn't work across versions or architectures or OSes. And if
yo
On Sat, 2008-07-26 at 07:47 -0400, Andrew Dunstan wrote:
>
> Simon Riggs wrote:
> > As a dev tool it makes sense.
> >
> I think we have yet another case for moving the core bits of pg_dump
> into a library that can then be used by lots of clients. Until we do
> that we're going to get continu
Simon Riggs wrote:
On Fri, 2008-07-25 at 12:38 -0700, Joshua D. Drake wrote:
Gained. Code complexity.
Hardly; the patch is very small. I would recognise that as a factor
otherwise.
What I see is a recipe for inconsistent, un-restorable backups without a
user realizing what they hav
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> "Manoel Henrique" <[EMAIL PROTECTED]> writes:
>>> Yes, I'm relying on the assumption that backwards scan has the same cost as
>>> forward scan, why shouldn't it?
>
>> Because hard drives only spin one direction
>
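As background, PostgreSQL can already read a btree index in either
direction; whether a backward read is really as cheap as a forward one is
the point being debated here (plan lines abridged, assuming a populated
table):

CREATE TABLE events (id bigint PRIMARY KEY, payload text);

EXPLAIN SELECT * FROM events ORDER BY id ASC  LIMIT 10;
--   Limit
--     ->  Index Scan using events_pkey on events
EXPLAIN SELECT * FROM events ORDER BY id DESC LIMIT 10;
--   Limit
--     ->  Index Scan Backward using events_pkey on events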
On Fri, 2008-07-25 at 12:38 -0700, Joshua D. Drake wrote:
> Gained. Code complexity.
Hardly; the patch is very small. I would recognise that as a factor
otherwise.
> What I see is a recipe for inconsistent, un-restorable backups without a
> user realizing what they have done.
I agree on the backu
On Sat, 2008-07-26 at 10:17 +0200, Markus Wanner wrote:
> What I still don't understand is why you are speaking about "logical"
> replication. It rather sounds like an ordinary log shipping approach,
> where the complete WAL is sent over the wire. Nothing wrong with that,
> it certainly fi
On Sat, 2008-07-26 at 10:17 +0200, Markus Wanner wrote:
> > Expensive as in we need to parse and handle each statement
> > separately. If we have a single parameter then much lower overhead.
>
> Is that really much of a concern when otherwise caring about network
> and I/O latency?
I belie
Hi,
Simon Riggs wrote:
There is no sync() during WAL apply when each individual transaction
hits commit. This is because there is "no WAL", i.e. changes come from
WAL to the database, so we have no need of a second WAL to protect the
changes being made.
Aha, that makes much more sense to me no