On Wed, 2008-05-28 at 19:11 -0400, Mike wrote:
>
> Can somebody point to the most logical place in the code to intercept
> the WAL writes? (just a rough direction would be enough)- or if this
> doesn’t make sense at all, another suggestion on where to get the
> data?
I don't think that intercep
On Wed, 2008-05-28 at 22:45 +0100, Simon Riggs wrote:
> On Wed, 2008-05-28 at 16:28 -0400, Tom Lane wrote:
> > Gregory Stark <[EMAIL PROTECTED]> writes:
> > > "Tom Lane" <[EMAIL PROTECTED]> writes:
> > >> This is expected to take lots of memory because each row-requiring-check
> > >> generates an entry in the pending trigger event list.
On Wed, May 28, 2008 at 7:11 PM, Mike <[EMAIL PROTECTED]> wrote:
>> Can somebody point to the most logical place in the code to intercept the
>> WAL writes? (just a rough direction would be enough)
>
> XLogInsert
>
Great- I'll take a look at that code.
>> or if this doesn't make sense at all, another suggestion on where to get the data?
On Thu, 2008-05-29 at 09:57 +0530, Pavan Deolasee wrote:
> On Thu, May 29, 2008 at 2:02 AM, Simon Riggs <[EMAIL PROTECTED]> wrote:
> >
> >
> > I'm not happy that the VACUUM waits. It might wait a very long time and
> > cause worse overall performance than the impact of the second scan.
> >
>
> Let's not get too paranoid about the wait.
On Wed, 2008-05-28 at 18:17 -0400, Gregory Stark wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>
> > AFAICS we must aggregate the trigger checks. We would need a special
> > property of triggers that allowed them to be aggregated when two similar
> > checks arrived. We can then use hash aggregation to accumulate them.
On Thu, May 29, 2008 at 2:02 AM, Simon Riggs <[EMAIL PROTECTED]> wrote:
>
>
> I'm not happy that the VACUUM waits. It might wait a very long time and
> cause worse overall performance than the impact of the second scan.
>
Let's not get too paranoid about the wait. It's a minor detail in the
whole t
All,
I'm really uncomfortable with just having recursive queries return a
cost of "1000" or some similar approach. That's always been a problem
for SRFs and it looks to be a bigger problem for WITH RECURSIVE.
However, it doesn't seem like the computer science establishment has
made a lot of headway in
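The estimation problem is easy to see with even a trivial recursive query,
since the number of iterations depends entirely on the data and the
termination condition. A sketch using the WITH RECURSIVE syntax the patch
adds:

    WITH RECURSIVE t(n) AS (
        SELECT 1
      UNION ALL
        SELECT n + 1 FROM t WHERE n < 100
    )
    SELECT sum(n) FROM t;
    -- Nothing tells the planner this recursion stops after 100 rows;
    -- that falls out of the WHERE clause at runtime, which is why a
    -- fixed default row estimate is so often badly wrong.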
Tom,
I think this patch is plenty complicated enough without adding useless
restrictive options.
+1 for no additional GUC options.
--Josh Berkus
You will also encounter full-page writes (whole-page images), which can
be produced during a hot backup and by the first modification to a data
page after a checkpoint (if the full_page_writes GUC option is turned on).
2008/5/29 Mike <[EMAIL PROTECTED]>:
> On Wed, May 28, 2008 at 8:30 PM, Mike <[EMAIL PROTECTED]> wrote:
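Whether those full-page images appear at all is controlled by the
full_page_writes setting; a minimal way to check it on a stock server:

    SHOW full_page_writes;
    -- 'on' (the default) means the first change to a data page after a
    -- checkpoint puts the whole page image into WAL, not just the delta.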
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
> Are there any HEAD fixes proposed for it?
It's been fixed in CVS for a month. We just haven't pushed a release yet.
regards, tom lane
Tom Lane wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
Are there any HEAD fixes proposed for it?
It's been fixed in CVS for a month. We just haven't pushed a release yet.
Let me try it out and see what I find out in my EAStress workload.
Regards,
Jignesh
Are there any HEAD fixes proposed for it?
I am seeing some scaling problems with EAStress which uses JDBC with
8.3.0, and this one could be the reason why I am seeing some problems. I
will be happy to try it out and report on it. The setup is ready right
now if someone can point me to a patch
On Wed, May 28, 2008 at 8:30 PM, Mike <[EMAIL PROTECTED]> wrote:
>> When you say a bit of decoding, is that because the data written to the
>> logs is after the query parser/planner? Or because it's written in several
>> chunks? Or?
>
> Because that's the actual recovery record. There is no SQL text,
On Wed, 28 May 2008, Darren Reed wrote:
Is it feasible to add the ability to catch exceptions from COPY?
Depends on what you consider feasible. There's a start to a plan for that
on the TODO list: http://www.postgresql.org/docs/faqs.TODO.html but it's
not trivial to implement.
It's also
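In the meantime, a whole COPY can at least be attempted inside a PL/pgSQL
exception block, which traps failure of the entire statement rather than
of individual rows. A rough sketch; the function, table name, and file
path are all hypothetical:

    CREATE OR REPLACE FUNCTION try_copy() RETURNS boolean AS $$
    BEGIN
        -- One bad row still aborts the whole COPY; the exception block
        -- only keeps the session usable and reports what went wrong.
        COPY my_table FROM '/tmp/data.csv' WITH CSV;
        RETURN true;
    EXCEPTION WHEN others THEN
        RAISE NOTICE 'COPY failed: %', SQLERRM;
        RETURN false;
    END;
    $$ LANGUAGE plpgsql;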
On Wed, May 28, 2008 at 8:30 PM, Mike <[EMAIL PROTECTED]> wrote:
> When you say a bit of decoding, is that because the data written to the logs
> is after the query parser/planner? Or because it's written in several
> chunks? Or?
Because that's the actual recovery record. There is no SQL text, ju
Is it feasible to add the ability to catch exceptions from COPY?
Darren
On Wed, 2008-05-28 at 19:11 -0400, Mike wrote:
> Can somebody point to the most logical place in the code to intercept
> the WAL writes? (just a rough direction would be enough)- or if this
> doesn’t make sense at all, another suggestion on where to get the
> data? (I’m trying to avoid doing it usi
On Wed, May 28, 2008 at 7:05 PM, Sabbiolina <[EMAIL PROTECTED]> wrote:
> Hello, in my particular case I need to configure Postgres to handle only a
> few concurrent connections, but I need it to be blazingly fast, so I need it
> to cache everything possible. I've changed the config file and multiplied
> all memory-related values by 10, still Postgres uses less than 50 MB
On Wed, May 28, 2008 at 7:11 PM, Mike <[EMAIL PROTECTED]> wrote:
> Can somebody point to the most logical place in the code to intercept the
> WAL writes? (just a rough direction would be enough)
XLogInsert
> or if this doesn't make sense at all, another suggestion on where to get
> the data? (I'
Hello,
I'm new to the core PostgreSQL code, so pardon the question if the answer is
really obvious, and I'm just missing it, but I've got a relatively large web
application that uses PostgreSQL as a back-end database, and we're heavily
using memcached to cache frequently accessed data.
I'm
Simon Riggs wrote:
Hmm, I think the question is: How many hint bits need to be set
before we mark the buffer dirty? (N)
Should it be 1, as it is now? Should it be never? Never is a long
time. As N increases, clog accesses increase. So it would seem there
is likely to be an optimal value for N
Hello, in my particular case I need to configure Postgres to handle only a
few concurrent connections, but I need it to be blazingly fast, so I need it
to cache everything possible. I've changed the config file and multiplied
all memory-related values by 10, still Postgres uses less than 50 MB
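The usual first suspects for this are the standard memory GUCs; a minimal
checklist (real setting names, but only a sketch of the tuning advice):

    SHOW shared_buffers;        -- shared buffer cache; changing it requires a restart
    SHOW effective_cache_size;  -- planner's estimate of OS caching, not an allocation
    SHOW work_mem;              -- per-sort/per-hash memory, allocated per operation

Note also that PostgreSQL leans on the OS page cache, so the backend
process footprint alone understates how much data is really cached.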
Gregory Stark <[EMAIL PROTECTED]> writes:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>> We certainly need a TODO item for "improve RI checks during bulk
>> operations".
> I have a feeling it's already there. Hm. There's a whole section on RI
> triggers but the closest I see is this, neither of th
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> AFAICS we must aggregate the trigger checks. We would need a special
> property of triggers that allowed them to be aggregated when two similar
> checks arrived. We can then use hash aggregation to accumulate them. We
> might conceivably need to spill t
On Wed, 2008-05-28 at 16:28 -0400, Tom Lane wrote:
> Gregory Stark <[EMAIL PROTECTED]> writes:
> > "Tom Lane" <[EMAIL PROTECTED]> writes:
> >> This is expected to take lots of memory because each row-requiring-check
> >> generates an entry in the pending trigger event list.
>
> > Hm, it occurs to
On Wed, 2008-05-28 at 16:55 -0400, Gregory Stark wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>
> > So the idea is to have one pass per VACUUM, but make that one pass do
> > the first pass of *this* VACUUM and the second pass of the *last*
> > VACUUM.
>
> I think that's exactly the same as the original suggestion of having HOT
> pruning do the second pass of the last VACUUM.
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> So the idea is to have one pass per VACUUM, but make that one pass do
> the first pass of *this* VACUUM and the second pass of the *last*
> VACUUM.
I think that's exactly the same as the original suggestion of having HOT
pruning do the second pass of the last VACUUM.
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> "Tom Lane" <[EMAIL PROTECTED]> writes:
>>> This is expected to take lots of memory because each row-requiring-check
>>> generates an entry in the pending trigger event list.
>
>> Hm, it occurs to me that we could
On Wed, 2008-05-28 at 16:56 +0530, Pavan Deolasee wrote:
> 2. It then waits for all the existing transactions to finish to make
> sure that everyone can see the change in the pg_class row
I'm not happy that the VACUUM waits. It might wait a very long time and
cause worse overall performance than the impact of the second scan.
Gregory Stark <[EMAIL PROTECTED]> writes:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>> This is expected to take lots of memory because each row-requiring-check
>> generates an entry in the pending trigger event list.
> Hm, it occurs to me that we could still do a join against the pending event
> tr
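The aggregated check being discussed amounts to replacing millions of
per-row probes with one set-based anti-join; in SQL terms it is roughly
the following (reusing the table names from the COPY example elsewhere in
the thread):

    SELECT c.station
      FROM sputnik.ccc24 c
      LEFT JOIN sputnik.station24 s ON s.id = c.station
     WHERE s.id IS NULL
     LIMIT 1;  -- any row returned here is an RI violation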
2008/5/27 Zdenek Kotala <[EMAIL PROTECTED]>:
> Coutinho napsal(a):
>>
>> this is listed on TODO:
>> http://www.postgresql.org/docs/faqs.TODO.html
>>
>> Add features of Oracle-style packages (Pavel)
>>
My last idea was only global variables for plpgsql. It needs a hack of
plpgsql :(. But it can be
[moving to -hackers]
"Tom Lane" <[EMAIL PROTECTED]> writes:
> "Tomasz Rybak" <[EMAIL PROTECTED]> writes:
>> I tried to use COPY to import 27M rows to table:
>> CREATE TABLE sputnik.ccc24 (
>> station CHARACTER(4) NOT NULL REFERENCES sputnik.station24 (id),
>> moment INTEGER NOT NULL,
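The usual workaround while bulk-load RI checking remains on the TODO list
is to drop the constraint for the load and re-add it afterwards, so
validation runs as one query instead of ~27M queued trigger events. A
sketch; the constraint name is hypothetical:

    ALTER TABLE sputnik.ccc24 DROP CONSTRAINT ccc24_station_fkey;
    COPY sputnik.ccc24 FROM '/tmp/ccc24.dat';
    ALTER TABLE sputnik.ccc24
        ADD FOREIGN KEY (station) REFERENCES sputnik.station24 (id);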
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> 1. Before VACUUM starts, it updates the pg_class row of the target
> table, noting VACUUM_IN_PROGRESS for the target table.
If I understand correctly nobody would be able to re-use any line-pointers
when a vacuum is in progress? I find that a bi
Magnus Hagander <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Yeah: LOG level sorts differently in the two cases; it's fairly high
>> priority for server log output and much lower for client output.
> Ok, easy fix if we break them apart. Should we continue to accept
> values that we're not goin
Tommy Gildseth <[EMAIL PROTECTED]> writes:
> One obvious disadvantage of this approach, is that I need to connect and
> disconnect in every function. A possible solution to this, would be
> having a function, e.g. dblink_exists('connection_name'), that returns
> true/false depending on whether the
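Something close to the suggested dblink_exists() can already be layered on
dblink_get_connections(), assuming a dblink build that provides that
function; a minimal sketch:

    CREATE OR REPLACE FUNCTION dblink_exists(conname text)
    RETURNS boolean AS $$
        -- dblink_get_connections() returns NULL when no named
        -- connections are open, hence the COALESCE.
        SELECT COALESCE($1 = ANY (dblink_get_connections()), false);
    $$ LANGUAGE sql;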
Jorgen Austvik - Sun Norway wrote:
Hi.
pg_regress has a --dbname option (which actually take a list of
database names):
--dbname=DB use database DB (default "regression")
... but the PostgreSQL regression test suite does not really support
this:
[EMAIL PROTECTED]:regre
Jorgen Austvik - Sun Norway <[EMAIL PROTECTED]> writes:
> pg_regress has a --dbname option (which actually take a list of database
> names):
> --dbname=DB use database DB (default "regression")
> ... but the PostgreSQL regression test suite does not really support this:
That
Jorgen Austvik - Sun Norway <[EMAIL PROTECTED]> writes:
> we would like to be able to use and ship pg_regress and the PostgreSQL
> test suite independently of the PostgreSQL build environment, for
> testing and maybe even as a separate package to be build and shipped
> with the OS for others to test their setup.
Dave Cramer <[EMAIL PROTECTED]> writes:
> On 23-May-08, at 9:20 AM, Tom Lane wrote:
>> There was some discussion a week or so back about scheduling a set of
>> releases in early June, but it's not formally decided.
> Now that PGCon is over, has there been any more discussion?
Yeah, I just posted
Hi.
pg_regress has a --dbname option (which actually take a list of database
names):
--dbname=DB use database DB (default "regression")
... but the PostgreSQL regression test suite does not really support this:
[EMAIL PROTECTED]:regress] ggrep -R "regression" sql/* | grep -
Yup, we're overdue for that, so:
After some discussion among core and the packagers list, we have
tentatively set June 9 as the release date for minor updates of
all supported PG release branches (back to 7.4). As has been the
recent practice, code freeze will occur the preceding Thursday, June 5
Hi,
we would like to be able to use and ship pg_regress and the PostgreSQL
test suite independently of the PostgreSQL build environment, for
testing and maybe even as a separate package to be build and shipped
with the OS for others to test their setup. Does this sound like a sane
and OK thing to do?
On 23-May-08, at 9:20 AM, Tom Lane wrote:
Dave Cramer <[EMAIL PROTECTED]> writes:
Any word on 8.3.2 ?
Obviously, nothing is happening during PGCon ;-)
There was some discussion a week or so back about scheduling a set of
releases in early June, but it's not formally decided.
Now that PGCon is over, has there been any more discussion?
On Wed, 2008-05-28 at 06:08 -0400, Gregory Stark wrote:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>
> > (Although that argument might not hold water for a bulk seqscan: you'll
> > have hinted all the tuples and then very possibly throw the page away
> > immediately.
>
> That seems like precisely the case where we don't want to dirty the buffer.
Tom brought this up during the PGCon developer meet. After thinking a
bit about it, I think it's actually possible to avoid the second heap
scan, especially now that we have HOT. If we can remove the second pass,
not only would that speed up vacuum, but also reduce lots of redundant
read and write IO
"Tom Lane" <[EMAIL PROTECTED]> writes:
> (Although that argument might not hold water for a bulk seqscan: you'll
> have hinted all the tuples and then very possibly throw the page away
> immediately.
That seems like precisely the case where we don't want to dirty the buffer.
> So counting the
Tom Lane wrote:
> Magnus Hagander <[EMAIL PROTECTED]> writes:
> >> One point of interest is that for client_min_messages and
> >> log_min_messages, the ordering of the values has significance, and
> >> it's different for the two cases.
>
> > Is there any actual reason why they're supposed to be tr
Alex Hunsaker wrote:
> On Tue, May 27, 2008 at 12:05 PM, Magnus Hagander
> <[EMAIL PROTECTED]> wrote:
> > Alex Hunsaker wrote:
> >> On Tue, May 27, 2008 at 10:20 AM, Tom Lane <[EMAIL PROTECTED]>
> >> wrote:
> >> > I am wondering if it's a good idea to hide the redundant entries
> >> > to reduce clu
I have locked down access to all dblink_* functions, so that only
certain privileged users have access to them, and instead provide a set
of SRF functions defined as security definer functions, where I connect
to the remote server, fetch some data, disconnect from remote server,
and return the
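In skeletal form that wrapper pattern might look like this; the connection
string, remote query, and result shape are all placeholders:

    CREATE OR REPLACE FUNCTION fetch_remote_names()
    RETURNS SETOF text AS $$
    BEGIN
        PERFORM dblink_connect('conn', 'host=remote dbname=app user=ro');
        RETURN QUERY
            SELECT t.name
              FROM dblink('conn', 'SELECT name FROM users')
                   AS t(name text);
        -- RETURN QUERY does not exit the function, so the disconnect
        -- below still runs before the rows are handed back.
        PERFORM dblink_disconnect('conn');
    END;
    $$ LANGUAGE plpgsql SECURITY DEFINER;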
On Tue, 2008-05-27 at 19:32 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > My proposal is to have this as a two-stage process. When we set the hint
> > on a tuple in a clean buffer we mark it BM_DIRTY_HINTONLY, if not
> > already dirty. If we set a hint on a buffer that is BM