Josh Berkus wrote:
>> That is exactly the core idea I was trying to suggest in my rambling
>> message. Just that small additional bit of information transmitted and
>> published to the master via that route, and it's possible to optimize
>> this problem in a way not available now. And it's a way
Greg Stark wrote:
> On Fri, Feb 26, 2010 at 9:19 PM, Tom Lane wrote:
>> There's *definitely* not going to be enough information in the WAL
>> stream coming from a master that doesn't think it has HS slaves.
>> We can't afford to record all that extra stuff in installations for
which it's just useless overhead.
Tom,
I just downloaded the patch, but it seems to be in binary format. Could
you resend it to me?
Thanks,
Gokul.
On Sat, May 30, 2009 at 3:12 AM, Tom Lane wrote:
> Josh Berkus writes:
> > Tom,
> >> Is anyone interested enough to try it if I code it?
>
> > If you're patient for results, sure
I am just adding my two cents; please ignore it if it's totally irrelevant.
While we do performance testing/tuning of any applications, the important
things, a standard monitoring requirement from a database are
a) Different type of wait events and the time spent in each of them
b) Top ten Queries
Jaime Casanova writes:
> On Fri, Feb 26, 2010 at 7:12 PM, Michael Glaesemann
> wrote:
>> In any event, I couldn't get your example to work on Postgres 8.4 regardless
>> due to the varchar2 type. Which version of Postgres are you using?
>>
>> test=# CREATE TABLE footable(id int4, name varchar2(1
On Fri, Feb 26, 2010 at 7:12 PM, Michael Glaesemann
wrote:
>
> In any event, I couldn't get your example to work on Postgres 8.4 regardless
> due to the varchar2 type. Which version of Postgres are you using?
>
> test=# CREATE TABLE footable(id int4, name varchar2(10));
> ERROR: type "varchar2" d
>
>
> Actually Tom, I am not able to understand that completely. But what you are
> saying is that in the current scenario, when there is a broken-data-type-based
> index, it will return no results, but will never return wrong results.
> So the update will never corrupt the heap data. But I take
Bruce Momjian wrote:
What happened to this patch?
Returned with feedback in October after receiving a lot of review, no
updated version submitted since then:
https://commitfest.postgresql.org/action/patch_view?id=98
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Servic
Aidan Van Dyk wrote:
Would we (ya, the royal we) be willing to say that if you want the
benefit of removing the MVCC overhead of long-running queries you need
to run PITR backup/archive recovery, and if you want SR, you get a
closed-loop master-follows-slave-xmin behaviour?
To turn that quest
* Greg Smith [100226 23:39]:
> Just not having the actual query running on the master is such a
> reduction in damage that I think it's delivering the essence of what
> people are looking for regardless. That it might be possible in some
> cases to additionally avoid the overhead that come
Greg Stark wrote:
But if they move from having a plain old PITR warm standby to having
one they can run queries on they might well assume that the big
advantage of having the standby to play with is precisely that they
can do things there that they have never been able to do on the master
previou
Joshua D. Drake wrote:
On Sat, 27 Feb 2010 00:43:48 +, Greg Stark wrote:
I want my ability to run large batch queries without any performance
or reliability impact on the primary server.
+1
I can use any number of other technologies for high availability.
Remove "must be an
Buildfarm member caracara has been failing the last few days because of
this:
LOG: could not bind socket for statistics collector: Cannot assign requested
address
LOG: disabling statistics collector for lack of working socket
That code hasn't changed recently, AFAIK, so I'm thinking something'
On Fri, Feb 26, 2010 at 9:44 PM, Tom Lane wrote:
> Greg Stark writes:
>
>> What extra entries?
>
> Locks, just for starters. I haven't read enough of the code yet to know
> what else Simon added. In the past it's not been necessary to record
> any transient information in WAL, but now we'll hav
On Sat, 27 Feb 2010 00:43:48 +, Greg Stark wrote:
> On Fri, Feb 26, 2010 at 11:56 PM, Greg Smith
wrote:
>> This is also the reason why the whole "pause recovery" idea is a
>> fruitless
>> path to wander down. The whole point of this feature is that people
>> have a
>> secondary server available for high-availability, *first and foremost*
>
> On Feb 26, 2010, at 0:55 , Дмитрий Фефелов wrote:
>
> > http://developer.postgresql.org/pgdocs/postgres/release-9-0.html
> >
> > Performance section:
> >
> >> Simplify the forms foo <> true and foo <> false to foo = false and
> >> foo = true during query optimization.
> >
> > Will it work cor
On Sat, Feb 27, 2010 at 2:43 AM, Greg Smith wrote:
>
> But if you're running the 8 hour report on the master right now, aren't you
> already exposed to a similar pile of bloat issues while it's going? If I
> have the choice between "sometimes queries will get canceled" vs. "sometimes
> the master
Alvaro Herrera wrote:
> Tom Lane wrote:
> > Alvaro Herrera writes:
> > > Tom Lane wrote:
> > >> It looks to me like the code in AlterSetting() will allow an ordinary
> > >> user to blow away all settings for himself. Even those that are for
> > >> SUSET variables and were presumably set for him b
What happened to this patch?
---
Mark Kirkwood wrote:
> Where I work they make extensive use of Postgresql. One of the things
> they typically want to know about are lock waits. Out of the box in
> there is not much in the
Tom Lane wrote:
> Bruce Momjian writes:
> > Whatever happened to this patch?
>
> I think we bounced it on the grounds that it would represent a
> fundamental change in plpgsql behavior and break a whole lot of
> applications. People have been relying on plpgsql's coerce-via-IO
> assignment behav
Bruce Momjian writes:
> Whatever happened to this patch?
I think we bounced it on the grounds that it would represent a
fundamental change in plpgsql behavior and break a whole lot of
applications. People have been relying on plpgsql's coerce-via-IO
assignment behavior for ten years. If you pre
Michael Meskes wrote:
> On Fri, May 01, 2009 at 03:49:47PM +0300, Heikki Linnakangas wrote:
> > ECPG constructs internal struct names for VARCHAR fields using the field
> > name and line number it's defined on. In a contrived example, though,
> > that's not unique. Consider the following exampl
Bruce Momjian writes:
I don't see this as ever having been applied. What should we do with
> it?
I believe we decided that there wasn't any measurable win.
regards, tom lane
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to you
Whatever happened to this patch?
---
Nikhil Sontakke wrote:
> Hi,
>
> wrote:
>
> > The following plpgsql function errors out with cvs head:
> >
> > CREATE function test_assign() returns void
> > AS
> > $$ declare x int;
>
Greg Stark wrote:
Eh? That's not what I meant at all. Actually it's kind of the exact
opposite of what I meant.
Sorry about that--I think we just hit one of those language usage drift
bits of confusion. "Sit in the corner" has a very negative tone to it
in US English and I interpreted you
I don't see this as ever having been applied. What should we do with
it?
---
Tom Lane wrote:
> Josh Berkus writes:
> > Tom,
> >> Is anyone interested enough to try it if I code it?
>
> > If you're patient for results, sure
Did this ever get applied/resolved?
---
Robert Haas wrote:
> I've been doing some benchmarking and profiling on the PostgreSQL
> query analyzer, and it seems that (at least for the sorts of queries
> that I typically run) th
Josh Berkus writes:
> There were a flurry of patches around this from Stark and Aster Data, so
> I'm checking if I should be testing on 9.0 or adding this to the TODO list.
> The problem I'm grappling with is that OUTER JOINS against the master in
> a partitioned table (joining to the append node
Greg Smith wrote:
> Heikki Linnakangas wrote:
> > One such landmine is that the keepalives need to flow from client to
> > server while the WAL records are flowing from server to client. We'll
> > have to crack that problem for synchronous replication too, but I think
> > that alone is a big enough
Pavan Deolasee wrote:
> On Fri, Feb 26, 2010 at 8:19 AM, Bruce Momjian wrote:
>
> >
> > Whatever happened to this? It was in the first 9.0 commitfest but was
> > returned with feedback but never updated:
> >
> >
> Though Alex did some useful tests and review, and in fact confirmed that the
> VAC
Heikki Linnakangas wrote:
One such landmine is that the keepalives need to flow from client to
server while the WAL records are flowing from server to client. We'll
have to crack that problem for synchronous replication too, but I think
that alone is a big enough problem to make this 9.1 material
On Feb 26, 2010, at 21:03 , Tom Lane wrote:
Michael Glaesemann writes:
On Feb 26, 2010, at 3:30 , Piyush Newe wrote:
SELECT (footable.*).foofunc FROM footable;
ERROR: column footable.foofunc does not exist
Is that calling syntax correct? I'd think it should be:
SELECT foofunc(footable.*
Bruce Momjian wrote:
Well, I think the choice is either you delay vacuum on the master for 8
hours or pile up 8 hours of WAL files on the slave, and delay
application, and make recovery much slower. It is not clear to me which
option a user would prefer because the bloat on the master might be
p
On Sat, Feb 27, 2010 at 1:53 AM, Greg Smith wrote:
> Greg Stark wrote:
>>
>> Well you can go sit in the same corner as Simon with your high
>> availability servers.
>>
>> I want my ability to run large batch queries without any performance
>> or reliability impact on the primary server.
>>
>
> Tha
Michael Glaesemann writes:
> On Feb 26, 2010, at 3:30 , Piyush Newe wrote:
>> SELECT (footable.*).foofunc FROM footable;
>> ERROR: column footable.foofunc does not exist
> Is that calling syntax correct? I'd think it should be:
> SELECT foofunc(footable.*, 10) FROM footable;
He's relying on th
Greg Stark wrote:
Well you can go sit in the same corner as Simon with your high
availability servers.
I want my ability to run large batch queries without any performance
or reliability impact on the primary server.
Thank you for combining a small personal attack with a selfish
commentary
Greg Smith wrote:
> You can think of the idea of passing an xmin back from the standby as
> being like an auto-tuning vacuum_defer_cleanup_age. It's 0 when no
> standby queries are running, but grows in size to match longer ones. And
> you don't have to have to know anything to set it correctly;
On 02/26/2010 07:03 PM, Tom Lane wrote:
Robert Haas writes:
Basically, what I really want here is some kind of keyword or other
syntax that I can stick into a PL/pgsql query that requests a replan
on every execution.
Wouldn't it be better if it just did the right thing automatically?
On Fri, Feb 26, 2010 at 11:56 PM, Greg Smith wrote:
> This is also the reason why the whole "pause recovery" idea is a fruitless
> path to wander down. The whole point of this feature is that people have a
> secondary server available for high-availability, *first and foremost*, but
> they'd like
Greg Smith wrote:
> Bruce Momjian wrote:
> > Doesn't the system already adjust the delay based on the length of slave
> > transactions, e.g. max_standby_delay. It seems there is no need for a
> > user switch --- just max_standby_delay really high.
> >
>
> The first issue is that you're basical
On Feb 26, 2010, at 3:30 , Piyush Newe wrote:
Hi,
Consider following testcase,
CREATE TABLE footable(id int4, name varchar2(10));
CREATE FUNCTION foofunc(a footable, b integer DEFAULT 10)
RETURNS integer AS $$ SELECT 123; $$ LANGUAGE SQL;
CREATE FUNCTION foofunc(a footable, b numeric DEFAU
On Feb 26, 2010, at 0:55 , Дмитрий Фефелов wrote:
http://developer.postgresql.org/pgdocs/postgres/release-9-0.html
Performance section:
Simplify the forms foo <> true and foo <> false to foo = false and
foo = true during query optimization.
Will it work correctly when foo is NULL?
It sh
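The NULL question above has a concrete answer: SQL comparisons use three-valued logic, and the rewrite is safe because `foo <> true` and `foo = false` agree for all three values of `foo`, NULL included. A small Python model of that logic (an editor's sketch, not code from the thread; `ne` and `eq` are hypothetical helpers standing in for SQL's `<>` and `=`) makes the check explicit:

```python
# Model SQL three-valued logic with Python's True/False/None (None = SQL NULL).

def ne(a, b):
    """SQL <>: NULL if either operand is NULL, else ordinary inequality."""
    if a is None or b is None:
        return None
    return a != b

def eq(a, b):
    """SQL =: NULL if either operand is NULL, else ordinary equality."""
    if a is None or b is None:
        return None
    return a == b

# The optimizer's rewrite preserves semantics for TRUE, FALSE, and NULL:
for foo in (True, False, None):
    assert ne(foo, True) == eq(foo, False)   # foo <> true  ==  foo = false
    assert ne(foo, False) == eq(foo, True)   # foo <> false ==  foo = true
```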
> That is exactly the core idea I was trying to suggest in my rambling
> message. Just that small additional bit of information transmitted and
> published to the master via that route, and it's possible to optimize
> this problem in a way not available now. And it's a way that I believe
> will
Bruce Momjian wrote:
5 Early cleanup of data still visible to the current query's
snapshot
#5 could be handled by using vacuum_defer_cleanup_age on the master.
Why is vacuum_defer_cleanup_age not listed in postgresql.conf?
I noticed that myself and fired off a corr
Robert Haas writes:
> Basically, what I really want here is some kind of keyword or other
> syntax that I can stick into a PL/pgsql query that requests a replan
> on every execution.
Wouldn't it be better if it just did the right thing automatically?
The sort of heuristic I'm envisioning would e
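One shape such a heuristic could take (an editor's sketch, not the design proposed in the thread): plan the first few executions with parameter-specific plans, then keep replanning only while the custom plans look meaningfully cheaper than the cached generic plan.

```python
# Toy plan-cache decision logic: trial custom plans, then compare their
# average estimated cost against the generic plan's cost.

class PlanCache:
    TRIAL_RUNS = 5          # custom-plan executions before deciding
    SLACK = 1.1             # generic plan acceptable if within 10% of custom

    def __init__(self, generic_cost):
        self.generic_cost = generic_cost
        self.custom_costs = []

    def choose_custom(self, custom_cost):
        """Return True to build a fresh parameter-specific plan."""
        if len(self.custom_costs) < self.TRIAL_RUNS:
            self.custom_costs.append(custom_cost)
            return True
        avg = sum(self.custom_costs) / len(self.custom_costs)
        return self.generic_cost > avg * self.SLACK

cache = PlanCache(generic_cost=100.0)
for _ in range(5):
    assert cache.choose_custom(50.0)   # trial period: always replan
assert cache.choose_custom(50.0)       # generic twice as costly: keep replanning
```

The constants here are arbitrary; the point is only that the decision can be made automatically from estimated costs, with no keyword in the PL/pgSQL source.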
Bruce Momjian wrote:
Doesn't the system already adjust the delay based on the length of slave
transactions, e.g. max_standby_delay. It seems there is no need for a
user switch --- just max_standby_delay really high.
The first issue is that you're basically saying "I don't care about high
availability"
> Wait a minute. Bingo! So for unique checks we are already going to
> index from Heap. So it is the same thing i am doing with Thick index. So if
> we can trust our current unique checks, then we should trust the Thick
> index.
>
> Thanks Tom!!! for having this good conversation
>
> I thi
Tom Lane wrote:
I don't see a "substantial additional burden" there. What I would
imagine is needed is that the slave transmits a single number back
--- its current oldest xmin --- and the walsender process publishes
that number as its transaction xmin in its PGPROC entry on the master.
Tha
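The single-number feedback idea above can be modelled in a few lines. This is an editor's toy sketch, not PostgreSQL code: the standby reports its oldest running xmin, and the master folds that value into the cleanup horizon that vacuum respects, exactly as if the walsender were one more local backend holding a snapshot.

```python
# Toy model of closed-loop xmin feedback between standby and master.

def cleanup_horizon(local_xmins, standby_xmin=None):
    """Oldest xid that must stay visible; vacuum may remove only older tuples.

    local_xmins: xmins of transactions running on the master.
    standby_xmin: the value the walsender publishes in its PGPROC entry,
    as reported back by the standby (None when no standby queries run).
    """
    xmins = list(local_xmins)
    if standby_xmin is not None:
        xmins.append(standby_xmin)
    return min(xmins) if xmins else None

# No standby queries: the horizon is driven by master backends alone.
assert cleanup_horizon([100, 120]) == 100
# A long-running standby query holds back cleanup on the master,
# auto-tuning what vacuum_defer_cleanup_age would otherwise guess at.
assert cleanup_horizon([100, 120], standby_xmin=80) == 80
```

The cost, as noted elsewhere in the thread, is that long-lived standby queries bloat the master; the benefit is that no query ever needs cancelling for early cleanup.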
There were a flurry of patches around this from Stark and Aster Data, so
I'm checking if I should be testing on 9.0 or adding this to the TODO list.
The problem I'm grappling with is that OUTER JOINS against the master in
a partitioned table (joining to the append node) gives a row estimate
which
On Fri, Feb 26, 2010 at 12:01 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Fri, Feb 26, 2010 at 11:27 AM, Tom Lane wrote:
>>> Also, I think there is a lot of confusion here over two different
>>> issues: generic plan versus parameter-specific plan, and bad planner
>>> estimates leading to a w
Dimitri Fontaine wrote:
> Bruce Momjian writes:
> > Doesn't the system already adjust the delay based on the length of slave
> > transactions, e.g. max_standby_delay. It seems there is no need for a
> > user switch --- just max_standby_delay really high.
>
> Well that GUC looks like it allows to
Bruce Momjian writes:
> Doesn't the system already adjust the delay based on the length of slave
> transactions, e.g. max_standby_delay. It seems there is no need for a
> user switch --- just max_standby_delay really high.
Well that GUC looks like it allows to set a compromise between HA and
rep
Dimitri Fontaine wrote:
> Tom Lane writes:
> > Well, as Heikki said, a stop-and-go WAL management approach could deal
> > with that use-case. What I'm concerned about here is the complexity,
> > reliability, maintainability of trying to interlock WAL application with
> > slave queries in any sort
> No, what generally happens is it fails to find a matching index entry at
> all, because the search algorithm concludes there can be no match based
> on the limited set of comparisons it's done. Transitivity failures lead
> to searching the wrong subset of the index.
>
Actually Tom, I am not abl
Greg Stark writes:
> On Fri, Feb 26, 2010 at 9:19 PM, Tom Lane wrote:
>> There's *definitely* not going to be enough information in the WAL
>> stream coming from a master that doesn't think it has HS slaves.
>> We can't afford to record all that extra stuff in installations for
>> which it's just useless overhead.
Tom Lane writes:
> Well, as Heikki said, a stop-and-go WAL management approach could deal
> with that use-case. What I'm concerned about here is the complexity,
> reliability, maintainability of trying to interlock WAL application with
> slave queries in any sort of fine-grained fashion.
Some ad
Gokulakannan Somasundaram writes:
>> It does. The point is that the system is set up to limit the bad
>> consequences. You might (will) get wrong query answers, but the
>> heap data won't get corrupted.
>>
> Again Tom, if there is an update based on index scan, then it takes the
> tupleid and u
On Fri, Feb 26, 2010 at 9:19 PM, Tom Lane wrote:
> There's *definitely* not going to be enough information in the WAL
> stream coming from a master that doesn't think it has HS slaves.
> We can't afford to record all that extra stuff in installations for
> which it's just useless overhead. BTW, h
bruce wrote:
> 4 The standby waiting longer than max_standby_delay to acquire a
...
> #4 can be controlled by max_standby_delay, where a large value only
> delays playback during crash recovery --- again, a rare occurance.
One interesting feature is that max_standby_delay will _only_ del
Greg Stark writes:
> Why shouldn't it have any queries at walreceiver startup? It has any
> xlog segments that were copied from the master and any it can find in
> the archive, it could easily reach a consistent point long before it
> needs to connect to the master. If you really want to protect y
On Fri, Feb 26, 2010 at 8:30 PM, Tom Lane wrote:
> How's it going to do that, when it has no queries at the instant
> of startup?
>
Why shouldn't it have any queries at walreceiver startup? It has any
xlog segments that were copied from the master and any it can find in
the archive, it could easi
> Well, as Heikki said, a stop-and-go WAL management approach could deal
> with that use-case. What I'm concerned about here is the complexity,
> reliability, maintainability of trying to interlock WAL application with
> slave queries in any sort of fine-grained fashion.
This sounds a bit brute-
> It does. The point is that the system is set up to limit the bad
>> consequences. You might (will) get wrong query answers, but the
>> heap data won't get corrupted.
>>
>>
> Tom,
if this is our goal, "can return wrong query answers, but
should not corrupt the heap data", and if we
Mark Mielke writes:
> Here are parts that can be done "fixed":
> 1) Statement parsing and error checking.
> 2) Identification of tables and columns involved in the query.
The above two are done in the parser, not the planner.
> 3) Query the column statistics for involved columns, to be used in
Mark Mielke wrote:
On 02/26/2010 03:11 PM, Yeb Havinga wrote:
Or instead of letting users give the distribution, gather it
automatically in some plan statistics catalog? I suspect in most
applications queries stay the same for months and maybe years, so
after some number of iterations it is po
Greg Stark writes:
> On Fri, Feb 26, 2010 at 7:16 PM, Tom Lane wrote:
>> I don't see a "substantial additional burden" there. What I would
>> imagine is needed is that the slave transmits a single number back
>> --- its current oldest xmin --- and the walsender process publishes
>> that number a
On 02/26/2010 03:11 PM, Yeb Havinga wrote:
Tom Lane wrote:
Right, but if the parameter is unknown then its distribution is also
unknown. In any case that's just nitpicking, because the solution is
to create a custom plan for the specific value supplied. Or are you
suggesting that we should cre
Hello list,
I'm wondering if there would be community support for using the
execute message with a rownum > 0 in the C libpq client library, as it
is used by the JDBC driver with setFetchSize.
kind regards,
Yeb Havinga
On 02/26/2010 02:57 PM, Tom Lane wrote:
Mark Mielke writes:
There must be some way to lift the cost of planning out of the plan
enumeration and selection phase, such that only plan enumeration and
selection is run at execute time. In most cases, plan enumeration and
selection, provided that
* Greg Stark [100226 15:10]:
> On Fri, Feb 26, 2010 at 7:16 PM, Tom Lane wrote:
> > I don't see a "substantial additional burden" there. What I would
> > imagine is needed is that the slave transmits a single number back
> > --- its current oldest xmin --- and the walsender process publishes
> >
On Fri, 2010-02-26 at 12:02 -0800, Josh Berkus wrote:
> > I don't see a "substantial additional burden" there. What I would
> > imagine is needed is that the slave transmits a single number back
> > --- its current oldest xmin --- and the walsender process publishes
> > that number as its transact
Heikki Linnakangas writes:
> I don't actually understand how tight synchronization on its own would
> solve the problem. What if the connection to the master is lost? Do you
> kill all queries in the standby before reconnecting?
Sure. So what? They'd have been killed if they individually lost
c
Oleg, Teodor, can you look at this? I tried to fix it in wparser_def.c,
but couldn't figure out how. Thanks.
select distinct token as email
from ts_parse('default', ' first_l...@yahoo.com ' )
where tokid = 4
Patch in attachment, it allows underscore in the middle of local part of email
in
Tom Lane wrote:
Right, but if the parameter is unknown then its distribution is also
unknown. In any case that's just nitpicking, because the solution is
to create a custom plan for the specific value supplied. Or are you
suggesting that we should create a way for users to say "here is the
expe
On Fri, Feb 26, 2010 at 7:16 PM, Tom Lane wrote:
> I don't see a "substantial additional burden" there. What I would
> imagine is needed is that the slave transmits a single number back
> --- its current oldest xmin --- and the walsender process publishes
> that number as its transaction xmin in
Tom Lane wrote:
> Josh Berkus writes:
>> On 2/26/10 10:53 AM, Tom Lane wrote:
>>> I think that what we are going to have to do before we can ship 9.0
>>> is rip all of that stuff out and replace it with the sort of closed-loop
>>> synchronization Greg Smith is pushing. It will probably be several
On Fri, Feb 26, 2010 at 09:50, Robert Haas wrote:
> On Fri, Feb 26, 2010 at 1:29 AM, Alex Hunsaker wrote:
>> Prepared plans + exec plan (new guc/ protocol thing):
>> Use: not quite sure
>> Problems: slow because it would replan every time
>> Solutions: use a prepared plan with the appropriate
> I don't see a "substantial additional burden" there. What I would
> imagine is needed is that the slave transmits a single number back
> --- its current oldest xmin --- and the walsender process publishes
> that number as its transaction xmin in its PGPROC entry on the master.
If the main purp
On Fri, Feb 26, 2010 at 6:30 PM, Gokulakannan Somasundaram
wrote:
> http://archives.postgresql.org/pgsql-hackers/2008-03/msg00682.php
> I think, the buy-in became difficult because of the code quality.
>
Er, yeah. That's something we need to work on a bit. You should
probably expect your first fe
Mark Mielke writes:
> There must be some way to lift the cost of planning out of the plan
> enumeration and selection phase, such that only plan enumeration and
> selection is run at execute time. In most cases, plan enumeration and
> selection, provided that all data required to make these dec
> It does. The point is that the system is set up to limit the bad
> consequences. You might (will) get wrong query answers, but the
> heap data won't get corrupted.
>
>
Again Tom, if there is an update based on index scan, then it takes the
tupleid and updates the wrong heap data right?
The only
On 02/26/2010 01:59 PM, Tom Lane wrote:
Mark Mielke writes:
Just to point out that I agree, and as per my original post, I think the
only time prepared statements should be re-planned for the statistics
case, is after 'analyze' has run. That sounds like a quicker solution,
and a much smalle
On 02/26/2010 01:59 PM, Tom Lane wrote:
... It's walking around the problem
that the idea of a generic plan is just wrong. The only time a generic
plan is right, is when the specific plan would result in the same.
I think that's a significant overstatement. There are a large number
of cas
Josh Berkus writes:
> On 2/26/10 10:53 AM, Tom Lane wrote:
>> I think that what we are going to have to do before we can ship 9.0
>> is rip all of that stuff out and replace it with the sort of closed-loop
>> synchronization Greg Smith is pushing. It will probably be several
>> months before ever
Erik Rijkers wrote:
> 9.0devel (cvs yesterday) primary+server, with this patch:
> extend_format_of_recovery_info_funcs_v2.patch
> ( http://archives.postgresql.org/pgsql-hackers/2010-02/msg02116.php )
>
> A large (500 GB) restore left to run overnight, gave the below crash. The
> standby was r
On 2/26/10 10:53 AM, Tom Lane wrote:
> I think that what we are going to have to do before we can ship 9.0
> is rip all of that stuff out and replace it with the sort of closed-loop
> synchronization Greg Smith is pushing. It will probably be several
> months before everyone is forced to accept th
Gokulakannan Somasundaram writes:
> But Tom, can you please explain me why that broken ordering example doesn't
> affect the current index scans.
It does. The point is that the system is set up to limit the bad
consequences. You might (will) get wrong query answers, but the
heap data won't get
On 2/26/10 6:57 AM, Richard Huxton wrote:
>
> Can we not wait to cancel the transaction until *any* new lock is
> attempted though? That should protect all the single-statement
> long-running transactions that are already underway. Aggregates etc.
I like this approach. Is it fragile in some non-
Tom Lane wrote:
> I'm going to make an unvarnished assertion here. I believe that the
> notion of synchronizing the WAL stream against slave queries is
> fundamentally wrong and we will never be able to make it work.
> The information needed isn't available in the log stream and can't be
> made av
Mark Mielke writes:
> Just to point out that I agree, and as per my original post, I think the
> only time prepared statements should be re-planned for the statistics
> case, is after 'analyze' has run. That sounds like a quicker solution,
> and a much smaller gain. After 'analyze' of an object
> IIRC, what was being talked about was shoehorning some hint bits into
> the line pointers by assuming that size and offset are multiples of 4.
> I'm not thrilled with having mutable status bits there for reliability
> reasons, but it could be done without breaking a lot of existing code.
> What I
On 02/26/2010 11:27 AM, Tom Lane wrote:
Also, I think there is a lot of confusion here over two different
issues: generic plan versus parameter-specific plan, and bad planner
estimates leading to a wrong plan choice. While the latter is certainly
an issue sometimes, there is no reason to believe
Greg Stark writes:
> In the model you describe any long-lived queries on the slave cause
> tables in the master to bloat with dead records.
Yup, same as they would do on the master.
> I think this model is on the roadmap but it's not appropriate for
> everyone and I think one of the benefits of
Heikki Linnakangas wrote:
> > How to handle situations where the standby goes away for a while,
> > such as a network outage, so that it doesn't block the master from ever
> > cleaning up dead tuples is a concern.
>
> Yeah, that's another issue that needs to be dealt with. You'd probably
> need so
Missed the group..
On Sat, Feb 27, 2010 at 12:00 AM, Gokulakannan Somasundaram <
gokul...@gmail.com> wrote:
>
> I definitely think thick indexes were too ambitious of a target for a
>> first time patch. Sequential index scans is very ambitious itself
>> despite being significantly simpler (if you
Markus Wanner writes:
> do I understand correctly that a BackendId is just an index into the
> ProcSignalSlots array and not (necessarily) the same as the index into
> ProcArrayStruct's procs?
> If yes, could these be synchronized? Why is ProcSignalSlot not part of
> PGPROC at all? Both are sh
Robert Haas writes:
> On Fri, Feb 26, 2010 at 10:07 AM, Tom Lane wrote:
>> I think this is basically a planner problem and should be fixed in the
>> planner, not by expecting users to make significant changes in
>> application logic in order to create an indirect effect.
> I would agree if I tho
Andrew Dunstan wrote:
>
>
> Bruce Momjian wrote:
>
>
> >>> Don't look further, interfaces/ecpg/include/sqlda.h has changed
> >>> by the pgindent run.
> >>>
> >> Yea, it is that, and sqltypes.h and one other file I am trying to find
> >> now.
> >>
> >
> > I have r
On Fri, Feb 26, 2010 at 4:43 PM, Richard Huxton wrote:
> Let's see if I've got the concepts clear here, and hopefully my thinking it
> through will help others reading the archives.
>
> There are two queues:
I don't see two queues. I only see the one queue of operations which
have been executed o
On Fri, Feb 26, 2010 at 10:21 AM, Heikki Linnakangas
wrote:
> Richard Huxton wrote:
>> Can we not wait to cancel the transaction until *any* new lock is
>> attempted though? That should protect all the single-statement
>> long-running transactions that are already underway. Aggregates etc.
>
> Hmm