On Fri, Oct 8, 2010 at 10:38 AM, Robert Haas wrote:
> Yes, let's please just implement something simple and get it
> committed. k = 1. Two GUCs (synchronous_standbys = name, name, name
> and synchronous_waitfor = none|recv|fsync|apply)
For my cases, I'm OK with this as the first commit, for now
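For reference, a minimal sketch of what that master-side configuration could look like, assuming the GUC names from the proposal above (these were proposed names at the time, not committed syntax):

```ini
# postgresql.conf on the master (hypothetical, per the k = 1 proposal)
synchronous_standbys = 'boston1, oxford1'  # named standbys eligible to be synchronous
synchronous_waitfor  = 'fsync'             # one of: none | recv | fsync | apply
```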
2010/10/7 Simon Riggs :
> On Thu, 2010-10-07 at 14:10 +0200, Vincenzo Romano wrote:
>
>> Making these things sub-linear (whether O(log n) or even O(1)),
>> provided that there's a way to, would make this RDBMS more appealing
>> to enterprises.
>> I mean also partial indexes (as an alternative t
On Fri, Oct 8, 2010 at 03:52, Andrew Dunstan wrote:
>
>
> On 10/07/2010 03:37 PM, Magnus Hagander wrote:
>>
>> On Thu, Oct 7, 2010 at 21:31, Andrew Dunstan wrote:
>>>
>>> On 10/07/2010 10:11 AM, Magnus Hagander wrote:
> OTOH, this patch seems pretty small and simple to maintain.
>
Hello,
the current plpgsql syntax doesn't offer functionality to declare a
variable whose type is the element type of some other array variable,
or the reverse. The primary goal of this proposal is enhancing plpgsql
to work better with polymorphic types.
I propose the following syntax:
-- variable
On Fri, Oct 8, 2010 at 8:44 AM, Greg Smith wrote:
> Additional code? Yes. Foot-gun? Yes. Timeout should be disabled by
> default so that you get wait forever unless you ask for something different?
> Probably. Unneeded? This is where we don't agree anymore. The example
> that Josh Berkus j
On 07.10.2010 21:33, Josh Berkus wrote:
1) This version of Standby Registration seems to add One More Damn Place
You Need To Configure Standby (OMDPYNTCS) without adding any
functionality you couldn't get *without* having a list on the master.
Can someone explain to me what functionality is added
On Fri, Oct 8, 2010 at 10:49 AM, Tom Lane wrote:
> Itagaki Takahiro writes:
>> I wrote a patch to improve CLUSTER VERBOSE (and VACUUM FULL VERBOSE).
>> The patch should be applied after sorted_cluster-20100721.patch .
>
> Applied with minor fixes; in particular, I think you got the effects of
> r
On 07.10.2010 23:56, Greg Stark wrote:
On Thu, Oct 7, 2010 at 10:27 AM, Heikki Linnakangas
wrote:
The standby name is a GUC in the standby's configuration file:
standby_name='bostonserver'
Fwiw I was hoping it would be possible to set every machine up with an
identical postgresql.conf file
On Thu, 2010-10-07 at 19:44 -0400, Greg Smith wrote:
> I don't see this as needing any implementation any more complicated than
> the usual way such timeouts are handled. Note how long you've been
> trying to reach the standby. Default to -1 for forever. And if you hit
> the timeout, mark th
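The timeout behaviour described here (default -1 meaning wait forever, otherwise give up and let the caller mark the standby) can be sketched as a toy Python loop; the helper name and return values are invented for illustration, not PostgreSQL code:

```python
import time

def wait_for_standby_ack(ack_received, timeout_s):
    """Toy version of the behaviour described above: timeout_s == -1 means
    wait forever; otherwise give up after timeout_s seconds so the caller
    can mark the standby as failed. Hypothetical helper, not PostgreSQL code."""
    start = time.monotonic()
    while not ack_received():
        if timeout_s != -1 and time.monotonic() - start >= timeout_s:
            return "timed_out"   # caller would mark the standby dead and degrade
        time.sleep(0.001)        # poll; the real thing would block on a latch
    return "acked"
```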
2010/10/8 Tom Lane :
> Robert Haas writes:
>> On Thu, Oct 7, 2010 at 2:53 PM, Pavel Stehule
>> wrote:
>>> b) SRF functions must not be finished by RETURN statement - I know, so
>>> there is outer default block, but it looks like inconsistency for SRF
>>> functions, because you can use a RETURN N
Hello
2010/10/8 Tom Lane :
> Pavel Stehule writes:
>> a) parser allows labels in invalid positions with a strange runtime bug:
>
>> postgres=# CREATE OR REPLACE FUNCTION foo()
>> RETURNS void AS $$
>> BEGIN
>> FOR i IN 1..2
>> <<>
>> LOOP
>> RAISE NOTICE '%',i;
>> END LOOP;
>> END;
>>
On Thu, Oct 7, 2010 at 3:01 AM, Markus Wanner wrote:
> Of course, it doesn't make sense to wait-forever on *every* standby that
> ever gets added. Quorum commit is required, yes (and that's what this
> thread is about, IIRC). But with quorum commit, adding a standby only
> improves availability, b
On Thu, Oct 7, 2010 at 5:01 AM, Simon Riggs wrote:
> You seem willing to trade anything for that guarantee. I seek a more
> pragmatic approach that balances availability and risk.
>
> Those views are different, but not inconsistent. Oracle manages to offer
> multiple options and so can we.
+1
Re
On 10/07/2010 09:52 PM, Andrew Dunstan wrote:
On 10/07/2010 03:37 PM, Magnus Hagander wrote:
On Thu, Oct 7, 2010 at 21:31, Andrew Dunstan
wrote:
On 10/07/2010 10:11 AM, Magnus Hagander wrote:
OTOH, this patch seems pretty small and simple to maintain.
True, it is rather small.
Does any
On Wed, Oct 6, 2010 at 9:22 PM, Dimitri Fontaine wrote:
> From my experience operating londiste, those states would be:
>
> 1. base-backup — self explaining
> 2. catch-up — getting the WAL to catch up after base backup
> 3. wanna-sync — don't yet have all the WAL to get in sync
> 4. do-
Josh Berkus writes:
> I thought we fixed this in 8.4.4, but apparently not. In the event that
> you have a GIN index containing a WHERE clause which is sufficiently
> restrictive, PostgreSQL will attempt to use the index even though it
> can't.
We could probably kluge the planner to avoid that c
Pavel Stehule writes:
> a) parser allows labels in invalid positions with a strange runtime bug:
> postgres=# CREATE OR REPLACE FUNCTION foo()
> RETURNS void AS $$
> BEGIN
> FOR i IN 1..2
> <<>
> LOOP
> RAISE NOTICE '%',i;
> END LOOP;
> END;
> $$ LANGUAGE plpgsql;
> CREATE FUNCTION
>
On Thu, Oct 7, 2010 at 10:24 PM, Fujii Masao wrote:
> On Wed, Oct 6, 2010 at 6:00 PM, Heikki Linnakangas
> wrote:
>> In general, salvaging the WAL that was not sent to the standby yet is
>> outright impossible. You can't achieve zero data loss with asynchronous
>> replication at all.
>
> No. That
On Wed, Oct 6, 2010 at 6:00 PM, Heikki Linnakangas
wrote:
> In general, salvaging the WAL that was not sent to the standby yet is
> outright impossible. You can't achieve zero data loss with asynchronous
> replication at all.
No. That depends on the type of failure. Unless the disk in the master
Robert Haas writes:
> On Thu, Oct 7, 2010 at 2:53 PM, Pavel Stehule wrote:
>> b) SRF functions must not be finished by RETURN statement - I know, so
>> there is outer default block, but it looks like inconsistency for SRF
>> functions, because you can use a RETURN NEXT without RETURN. It maybe
>>
On Wed, Oct 6, 2010 at 6:11 PM, Markus Wanner wrote:
> Yeah, sounds more likely. Then I'm surprised that I didn't find any
> warning that the Protocol C definitely reduces availability (with the
> ko-count=0 default, that is).
Really? I don't think that ko-count=0 means "wait-forever". IIRC,
when
On 10/07/2010 03:37 PM, Magnus Hagander wrote:
On Thu, Oct 7, 2010 at 21:31, Andrew Dunstan wrote:
On 10/07/2010 10:11 AM, Magnus Hagander wrote:
OTOH, this patch seems pretty small and simple to maintain.
True, it is rather small.
Does anybody know if there's an automated way to maintain
2010/10/4 Alexander Korotkov :
> I've reworked patch with your suggestion. In this version I found a little
> slowdown in comparison with previous version:
> SELECT * FROM words WHERE levenshtein_less_equal(a, 'extensize', 2) <= 2;
> 48,069 ms => 57,875 ms
> SELECT * FROM words2 WHERE levenshtein_l
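The speedup being measured comes from bounding the distance computation so it can stop early. A toy Python sketch of that early-termination idea follows; the actual patch is C inside the fuzzystrmatch module, and this sketch omits its multibyte-character handling:

```python
def levenshtein_less_equal(a, b, max_d):
    """Levenshtein distance, but give up once the result must exceed max_d.

    Returns the exact distance if it is <= max_d, otherwise max_d + 1.
    Illustrative sketch of the early-exit idea only.
    """
    if abs(len(a) - len(b)) > max_d:   # length gap alone already exceeds the bound
        return max_d + 1
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        if min(cur) > max_d:           # every cell in the row exceeds the bound
            return max_d + 1
        prev = cur
    return prev[-1]
```

Because a whole row of the dynamic-programming matrix is abandoned as soon as its minimum exceeds the bound, very dissimilar strings cost far less than a full distance computation.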
Itagaki Takahiro writes:
> I wrote a patch to improve CLUSTER VERBOSE (and VACUUM FULL VERBOSE).
> The patch should be applied after sorted_cluster-20100721.patch .
Applied with minor fixes; in particular, I think you got the effects of
rewrite_heap_dead_tuple backwards. When a tuple is removed
On Thu, Oct 7, 2010 at 7:15 PM, Greg Smith wrote:
> Josh Berkus wrote:
>>
>> This version of Standby Registration seems to add One More Damn Place
>> You Need To Configure Standby (OMDPYNTCS) without adding any
>> functionality you couldn't get *without* having a list on the master.
>> Can someone
>> b) SRF functions must not be finished by RETURN statement - I know, so
>> there is outer default block, but it looks like inconsistency for SRF
>> functions, because you can use a RETURN NEXT without RETURN. It maybe
>> isn't a bug - but I am filing it as an inconsistency.
Hmmm. Is there any like
On Thu, Oct 7, 2010 at 2:53 PM, Pavel Stehule wrote:
> Hello,
>
> today I found a few bugs:
>
> a) parser allows labels in invalid positions with a strange runtime bug:
>
> postgres=# CREATE OR REPLACE FUNCTION foo()
> RETURNS void AS $$
> BEGIN
> FOR i IN 1..2
> <<>
> LOOP
> RAISE NOTICE '%'
All,
I thought we fixed this in 8.4.4, but apparently not. In the event that
you have a GIN index containing a WHERE clause which is sufficiently
restrictive, PostgreSQL will attempt to use the index even though it
can't. Since this is completely out of the control of the user, it
effectively pr
(2010/10/08 0:21), Robert Haas wrote:
On Wed, Oct 6, 2010 at 5:21 PM, Alvaro Herrera
wrote:
Excerpts from Robert Haas's message of mié oct 06 17:02:22 -0400 2010:
2010/10/5 KaiGai Kohei:
However, we also have a few headache cases.
DefineType() creates a new type object and its array type,
Takahiro Itagaki writes:
> BTW, we could have LogicalTapeReadExact() as an alias of
> LogicalTapeRead() and checking the result because we have
> many duplicated codes for "unexpected end of data" errors.
Good idea, done.
regards, tom lane
--
Sent via pgsql-hackers mail
Itagaki Takahiro writes:
> I re-ordered some description in the doc. Does it look better?
> Comments and suggestions welcome.
Applied with some significant editorialization. The biggest problem I
found was that the code for expression indexes didn't really work, and
would leak memory like there'
Markus Wanner wrote:
So far I've been under the impression that Simon already has the code
for quorum_commit k = 1.
What I'm opposed to is the timeout "feature", which I consider to be
additional code, unneeded complexity and foot-gun.
Additional code? Yes. Foot-gun? Yes. Timeout shoul
A.M. wrote:
Perhaps a simpler tool could run a basic fsyncs-per-second test and prompt the
DBA to check that the numbers are within the realm of possibility.
This is what the test_fsync utility that already ships with the database
should be useful for. The way Bruce changed it to report n
Josh Kupershmidt writes:
> So I think there are definitely cases where this patch helps, but it
> looks like a seq. scan is being chosen in some cases where it doesn't
> help.
I've been poking through this patch, and have found two different ways
in which it underestimates the cost of the seqscan
Josh Berkus wrote:
This version of Standby Registration seems to add One More Damn Place
You Need To Configure Standby (OMDPYNTCS) without adding any
functionality you couldn't get *without* having a list on the master.
Can someone explain to me what functionality is added by this approach
vs. no
All,
> Establishing an affinity between a session and one of the database
> servers will only help if the traffic is strictly read-only.
I think this thread has drifted very far away from anything we're going
to do for 9.1. And seems to have little to do with synchronous replication.
Synch rep
On Thu, 2010-10-07 at 19:50 +0200, Markus Wanner wrote:
> So far I've been under the impression that Simon already has the code
> for quorum_commit k = 1.
I do, but it's not a parameter. The k = 1 behaviour is hardcoded and
considerably simplifies the design. Moving to k > 1 is additional work,
sl
On Thu, 2010-10-07 at 13:44 -0400, Aidan Van Dyk wrote:
> To get "non-stale" responses, you can only query those k=3 servers.
> But you've shot yourself in the foot because you don't know which
> 3/10 those will be. The other 7 *are* stale (by definition). They
> talk about picking the "caught
On Thu, 2010-10-07 at 14:10 +0200, Vincenzo Romano wrote:
> Making these things sub-linear (whether O(log n) or even O(1)),
> provided that there's a way to, would make this RDBMS more appealing
> to enterprises.
> I mean also partial indexes (as an alternative to table partitioning).
> Being
On Thu, Oct 7, 2010 at 10:27 AM, Heikki Linnakangas
wrote:
> The standby name is a GUC in the standby's configuration file:
>
> standby_name='bostonserver'
>
Fwiw I was hoping it would be possible to set every machine up with an
identical postgresql.conf file. That doesn't preclude this idea sinc
All,
In my effort to make the discussion around the design decisions of synch
rep less opaque, I'm starting a separate thread about what has developed
to be one of the more contentious issues.
I'm going to champion timeouts because I plan to use them. In fact, I
plan to deploy synch rep with a t
On Thu, Oct 7, 2010 at 21:31, Andrew Dunstan wrote:
>
>
> On 10/07/2010 10:11 AM, Magnus Hagander wrote:
>>
>>> OTOH, this patch seems pretty small and simple to maintain.
>>
>> True, it is rather small.
>>
>> Does anybody know if there's an automated way to maintain that on
>> freebsd ports, and
Firstly I want to say I think this discussion is overlooking some
benefits of the current system in other use cases. I don't think we
should get rid of the current system even once we have "proper"
partitioning. It solves use cases such as data warehouse queries that
need to do a full table scan o
On 10/07/2010 10:11 AM, Magnus Hagander wrote:
OTOH, this patch seems pretty small and simple to maintain.
True, it is rather small.
Does anybody know if there's an automated way to maintain that on
freebsd ports, and if so, how that works? I want to be *sure* we can't
accidentally upgrade
Robert Haas wrote:
> Establishing an affinity between a session and one of the database
> servers will only help if the traffic is strictly read-only.
Thanks; I now see your point.
In our environment, that's pretty common. Our most heavily used web
app (the one for which we have, at times,
Markus Wanner writes:
> I don't buy that. The risk calculation gets a lot simpler and obvious
> with strict guarantees.
Ok, I'm lost in the use cases and analysis.
I still don't understand why you want to consider the system already
synchronous when it's not, whatever is the guarantee you're as
On Thu, Oct 7, 2010 at 2:31 PM, Kevin Grittner
wrote:
> Robert Haas wrote:
>> Kevin Grittner wrote:
>
>>> With web applications, at least, you often don't care that the
>>> data read is absolutely up-to-date, as long as the point in time
>>> doesn't jump around from one request to the next. Whe
On Thu, Oct 7, 2010 at 2:33 PM, Josh Berkus wrote:
>> I think they work together fine. Greg's idea is that you list the
>> important standbys, and a synchronization guarantee that you'd like to
>> have for at least one of them. Simon's idea - at least at 10,000 feet
>> - is that you can take a p
Hello,
today I found a few bugs:
a) parser allows labels in invalid positions with a strange runtime bug:
postgres=# CREATE OR REPLACE FUNCTION foo()
RETURNS void AS $$
BEGIN
FOR i IN 1..2
<<>
LOOP
RAISE NOTICE '%',i;
END LOOP;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION
ERROR: column
On 10/07/2010 07:44 PM, Aidan Van Dyk wrote:
> The only case I see a "race to quorum" type of k < N being useful is
> if you're just trying to duplicate data everywhere, but not actually
> querying any of the replicas. I can see that "all queries go to the
> master, but the chances are pretty high
On 10/07/2010 03:19 PM, Dimitri Fontaine wrote:
> I think you're all into durability, and that's good. The extra cost is
> service downtime
It's just *reduced* availability. That doesn't necessarily mean
downtime, if you combine cleverly with async replication.
> if that's not what you're after:
> I think they work together fine. Greg's idea is that you list the
> important standbys, and a synchronization guarantee that you'd like to
> have for at least one of them. Simon's idea - at least at 10,000 feet
> - is that you can take a pass on that guarantee for transactions that
> don't nee
Robert Haas wrote:
> Kevin Grittner wrote:
>> With web applications, at least, you often don't care that the
>> data read is absolutely up-to-date, as long as the point in time
>> doesn't jump around from one request to the next. When we have
>> used load balancing between multiple database se
On Thu, Oct 7, 2010 at 2:10 PM, Kevin Grittner
wrote:
> Aidan Van Dyk wrote:
>
>> To get "non-stale" responses, you can only query those k=3
>> servers. But you've shot yourself in the foot because you don't
>> know which 3/10 those will be. The other 7 *are* stale (by
>> definition). They tal
Aidan Van Dyk wrote:
> To get "non-stale" responses, you can only query those k=3
> servers. But you've shot yourself in the foot because you don't
> know which 3/10 those will be. The other 7 *are* stale (by
> definition). They talk about picking the "caught up" slave when
> the master fails
On Thu, Oct 7, 2010 at 1:27 PM, Heikki Linnakangas
wrote:
> Let me check that I got this right, and add some details to make it more
> concrete: Each standby is given a name. It can be something like "boston1"
> or "testserver". It does *not* have to be unique across all standby servers.
> In the
Simon, Fujii,
What follows are what I see as the major issues with making two-server
synch replication work well. I would like to have you each answer them,
explaining how your patch and your design addresses each issue. I
believe this will go a long way towards helping the majority of the
commu
> But as a practical matter, I'm afraid the true cost of the better
> guarantee you're suggesting here is additional code complexity that will
> likely cause this feature to miss 9.1 altogether. As far as I'm
> concerned, this whole diversion into the topic of quorum commit is only
> consuming re
On Thu, Oct 7, 2010 at 1:45 PM, Josh Berkus wrote:
> On 10/7/10 10:27 AM, Heikki Linnakangas wrote:
>> The standby name is a GUC in the standby's configuration file:
>>
>> standby_name='bostonserver'
>>
>> The list of important nodes is also a GUC, in the master's configuration
>> file:
>>
>> sync
On Thu, Oct 7, 2010 at 1:39 PM, Dave Page wrote:
> On 10/7/10, Heikki Linnakangas wrote:
>> On 06.10.2010 19:26, Greg Smith wrote:
>>> Now, the more relevant question, what I actually need in order for a
>>> Sync Rep feature in 9.1 to be useful to the people who want it most I
>>> talk to. That w
On 10/07/2010 06:41 PM, Greg Smith wrote:
> The cost of hardware capable of running a database server is a large
> multiple of what you can build an alerting machine for.
You realize you don't need lots of disks nor RAM for a box that only
ACKs? A box with two SAS disks and a BBU isn't that expens
> If you want "synchronous replication" because you want "query
> availability" while making sure you're not getting "stale" queries from
> all your slaves, then using your k < N (k = 3 and N = 10) situation is
> screwing yourself.
Correct. If that is your reason for synch standby, then you shoul
On 10/7/10 10:27 AM, Heikki Linnakangas wrote:
> The standby name is a GUC in the standby's configuration file:
>
> standby_name='bostonserver'
>
> The list of important nodes is also a GUC, in the master's configuration
> file:
>
> synchronous_standbys='bostonserver, oxfordserver'
This seems t
On Thu, Oct 7, 2010 at 1:22 PM, Josh Berkus wrote:
> So if you have k = 3 and N = 10, then you can have 10 standbys and only
> 3 of them need to ack any specific commit for the master to proceed. As
> long as (a) you retain at least one of the 3 which ack'd, and (b) you
> have some way of determi
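The k-of-N behaviour being debated (the master proceeds once any 3 of 10 standbys have acked) can be modelled with a small Python sketch; the class and method names are invented for illustration, not PostgreSQL internals:

```python
import threading

class QuorumCommit:
    """Toy model of k-of-N quorum commit: a committing backend blocks
    until at least k distinct standbys have acknowledged the commit.
    PostgreSQL's actual sync-rep machinery is C code (walsender plus
    a wait queue in the backend); this only illustrates the idea."""

    def __init__(self, k):
        self.k = k
        self.acks = set()
        self.cond = threading.Condition()

    def standby_ack(self, standby_name):
        # Called when a standby reports it has flushed the commit's WAL.
        with self.cond:
            self.acks.add(standby_name)
            if len(self.acks) >= self.k:
                self.cond.notify_all()

    def wait_for_quorum(self, timeout=None):
        # The committing backend blocks here; True on quorum, False on timeout.
        with self.cond:
            return self.cond.wait_for(lambda: len(self.acks) >= self.k, timeout)
```

With k = 3 and N = 10, any three acks release the waiter; the other seven standbys remain effectively asynchronous, which is exactly the staleness trade-off discussed in this thread.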
On 10/7/10, Heikki Linnakangas wrote:
> On 06.10.2010 19:26, Greg Smith wrote:
>> Now, the more relevant question, what I actually need in order for a
>> Sync Rep feature in 9.1 to be useful to the people who want it most I
>> talk to. That would be a simple to configure setup where I list a subse
On 06.10.2010 19:26, Greg Smith wrote:
Now, the more relevant question, what I actually need in order for a
Sync Rep feature in 9.1 to be useful to the people who want it most I
talk to. That would be a simple to configure setup where I list a subset
of "important" nodes, and the appropriate ackn
On 10/7/10 6:41 AM, Aidan Van Dyk wrote:
> I'm really confused by all these k < N scenarios I see bandied
> about, because all it really amounts to is "I only want *one*
> synchronous replication, and a bunch of asynchronous replications".
> And a bit of chance thrown in the mix to hope the "sync
On Oct 7, 2010, at 12:26 PM, Robert Haas wrote:
> On Thu, Oct 7, 2010 at 11:45 AM, Greg Smith wrote:
>> Robert Haas wrote:
>>> Proposed doc patch attached.
>>
>> Looks accurate to me. I like the additional linking to the Reliability page
>> you put in there too. Heavily referencing that impor
2010/10/7 Stephen Frost :
> * Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
>> So, what'd be the right approach in your vision?
>
> Have you read http://wiki.postgresql.org/wiki/Table_partitioning and the
> various places it links to..?
>
>> I mean, if you think about partitioning a-la Oracl
Markus Wanner wrote:
I think that's a pretty special case, because the "good alerting system"
is at least as expensive as another server that just persistently stores
and ACKs incoming WAL.
The cost of hardware capable of running a database server is a large
multiple of what you can build a
On Thu, Oct 7, 2010 at 11:45 AM, Greg Smith wrote:
> Robert Haas wrote:
>> Proposed doc patch attached.
>
> Looks accurate to me. I like the additional linking to the Reliability page
> you put in there too. Heavily referencing that important page from related
> areas is a good thing, particular
On Thu, Oct 7, 2010 at 11:52 AM, Tom Lane wrote:
> Robert Haas writes:
>> Proposed doc patch attached.
>
> "discusesed"? Otherwise +1
Woops, thanks. Committed with that change. I back-patched it back to
8.3, which is as far as it applied with only minor conflicts.
--
Robert Haas
EnterpriseD
Robert Haas writes:
> Proposed doc patch attached.
"discusesed"? Otherwise +1
regards, tom lane
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Robert Haas wrote:
Proposed doc patch attached.
Looks accurate to me. I like the additional linking to the Reliability
page you put in there too. Heavily referencing that important page from
related areas is a good thing, particularly now that it's got a lot more
details than it used to
Aidan Van Dyk writes:
> *shrug* The joining standby is still asynchronous at this point.
> It's not synchronous replication. It's just another ^k of the N
> slaves serving stale data ;-)
Agreed *here*, but if you read the threads again, you'll see that's not
at all what's been talked about befo
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
> So, what'd be the right approach in your vision?
Have you read http://wiki.postgresql.org/wiki/Table_partitioning and the
various places it links to..?
> I mean, if you think about partitioning a-la Oracle, then you'll have to
> parse those
Vincenzo Romano wrote:
> 2010/10/7 Stephen Frost :
>> Yes, that would be the problem. Proving something based on
>> expressions is alot more time consuming and complicated than
>> being explicitly told what goes where.
>
> Consuming computing resources at DDL-time should be OK if that
> will l
2010/10/7 Stephen Frost :
> * Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
>> I would expect a parser to ... ehm ... parse the CHECK constraint
>> expression at "CREATE TABLE " time and
>> extract all the needed "high quality metadata", like the list of
>> columns involved and the type of
>
2010/10/7 Stephen Frost :
> * Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
>> 2010/10/7 Stephen Frost :
>> > * Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
>> > The problem is that CHECK conditions can contain just about anything,
>> > hence the planner needs to deal with that poss
On Wed, Oct 6, 2010 at 5:21 PM, Alvaro Herrera
wrote:
> Excerpts from Robert Haas's message of mié oct 06 17:02:22 -0400 2010:
>> 2010/10/5 KaiGai Kohei :
>
>> > However, we also have a few headache cases.
>> > DefineType() creates a new type object and its array type, but it does not
>> > call Co
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
> 2010/10/7 Stephen Frost :
> > * Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
> > The problem is that CHECK conditions can contain just about anything,
> > hence the planner needs to deal with that possibility.
>
> Not really. For par
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
> I would expect a parser to ... ehm ... parse the CHECK constraint
> expression at "CREATE TABLE " time and
> extract all the needed "high quality metadata", like the list of
> columns involved and the type of
> checks (range, value list, etc.
On Tue, Oct 5, 2010 at 8:11 AM, Peter Eisentraut wrote:
> On mån, 2010-10-04 at 23:41 -0400, Robert Haas wrote:
>> > Well, it's not really useful, but that's how it works "everywhere". On
>> > Linux, fsync carries the stuff from the kernel's RAM to the disk
>> > controller's RAM, and then it depe
2010/10/7 Stephen Frost :
> * Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
>> Which kind of information are you thinking about?
>> I think that the stuff you put into the CHECK condition for the table
>> will say it all.
>
> The problem is that CHECK conditions can contain just about anythi
2010/10/7 Alvaro Herrera :
> Excerpts from Vincenzo Romano's message of jue oct 07 10:44:34 -0400 2010:
>
>> Do you mean the check constraint is used as plain text to be (somehow)
>> executed?
>> If this is the case, then you (all) are perfectly and obviously right
>> and I'm just fishing
>> for b
Excerpts from Vincenzo Romano's message of jue oct 07 10:44:34 -0400 2010:
> Do you mean the check constraint is used as plain text to be (somehow)
> executed?
> If this is the case, then you (all) are perfectly and obviously right
> and I'm just fishing
> for bicycles in the sea.
Yeah, hence th
2010/10/7 Greg Smith :
> Vincenzo Romano wrote:
>>
>> I see the main problem in the way the planner "understands" which
>> partition
>> is useful and which one is not.
>> Having the DDL supporting the feature could just be syntactic sugar
>> if the underlying mechanism is inadequate.
>>
>
> You hav
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
> Which kind of information are you thinking about?
> I think that the stuff you put into the CHECK condition for the table
> will say it all.
The problem is that CHECK conditions can contain just about anything,
hence the planner needs to dea
2010/10/7 Stephen Frost :
> * Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
>> I see the main problem in the way the planner "understands" which partition
>> is useful and which one is not.
>> Having the DDL supporting the feature could just be syntactic sugar
>> if the underlying mechanism
Vincenzo Romano wrote:
I see the main problem in the way the planner "understands" which partition
is useful and which one is not.
Having the DDL supporting the feature could just be syntactic sugar
if the underlying mechanism is inadequate.
You have the order of this backwards. In order to
On Thu, Oct 7, 2010 at 10:08 AM, Dimitri Fontaine
wrote:
> Aidan Van Dyk writes:
>> Sure, but that lagged standby is already asynchronous, not
>> synchronous. If it was synchronous, it would have slowed the master
>> down enough it would not be lagged.
>
> Agreed, except in the case of a joinin
* Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
> I see the main problem in the way the planner "understands" which partition
> is useful and which one is not.
> Having the DDL supporting the feature could just be syntactic sugar
> if the underlying mechanism is inadequate.
I'm pretty sure
2010/10/7 Tom Lane :
> Heikki Linnakangas writes:
>> On 07.10.2010 10:41, Simon Riggs wrote:
>>> Constraint exclusion is linear with respect to number of partitions.
>>> Why do you say exponential?
>
>> For some reason I thought the planner needs to check the constraints of
>> the partitions again
On Thu, Oct 7, 2010 at 16:07, Andrew Dunstan wrote:
>
> On 10/07/2010 09:44 AM, Magnus Hagander wrote:
>>
>> On Thu, Oct 7, 2010 at 15:16, Andrew Dunstan wrote:
>>>
>>> On 09/23/2010 01:18 PM, Aidan Van Dyk wrote:
On Thu, Sep 23, 2010 at 11:49 AM, Tom Lane wrote:
>
> Magnus H
Aidan Van Dyk writes:
> Sure, but that lagged standby is already asynchronous, not
> synchronous. If it was synchronous, it would have slowed the master
> down enough it would not be lagged.
Agreed, except in the case of a joining standby. But you're saying it
better than I do:
> Yes, I believ
On 10/07/2010 09:44 AM, Magnus Hagander wrote:
On Thu, Oct 7, 2010 at 15:16, Andrew Dunstan wrote:
On 09/23/2010 01:18 PM, Aidan Van Dyk wrote:
On Thu, Sep 23, 2010 at 11:49 AM, Tom Lane wrote:
Magnus Hagander writes:
On Thu, Sep 23, 2010 at 17:32, Andrew Dunstan
wrote:
Are we su
2010/10/7 Robert Haas :
> Well, you can't just arbitrarily turn an O(n) algorithm into an O(lg n)
That's trivially true. I was not asking for the recipe to do it.
> algorithm. I think the most promising approach to scaling to large
> numbers of partitions is the patch that Itagaki Takahiro was wo
Heikki Linnakangas writes:
> On 07.10.2010 10:41, Simon Riggs wrote:
>> Constraint exclusion is linear with respect to number of partitions.
>> Why do you say exponential?
> For some reason I thought the planner needs to check the constraints of
> the partitions against each other, but you're ri
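The linear cost Simon and Heikki agree on comes from the planner attempting to refute each partition's CHECK constraint against the query predicate exactly once. A toy Python sketch of why that is O(n) in the partition count (here each "constraint" is simplified to a (lo, hi) range, where the real planner proves exclusion from arbitrary CHECK expressions):

```python
def plan_with_constraint_exclusion(partitions, query_value):
    """Toy model of constraint exclusion: one proof attempt per partition,
    so planning work grows linearly with the number of partitions.
    Returns (partitions left to scan, number of constraint checks made)."""
    checks = 0
    survivors = []
    for name, (lo, hi) in partitions.items():
        checks += 1                     # one refutation attempt per partition
        if lo <= query_value < hi:      # constraint not refuted: must scan it
            survivors.append(name)
    return survivors, checks

# Ten range partitions t_0..t_9 covering [0, 1000) in slices of 100
parts = {f"t_{i}": (i * 100, (i + 1) * 100) for i in range(10)}
scanned, n_checks = plan_with_constraint_exclusion(parts, 250)
```

Even though only one partition survives, all ten constraints were examined; a lookup keyed on an explicit partition specification is what could bring this toward O(log n) or O(1).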
On Thu, Oct 7, 2010 at 15:16, Andrew Dunstan wrote:
>
>
> On 09/23/2010 01:18 PM, Aidan Van Dyk wrote:
>>
>> On Thu, Sep 23, 2010 at 11:49 AM, Tom Lane wrote:
>>>
>>> Magnus Hagander writes:
On Thu, Sep 23, 2010 at 17:32, Andrew Dunstan
wrote:
>
> Are we sure that's going
On Thu, Oct 7, 2010 at 6:32 AM, Dimitri Fontaine wrote:
> Or if the standby is lagging and the master wal_keep_segments is not
> sized big enough. Is that a catastrophic loss of the standby too?
Sure, but that lagged standby is already asynchronous, not
synchronous. If it was synchronous, it w