Robert Haas wrote:
Greg, have you run into any other evidence suggesting a problem with 2.6.32?
I haven't actually checked myself yet. Right now the only distribution
shipping 2.6.32 usefully is Ubuntu 10.04, which I can't recommend anyone
use on a server because their release schedules a
2010/10/7 Robert Haas :
> On Mon, Oct 4, 2010 at 2:52 AM, Pavel Stehule wrote:
>> I think you can remove "scrollable cursor support" from the
>> plpgsql ToDo. Scrollable cursors are supported, and the supported
>> syntax is the same as in the core SQL language.
>
> I agree, removed. I also removed WI
On 07.10.2010 06:39, Robert Haas wrote:
On Tue, Oct 5, 2010 at 3:42 PM, Tom Lane wrote:
Right, *column* filtering seems easy and entirely secure. The angst
here is about row filtering. Can we have a view in which users can see
the values of a column for some rows, with perfect security that t
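For context, the column-filtering case is already expressible today, either as a view that omits the sensitive column or via column-level privileges (available since 8.4). A minimal sketch, with hypothetical table and role names:

CREATE TABLE emp (id int, name text, salary numeric);

-- Option 1: a view that simply leaves the sensitive column out
CREATE VIEW emp_public AS SELECT id, name FROM emp;
GRANT SELECT ON emp_public TO app_user;

-- Option 2: column-level privileges on the table itself
GRANT SELECT (id, name) ON emp TO app_user;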
(2010/10/06 10:21), KaiGai Kohei wrote:
> I'll check the patch for more details, please wait for a few days.
I found a few issues in this patch, independent of the
discussion about localhost. (About pg_hba.conf.sample, I'm sorry for
the wrong suggestion. Please fix it up according to Tom's co
On Tue, Oct 5, 2010 at 3:42 PM, Tom Lane wrote:
> Right, *column* filtering seems easy and entirely secure. The angst
> here is about row filtering. Can we have a view in which users can see
> the values of a column for some rows, with perfect security that they
> can't identify values for the h
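The row-filtering hazard being discussed comes from functions with side effects getting pushed down below the view's filter. A minimal sketch of the leak, with hypothetical names:

CREATE TABLE t (id int, secret text);
CREATE VIEW v AS SELECT * FROM t WHERE id < 10;  -- hide rows with id >= 10
GRANT SELECT ON v TO app_user;

-- A very cheap function the planner is happy to evaluate early:
CREATE FUNCTION leak(text) RETURNS boolean AS $$
BEGIN
  RAISE NOTICE 'saw: %', $1;  -- side channel
  RETURN true;
END $$ LANGUAGE plpgsql COST 0.0000001;

-- If leak() runs before the view's WHERE clause, the NOTICE output can
-- reveal values from rows the view was meant to hide:
SELECT * FROM v WHERE leak(secret);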
On Mon, 4 Oct 2010, Marko Tiikkaja wrote:
the patch does, modules start behaving weirdly. So what I'm suggesting is:
- Deprecate pg_parse_and_rewrite(). I have no idea how the project
has done this in the past, but grepping the source code for
"deprecated" suggests that we just remove
On Mon, Oct 4, 2010 at 2:52 AM, Pavel Stehule wrote:
> I think you can remove "scrollable cursor support" from the
> plpgsql ToDo. Scrollable cursors are supported, and the supported
> syntax is the same as in the core SQL language.
I agree, removed. I also removed WITH HOLD cursors, which we seem t
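For reference, a minimal plpgsql sketch of the scrollable-cursor syntax in question (the function and the query are made up for illustration):

CREATE FUNCTION scroll_demo() RETURNS void AS $$
DECLARE
  c SCROLL CURSOR FOR SELECT i FROM generate_series(1, 5) AS s(i);
  v int;
BEGIN
  OPEN c;
  FETCH LAST FROM c INTO v;   -- v is 5
  FETCH PRIOR FROM c INTO v;  -- v is 4
  CLOSE c;
END $$ LANGUAGE plpgsql;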
(2010/10/07 2:05), Alvaro Herrera wrote:
>>> Another thing that could raise eyebrows is that I chose to remove the
>>> "missing_ok" argument from get_role_oid_or_public, so it's not a perfect
>>> mirror of it. None of the current callers need it, but perhaps people
>>> would like these functions t
(2010/10/07 6:21), Alvaro Herrera wrote:
Excerpts from Robert Haas's message of mié oct 06 17:02:22 -0400 2010:
2010/10/5 KaiGai Kohei:
However, we also have a few headache cases.
DefineType() creates a new type object and its array type, but it does not
call CommandCounterIncrement() by the
(2010/10/07 6:02), Robert Haas wrote:
> 2010/10/5 KaiGai Kohei:
>> I began to revise the security hooks, but I found a few cases that do
>> not work with the approach of post-creation security hooks.
>> If we try to fix up the core PG routine to become suitable for the
>> post-creation
>> s
Robert Haas writes:
> Nice. What was the overall effect on memory consumption?
Before:
cspell: 31352608 total in 3814 blocks; 37432 free (6 chunks); 31315176 used
After:
cspell: 16214816 total in 1951 blocks; 13744 free (12 chunks); 16201072 used
This is on a 32-bit machine that uses MAXALIGN
On Wed, Oct 6, 2010 at 7:36 PM, Tom Lane wrote:
> Teodor Sigaev writes:
>>> on 32bit from 27MB (3399 blocks) to 13MB (1564 blocks)
>>> on 64bit from 55MB to circa 27MB.
>
>> Good results. But, I think, there are more places in ispell to use
>> hold_memory():
>> - affixes and affix tree
>> - regis
Teodor Sigaev writes:
>> on 32bit from 27MB (3399 blocks) to 13MB (1564 blocks)
>> on 64bit from 55MB to circa 27MB.
> Good results. But, I think, there are more places in ispell to use
> hold_memory():
> - affixes and affix tree
> - regis (REGex for ISpell, regis.c)
I fixed the affix stuff as mu
Pavel Stehule writes:
> this simple patch reduces the persistently allocated memory for tsearch
> ispell dictionaries:
> on 32bit from 27MB (3399 blocks) to 13MB (1564 blocks)
> on 64bit from 55MB to circa 27MB.
Applied with revisions --- I got rid of the risky static state as per
discussion, and exten
Excerpts from Robert Haas's message of mié oct 06 17:02:22 -0400 2010:
> 2010/10/5 KaiGai Kohei :
> > However, we also have a few headache cases.
> > DefineType() creates a new type object and its array type, but it does not
> > call CommandCounterIncrement() by the end of this function, so the ne
2010/10/5 KaiGai Kohei :
> I began to revise the security hooks, but I found a few cases that do
> not work with the approach of post-creation security hooks.
> If we try to fix up the core PG routine to become suitable for the
> post-creation
> security hooks, it shall need to put more Com
On Wed, Oct 6, 2010 at 6:21 AM, Magnus Hagander wrote:
> It's not common, but i've certainly come across a number of virtual
> machines where localhost resolves (through /etc/hosts) to the machine's
> "real" IP rather than 127.0.0.1, because 127.0.0.1 simply doesn't
> exist.
It's perfectly fine for
On Wed, 2010-10-06 at 18:04 +0300, Heikki Linnakangas wrote:
> The key is whether you are guaranteed to have zero data loss or not.
We agree that is an important question.
You seem willing to trade anything for that guarantee. I seek a more
pragmatic approach that balances availability and risk.
> Seems reasonable, but what is a CAP database?
A database based around the CAP theorem [1]: Cassandra, Dynamo,
Hypertable, etc.
For us, the equation is CAD, as in Consistency, Availability,
Durability. Pick any two, at best. But it's a very similar bag of
issues to the ones CAP addresses.
[1]
On 10/06/2010 09:04 PM, Dimitri Fontaine wrote:
> Ok so I think we're agreeing here: what I said amounts to proposing that
> the code work this way when the quorum is set up that way, and/or is
> able to reject any non-read-only transaction (those that need a real
> XID) until your standby is fully
On 06.10.2010 20:57, Josh Berkus wrote:
While it's nice to dismiss case (1) as an edge-case, consider the
likelyhood of someone running PostgreSQL with fsync=off on cloud
hosting. In that case, having k = N = 5 does not seem like an
unreasonable arrangement if you want to ensure durability via
r
Marios Vodas writes:
> I would expect it to start from 0, since C arrays are 0 based.
> So my question is why does this happen?
Well, I don't have any good answer other than "it's the API".
Time to have a look at some contrib code and some others, maybe, like
ip4r or prefix (the former is fixed si
Markus Wanner writes:
> There's no point in time at which I care whether a standby is a
> "candidate" or not. Either I want to synchronously replicate to X
> standbys, or not.
Ok so I think we're agreeing here: what I said amounts to proposing that
the code work this way when the quorum is set up that way,
Hello Dimitri,
On 10/06/2010 05:41 PM, Dimitri Fontaine wrote:
> - when do you start considering the standby as a candidate to your sync
>rep requirements?
That question doesn't make much sense to me. There's no point in time
at which I care whether a standby is a "candidate" or not. Either I want to
Robert Haas writes:
> On Wed, Oct 6, 2010 at 1:40 PM, Tom Lane wrote:
>> Robert Haas writes:
>>> ...but I don't really see why that has to be done as part of this patch.
>>
>> Because patches that reduce maintainability seldom get cleaned up after.
> I don't think you've made a convincing argu
All,
Let me clarify and consolidate this discussion. Again, it's my goal
that this thread specifically identify only the problems and desired
behaviors for synch rep with more than one sync standby. There are
several issues with even one sync standby which still remain unresolved,
but I believe
On Wed, Oct 6, 2010 at 1:40 PM, Tom Lane wrote:
> Robert Haas writes:
>> I think it would be cleaner to get rid of checkTmpCtx() and instead
>> have dispell_init() set up and tear down the temporary context,
>
> What I was thinking of doing was getting rid of the static variable
> altogether: we
Robert Haas writes:
> I think it would be cleaner to get rid of checkTmpCtx() and instead
> have dispell_init() set up and tear down the temporary context,
What I was thinking of doing was getting rid of the static variable
altogether: we should do what you say above, but in the form of a
state s
Excerpts from KaiGai Kohei's message of mar oct 05 00:06:05 -0400 2010:
> (2010/09/07 6:16), Alvaro Herrera wrote:
> > Excerpts from Jim Nasby's message of jue jun 10 17:54:43 -0400 2010:
> >> test...@workbook=# select has_table_privilege( 'public', 'test', 'SELECT'
> >> );
> >> ERROR: role "publ
On Wed, Oct 6, 2010 at 12:26 PM, Greg Smith wrote:
> Now, the more relevant question: what I actually need in order for a Sync
> Rep feature in 9.1 to be useful to the people I talk to who want it most.
> That would be a simple-to-configure setup where I list a subset of
> "important" nodes, and
On Mon, Oct 4, 2010 at 2:05 AM, Pavel Stehule wrote:
> 2010/10/4 Robert Haas :
>> On Oct 3, 2010, at 7:02 PM, Tom Lane wrote:
>>> It's not at all apparent that the code is even
>>> safe as-is, because it's depending on the unstated assumption that that
>>> static variable will get reset once per
Josh Berkus wrote:
However, I think we're getting way the heck away from how far we
really want to go for 9.1. Can I point out to people that synch rep
is going to involve a fair bit of testing and debugging, and that
maybe we don't want to try to implement The World's Most Configurable
Stand
Heikki Linnakangas writes:
> I'm sorry, but I still don't understand the use case you're envisioning. How
> many standbys are there? What are you trying to achieve with synchronous
> replication over what asynchronous offers?
Sorry if I've been unclear; I read loads of messages and then tried to pick
On 10/06/2010 04:20 PM, Simon Riggs wrote:
> Ending the wait state does not cause data loss. It puts you at *risk* of
> data loss, which is a different thing entirely.
This kind of risk scenario is what sync replication is all about: a
minimum guarantee that doesn't hold in the face of the first few
On 06.10.2010 18:02, Dimitri Fontaine wrote:
Heikki Linnakangas writes:
1. base-backup — self-explanatory
2. catch-up — getting the WAL to catch up after base backup
3. wanna-sync — don't yet have all the WAL to get in sync
4. do-sync — all WALs are there, coming soon
On 06.10.2010 17:20, Simon Riggs wrote:
On Wed, 2010-10-06 at 15:26 +0300, Heikki Linnakangas wrote:
You're not going to get zero data loss that way.
Ending the wait state does not cause data loss. It puts you at *risk* of
data loss, which is a different thing entirely.
Looking at it that w
Heikki Linnakangas writes:
>> 1. base-backup — self-explanatory
>> 2. catch-up — getting the WAL to catch up after base backup
>> 3. wanna-sync — don't yet have all the WAL to get in sync
>> 4. do-sync — all WALs are there, coming soon
>> 5. ok (async | recv | fsync | reply —
On Wed, 2010-10-06 at 15:26 +0300, Heikki Linnakangas wrote:
> You're not going to get zero data loss that way.
Ending the wait state does not cause data loss. It puts you at *risk* of
data loss, which is a different thing entirely.
If you want to avoid data loss you use N+k redundancy and get o
On 10/06/2010 09:49 AM, Stephen Frost wrote:
* Tom Lane (t...@sss.pgh.pa.us) wrote:
That appears to me to be a broken (non RFC compliant) VM setup.
However, maybe what this is telling us is we need to expose the setting?
Or perhaps better, try 127.0.0.1, ::1, localhost, in that order.
Yeah, I
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> That appears to me to be a broken (non RFC compliant) VM setup.
> However, maybe what this is telling us is we need to expose the setting?
> Or perhaps better, try 127.0.0.1, ::1, localhost, in that order.
Yeah, I'd be happier if we exposed it, to be honest
On Wed, Oct 6, 2010 at 15:34, Tom Lane wrote:
> Magnus Hagander writes:
>> On Wed, Oct 6, 2010 at 15:16, Tom Lane wrote:
>>> However, the usage in pgstat.c is hard-wired, meaning that if you
>>> have a configuration where "localhost" doesn't resolve correctly
>>> for whatever reason, there's no
Magnus Hagander writes:
> On Wed, Oct 6, 2010 at 15:16, Tom Lane wrote:
>> However, the usage in pgstat.c is hard-wired, meaning that if you
>> have a configuration where "localhost" doesn't resolve correctly
>> for whatever reason, there's no simple recourse to get the stats
>> collector working
On Wed, Oct 6, 2010 at 15:16, Tom Lane wrote:
> Andrew Dunstan writes:
>> On 10/06/2010 04:05 AM, Peter Eisentraut wrote:
>>> On tis, 2010-10-05 at 22:17 -0400, Tom Lane wrote:
>>>> So far as I can find, there is *no* standard
>>>> mandating that localhost means the loopback address.
>
>>> Should
Andrew Dunstan writes:
> On 10/06/2010 04:05 AM, Peter Eisentraut wrote:
>> On tis, 2010-10-05 at 22:17 -0400, Tom Lane wrote:
>>> So far as I can find, there is *no* standard
>>> mandating that localhost means the loopback address.
>> Should we then change pgstat.c to use IP addresses instead of
Peter Eisentraut writes:
> On tis, 2010-10-05 at 22:17 -0400, Tom Lane wrote:
>> So far as I can find, there is *no* standard
>> mandating that localhost means the loopback address.
> Should we then change pgstat.c to use IP addresses instead of hardcoding
> "localhost"?
Hm, perhaps so.
On 06.10.2010 15:22, Dimitri Fontaine wrote:
What is necessary here is a clear view of the possible states that a
standby can be in at any time; we must stop trying to apply the
behavior we want for an already-in-sync standby to a standby that is
not yet ready. From my experience operating londiste, thos
Markus Wanner writes:
> On 10/06/2010 04:31 AM, Simon Riggs wrote:
>> That situation would require two things
>> * First, you have set up async replication and you're not monitoring it
>> properly. Shame on you.
>
> The way I read it, Jeff is complaining about the timeout you propose
> that effect
On 10/06/2010 04:05 AM, Peter Eisentraut wrote:
On tis, 2010-10-05 at 22:17 -0400, Tom Lane wrote:
So far as I can find, there is *no* standard
mandating that localhost means the loopback address.
Should we then change pgstat.c to use IP addresses instead of hardcoding
"localhost"?
I under
On 06.10.2010 13:41, Magnus Hagander wrote:
That's only for a narrow definition of availability. For a lot of
people, having access to your data isn't considered availability
unless you can trust the data...
Ok, fair enough. For that, synchronous replication in the "wait forever"
mode is the o
On Wed, Oct 6, 2010 at 10:17, Heikki Linnakangas
wrote:
> On 06.10.2010 11:09, Fujii Masao wrote:
>>
>> On Wed, Oct 6, 2010 at 3:31 PM, Heikki Linnakangas
>> wrote:
>>>
>>> No. Synchronous replication does not help with availability. It allows
>>> you
>>> to achieve zero data loss, ie. if the ma
On Wed, Sep 8, 2010 at 1:02 AM, Teodor Sigaev wrote:
> Fixed, and slightly reworked to be more clear.
> Attached patch is based on your patch.
The patch will improve the accuracy of plans that use GIN indexes.
It only adds block-level statistics information into the meta
pages of GIN indexes. Data-level
Tom Lane writes:
> I think the point here is that it's possible to have sync-rep
> configurations in which it's impossible to take a base backup.
Sorry to be slow. I still don't understand that problem.
I can understand why people want "wait forever", but I can't understand
when the following st
On Tue, 2010-10-05 at 11:30 -0400, Steve Singer wrote:
> Also on the topic of failover how do we want to deal with the master
> failing over. Say M->{S1,S2} and M fails and we promote S1 to M1. Can
> M1->S2? What if S2 was further along in processing than S1 when M
> failed? I don't thi
On 10/06/2010 10:53 AM, Heikki Linnakangas wrote:
> Wow, that is really short. Are you sure? I have no first hand experience
> with DRBD,
Neither do I.
> and reading that man page, I get the impression that the
> timeout is just for deciding that the TCP connection is dead. There is
> also the ko
On 06.10.2010 11:49, Fujii Masao wrote:
On Wed, Oct 6, 2010 at 5:17 PM, Heikki Linnakangas
wrote:
Sure, but it's not the synchronous aspect that increases availability. It's
the replication aspect, and we already have that. Making the replication
synchronous allows zero data loss in case the m
On 06.10.2010 11:39, Markus Wanner wrote:
On 10/06/2010 10:17 AM, Heikki Linnakangas wrote:
On 06.10.2010 11:09, Fujii Masao wrote:
Hmm.. but we can increase availability without any data loss by using
synchronous
replication. Many people have already been using synchronous
replication software
On Wed, Oct 6, 2010 at 5:17 PM, Heikki Linnakangas
wrote:
> On 06.10.2010 11:09, Fujii Masao wrote:
>>
>> On Wed, Oct 6, 2010 at 3:31 PM, Heikki Linnakangas
>> wrote:
>>>
>>> No. Synchronous replication does not help with availability. It allows
>>> you
>>> to achieve zero data loss, ie. if the
On 10/06/2010 10:17 AM, Heikki Linnakangas wrote:
> On 06.10.2010 11:09, Fujii Masao wrote:
>> Hmm.. but we can increase availability without any data loss by using
>> synchronous
>> replication. Many people have already been using synchronous
>> replication software
>> such as DRBD for that purpo
On 5 October 2010 21:17, Bernd Helmle wrote:
> Basic summary of this patch:
>
Thanks for the review.
> * The patch includes a fairly complete discussion about INSTEAD OF triggers
> and their usage on views. There are also additional enhancements to the RULE
> documentation, which seems, given th
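For reference, a minimal sketch of what the feature under review enables (table, view, and function names are hypothetical):

CREATE TABLE base (id int, val text);
CREATE VIEW base_view AS SELECT id, val FROM base;

CREATE FUNCTION base_view_ins() RETURNS trigger AS $$
BEGIN
  INSERT INTO base VALUES (NEW.id, NEW.val);
  RETURN NEW;
END $$ LANGUAGE plpgsql;

-- The trigger fires instead of the otherwise-disallowed INSERT on the view:
CREATE TRIGGER base_view_instead INSTEAD OF INSERT ON base_view
  FOR EACH ROW EXECUTE PROCEDURE base_view_ins();

INSERT INTO base_view VALUES (1, 'hello');  -- now routed into base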
On 06.10.2010 11:09, Fujii Masao wrote:
On Wed, Oct 6, 2010 at 3:31 PM, Heikki Linnakangas
wrote:
No. Synchronous replication does not help with availability. It allows you
to achieve zero data loss, ie. if the master dies, you are guaranteed that
any transaction that was acknowledged as commi
On Wed, Oct 6, 2010 at 3:31 PM, Heikki Linnakangas
wrote:
> No. Synchronous replication does not help with availability. It allows you
> to achieve zero data loss, ie. if the master dies, you are guaranteed that
> any transaction that was acknowledged as committed, is still committed.
Hmm.. but w
On 10/06/2010 08:31 AM, Heikki Linnakangas wrote:
> On 06.10.2010 01:14, Josh Berkus wrote:
>> Last I checked, our goal with synch standby was to increase availability,
>> not decrease it.
>
> No. Synchronous replication does not help with availability. It allows
> you to achieve zero data loss, ie
On tis, 2010-10-05 at 22:17 -0400, Tom Lane wrote:
> So far as I can find, there is *no* standard
> mandating that localhost means the loopback address.
Should we then change pgstat.c to use IP addresses instead of hardcoding
"localhost"?
On Wed, Oct 6, 2010 at 10:52 AM, Jeff Davis wrote:
> I'm not sure I entirely understand. I was concerned about the case of a
> standby server being allowed to lag behind the rest by a large number of
> WAL records. That can't happen in the "wait for all servers to apply"
> case, because the system
On 10/06/2010 04:31 AM, Simon Riggs wrote:
> That situation would require two things
> * First, you have set up async replication and you're not monitoring it
> properly. Shame on you.
The way I read it, Jeff is complaining about the timeout you propose
that effectively turns sync into async repli
On 06.10.2010 01:14, Josh Berkus wrote:
You start a new one from the latest base backup and let it catch up?
Possibly modifying the config file in the master to let it know about
the new standby, if we go down that path. This part doesn't seem
particularly hard to me.
Agreed, not sure of the is