count of 5. So I'm reluctant to rely on
> that for much of anything.
>
> --
> Robert Haas
olutely deprecate it first in that place. Preferably
> visibly (e.g. with a log message when people use it). That could at least
> get those people who use it to let us know they do, so we can figure out
> whether they do - and de-deprecate it if needed.
>
> Or if someone wants to fix it prope
of JSON <-> Row/setof in core, I
could see this being a very nice "RPC" mechanism for PostgreSQL.
Plain HTTP still gives you the session/transaction control problem of
stateless clients, but maybe coupled with PgPool you could cobbl
mistake.
If all you want is to avoid the write storms when fsyncs start happening on
slow storage, can you not just adjust the kernel vm.dirty* tunables to
start making the kernel write out dirty buffers much sooner instead of
letting them accumulate until fsyncs force them out all at once?
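For illustration, here is the kind of sysctl tuning being described; the
numbers are only assumptions to show the shape of it, and the right values
depend entirely on RAM size and storage speed:

    # /etc/sysctl.d/90-dirty.conf -- example values only
    # start background writeback after ~64MB of dirty data, instead of a % of RAM
    vm.dirty_background_bytes = 67108864
    # block writers once ~512MB is dirty, so an fsync never finds gigabytes queued
    vm.dirty_bytes = 536870912

Apply with "sysctl -p /etc/sysctl.d/90-dirty.conf"; no reboot needed.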
>so wise. They would only be "bug
fixes" if I did something wrong in my stuff. Anything not compatible
would bump the first number.
If it's a "prefix" type match, then the PG versioning would work too,
for instance:
upgrade-9.0.=...
would mat
convention on me to maintain/install an upgrade script for
every single version is way more than asking me to just specify an
upgrade script for versions.
Again, I'd love for the "version" to support some sort of prefix or
wildcard matching, so I could do:
upgrade-1.* = $
to have that error if I give a bad set of version matches.
If I only have those 2 lines to manage, it's a lot more likely I won't
mess them up than if I have to manage 30 almost identical lines and
not miss/duplicate a version.
;-)
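Purely as a sketch of what those 2 lines might look like - this is
hypothetical syntax that doesn't exist anywhere, and the script names are
made up:

    upgrade-1.* = upgrade_from_1x.sql
    upgrade-2.* = upgrade_from_2x.sql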
--
Aidan Van Dyk
etwork, and the
admins have set it to be very network tolerant.
The ACK might report that the slave is hopelessly behind on
fsyncing/applying its WAL, but that's good too. At least then the
ACK comes back, and the master knows the slave is still churning away
on th
d buffers.
Being able to arbitrarily (i.e. at any point in time) prove that the
shared buffers contents are exactly what they should be may be a
worthy goal, but that's many orders of magnitude more difficult than
verifying that the bytes we read from disk are the ones we wrote to
don't deny the rest of us airbags while you keep working on
teleportation ;-)
a.
--
Aidan Van Dyk
"search the bugtracker" is no less rude than "search the archives"...
And most of the bugtrackers I've had to search have way *less*
ease-of-use for searching than a good mailing list archive (I tend to
keep going back to gmane's search)
a.
--
Aidan Van Dyk
ive of
anything else in the database. And path has to be encoding aware.
And you want names that glob well, so for instance, you could exclude
*.data (or a schema) from the diff.
a.
--
Aidan Van Dyk
On Wed, Dec 29, 2010 at 2:27 AM, Joel Jacobson wrote:
So, how different (or not) is this from the "directory" format that was
coming out of the desire of a parallel pg_dump?
a.
--
Aidan Van Dyk
On Wed, Dec 29, 2010 at 9:11 AM, Gurjeet Singh wrote:
> On Wed, Dec 29, 2010 at 8:31 AM, Joel Jacobson wrote:
>> 2010/12/29 Aidan Van Dyk
>>> On Wed, Dec 29, 2010 at 2:27 AM, Joel Jacobson
>>> wrote:
e segment.
This gets you an archive synced as it's made (as long as streamrecv
is running), and my "verify" archive command would make sure that if
for some reason, the backup archive went "down", the WAL segments
would be blocked on the master until it's up again.
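As a rough sketch of what that kind of archive_command can look like (the
script name, paths and host are invented; the key property is that a
non-zero exit makes PostgreSQL keep the segment and retry it later instead
of recycling it):

    # postgresql.conf:
    #   archive_command = '/usr/local/bin/archive_wal.sh %p %f'

    #!/bin/sh
    # /usr/local/bin/archive_wal.sh  (sketch)
    # rsync exits non-zero if the archive host is unreachable; PostgreSQL then
    # keeps the WAL segment on the master and retries this command later.
    exec rsync -a "$1" backup-host:/srv/wal-archive/"$2"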
e commit packet stuffed out the
network, you're in the same boat. The data might be committed, even
though you didn't get the commit packet, and when your DB recovers,
it's got the committed data that you never "knew" was committed.
a.
we have the problem even on a
single pg cluster on a single machine. But the point is that if
you've committed, any new transactions see *at least* that data or
newer. But no chance of older.
But personally, I'm not interested in that ;-)
--
Aidan Van Dyk
there is a chance I (my database
system) confirmed a transaction that I can't recover.
So sync rep with first-past-the-post already makes my job easier. I'll take
it over nothing ;-)
a.
--
Aidan Van Dyk
> backups less dependent on CPU, among them:
>
> - Making the on-disk representation smaller
> - Making COPY more efficient
>
> As far as I know, none of this work is public yet.
pg_dump is another story. But it's not related to base backups for
PIT Recovery/Replication.
a.
--
Aidan Van Dyk
" is closed.
A FIFO would allow the postmaster to not need as many file handles, and
clients reading the fifo would notice when the writer (postmaster)
closes it.
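A quick shell illustration of that close-notification property (paths are
made up); the reader sees EOF the moment the last writer closes the FIFO:

    mkfifo /tmp/pm_notify          # hypothetical notification fifo
    cat /tmp/pm_notify &           # a "client" blocks, reading it
    exec 3>/tmp/pm_notify          # the "postmaster" opens it for writing
    echo event >&3                 # the client sees data as it's written
    exec 3>&-                      # writer closes -> reader gets EOF, cat exits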
a.
--
Aidan Van Dyk
gs being backwards
compatible as long as there's a very easy way to know what I might be
looking at, so conversion is easy...
But then again, I don't have multiple gigabytes of logs to process either.
a.
--
Aidan Van Dyk
'd like it if PG couldn't do anything to generate
any user-initiated WAL unless there is a sync slave connected. Yes, I
understand that leads to hard-fail, and yes, I understand I'm in the
minority, maybe almost singular in that desire.
a.
--
Aidan Van Dyk
On Fri, Jan 21, 2011 at 1:03 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Fri, Jan 21, 2011 at 12:23 PM, Aidan Van Dyk wrote:
>>> When no sync slave is connected, yes, I want to stop things hard.
>
>> What you're proposing is to fail things earlier than absolu
ve fsync, write WAL, fsync WAL,
send WAL, wait for slave fsync". And its expense is there all the time,
rather than just when the "no slave no go" situations arise.
And it doesn't reduce the transactions I need to verify by hand
either, because tha
On Sep 4, 2012 6:06 PM, "Andrew Dunstan" wrote:
>
>
> Frankly, I have had enough failures of parallel make that I think doing
> this would generate a significant number of non-repeatable failures (I had
> one just the other day that took three invocations of make to get right).
> So I'm not sure doing t
ute, and especially
> accounting for programs that need multiple backends.
>
> --
> fdr
--
Aidan Van Dyk
reason why I suggested up-thread trying to decouple
the *starting* of the backend from the "options" to PQconnect...
A "helper function" in libpq could easily start the backend, and
possibly return a conninfo string to give PQconnectdb...
But if they are decoupled, I could easily envisi
predictable error when it happens.
> E.g. a first step in the regression tests that just verifies what kind
> of line endings are in a file. Could maybe be as simple as checking
> the size of the file?
This leads to making sure you keep your "verification list" in source,
and
ing_strings is something the "community" wants to go
towards, then I say do it now, before we're locked into another release
and another year of it.
a.
--
Aidan Van Dyk
t sure why "streaming recovery" suddenly changes the requirements...
a.
--
Aidan Van Dyk
n't try and tell me you're just "poaching" files from a running
cluster's pg_xlog directory, because I'm going to cry...
a.
--
Aidan Van Dyk
ists, and one of the reasons we always see warnings about using
rsync instead of plain SCP, etc.
So ya, we should probably mention that somewhere in the docs. Section
24.3.6. Caveats?
a.
--
Aidan Van Dyk
efore the copy has finished (i.e.
the master is pushing the WAL over a WAN to a 2nd site), and have my
restore complete consistently...
a.
--
Aidan Van Dyk
want situation #2, and hopefully the knob to control how long it allows
a "stop" before going again can be a HUP'able knob so I can change it
occasionally without taking the server down...
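For reference, reloading a SIGHUP-able setting without a restart already
works like this (a generic illustration, not specific to the hypothetical
knob above):

    pg_ctl reload -D "$PGDATA"
    # or, from a session:
    psql -c "SELECT pg_reload_conf();"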
--
Aidan Van Dyk
ffs for everyone.
Would we (ya, the royal we) be willing to say that if you want the
benefit of removing the MVCC overhead of long-running queries you need
to run PITR backup/archive recovery, and if you want SR, you get a
closed-loop master-follows-slave-xmin behaviour?
a.
--
Aidan Van Dyk
playing all statements of
the transaction successively is a good idea...
a.
--
Aidan Van Dyk
"These" isn't wrong, but if people are being confused about the objects
"these" refer to, being explicit can at least avoid that confusion.
a.
--
Aidan Van Dyk
mize that by keeping more of them in buffers (shared, or OS
cache), but the WAL producer, by its very nature being a
multi-tasking, IO-heavy mix of random reads/writes, is always going to go
quicker than the single-stream, random-IO WAL consumer...
a.
--
Aidan Van Dyk
And I see now that he's doing a stream of read-only queries on a slave,
presumably with no WAL even being replayed...
Sorry for the noise
a.
* Aidan Van Dyk [100412 09:40]:
> * Robert Haas [100412 07:10]:
>
> > I think we need to investigate this more. It's
hink it sort of just died. I'm in favour of making sure we don't
give out any extra information, so if the objection to the message is
simply that "no pg_hba.conf entry" is "counterfactual" when there is an
entry rejecting it, how about:
et/fs configuration, but imagine how nice it will be
when it can all be done in userspace with just PG (and pg-compatible)
tool, etc...
--
Aidan Van Dyk
ffer to file + sync + replay
That should give you all the sync levels they talked about in their
presentation...
--
Aidan Van Dyk
ut my
deployment of PITR slaves w/ 9 wrt making sure I'm explicit in all the
settings I can find...
And I'll make sure I look more carefully at logs when deploying 9 as well
;-)
a.
--
Aidan Van Dyk
I'd probably be happy with PG 9 having a "default" config
of:
wal_mode = hot_standby
recovery_connections = on
Make it set to generate enough WAL and actually do recovery connections.
But also make the recovery_connections boolean really mean what it
been configured to run in a state it can't, I would prefer it
didn't run, not that it ran, but in a slightly different state...
But I know that's just a preference... And one from an old-school unix
admin too...
a.
--
Aidan Van Dyk
's
method for the switch to git for the linux kernel is often the best (if
not right) approach...
If you want, I know a guy in Ottawa that does really fantastic git
presentations... He's done them for many of the local *UGs. Is there
interest from the
he current CVS, because you can avoid the problem of
broken CVS checkouts...
Of course, if the repository was git, the buildfarm wouldn't need to
"worry" if the git repository/commit it's fetching is "a good
approximation of the CVS" ;-)
a.
--
Aidan Van Dyk
ing I can understand
and use to make a reasonable estimate as to when data I know is live on
the primary will be seen on the standby...
bonus points if it works similarly for archive recovery ;-)
a.
--
Aidan Van Dyk
* Magnus Hagander [100519 11:08]:
> How do the distros generally deal with that? E.g. do we have to wait
> for RHEL7 for it to actually show up in Red Hat?
Don't worry, 9.0 won't show up in Red Hat for a while yet either...
;-)
tages" of WAL processing on the remote...
--
Aidan Van Dyk
* the remote", or "it's a write *by*
the remote". But when combined with other terms, only one makes sense
in all cases.
--
Aidan Van Dyk
100s of GB of data
in my pg directory, the *only* corruption is that a single file,
pg_control, is missing?
a.
--
Aidan Van Dyk
n it..
> Yuck. The aim is to improve on what's done today ;)
>
> --
> Andres Freund http://www.2ndQuadrant.com/
ange directly from A, how does it
>> know to *not* apply it again?
> The lsn of the change.
So why isn't the LSN good enough for when C propagates the change back to A?
Why does A need more information than C?
a.
--
Aidan Van Dyk
On Wed, Jun 20, 2012 at 3:49 PM, Andres Freund wrote:
> On Wednesday, June 20, 2012 09:41:03 PM Aidan Van Dyk wrote:
>> On Wed, Jun 20, 2012 at 3:27 PM, Andres Freund
> wrote:
>> >> OK, so in this case, I still don't see how the "origin_id" is even
>>
"index only" because heap pages aren't
"all visible"...
a.
--
Aidan Van Dyk
ns in one file (hopefully
with deterministic ordering) and a sane, simple filename, than have
every function in every database in a separate file with some strange
mess in the filename that makes me cringe every time I see it.
a.
--
Aidan Van Dyk
're using operators, what would you think is an
appropriate name for the file the operator is dumped into?
a.
--
Aidan Van Dyk
t has to be worth their time *to
them* to use it.
Witness the hundreds of graves that are the thousands of bugzilla bugs out
there filed against even active open-source projects.
a.
--
Aidan Van Dyk
plication" camp are there because the guarantees of a simple RAID 1
just aren't good enough for us ;-)
a.
--
Aidan Van Dyk
".
Sure, many people don't *really* want that data durability guarantee,
even though they would like the "maybe guaranteed" version of it.
But that fine line is actually a difficult (impossible?) one to define
if you don't know, at the moment of decision, w
ceptible to that, and defending against it, no? ;-)
And they are susceptible to that if they are on PostgreSQL, Oracle, MS
SQL, DB2, etc.
a.
--
Aidan Van Dyk
greSQL up yet...
And I want to make sure the dev box that I was testing another slave setup
on, which is running in some test area by some other DBA, but not in the
same rack, *can't* through some mis-configuration make my master think
that its production slave has
ve less downtime, but find that
I'm missing valuable data that was committed, but happened to not be
replicated because no slave was available "yet".
Sync rep is about "data availability", "data recoverability", *and*
"downtime". The three are defi
* Robert Haas [100917 11:24]:
> On Fri, Sep 17, 2010 at 11:22 AM, Simon Riggs wrote:
> > On Fri, 2010-09-17 at 09:36 -0400, Aidan Van Dyk wrote:
> >
> >> I want to have them configured in an fsync-WAL-style sync rep, I want to
> >> make sure that if the master
ory is a complicated tangle of merges because you
constantly just re-merge the "CVS HEAD" into your dev branch, then it
might be time to just do a massive "diff" and "apply" anyways ;-)
a.
--
Aidan Van Dyk
On Tue, Sep 21, 2010 at 10:32 PM, Abhijit Menon-Sen wrote:
> That's not it. I ran the same git gc command on my old repository, and
> it didn't make any difference to the size. (I didn't try with a larger
> window size, though.)
Probably lots of it has to do with the delta chains themselves. Th
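If someone wants to experiment, an aggressive repack that rebuilds the
delta chains with a larger window looks roughly like this (the numbers are
just commonly used starting points, not a recommendation):

    git repack -a -d -f --window=250 --depth=250
    # or, with less control but simpler:
    git gc --aggressive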
On Wed, Sep 22, 2010 at 10:19 AM, Heikki Linnakangas
wrote:
>>> Should we allow multiple standbys with the same name to connect to
>>> the master?
>>
>> No. The point of naming them is to uniquely identify them.
>
> Hmm, that situation can arise if there's a network glitch which leads the
> stan
On Wed, Sep 22, 2010 at 8:12 AM, Simon Riggs wrote:
Not speaking to the necessity of standby registration, but...
>> Thinking of this as a sysadmin, what I want is to have *one place* I can
>> go and troubleshoot my standby setup. If I have 12 synch standbys and
>> they're creating too much load
On Wed, Sep 22, 2010 at 4:04 PM, Alvaro Herrera
wrote:
> As far as I can see, I need to go to the master clone, run a checkout
> and pull on each branch, and *then* a pull on the local clone updates to
> the latest head on that branch. It is not enough to pull when the
> master branch is checked
On Thu, Sep 23, 2010 at 11:49 AM, Tom Lane wrote:
> Magnus Hagander writes:
>> On Thu, Sep 23, 2010 at 17:32, Andrew Dunstan wrote:
>>> Are we sure that's going to stop the DOS issue?
>
>> As long as it's done right, I don't see how it wouldn't.
>
> There might be a cleaner way to do it, but aft
On Fri, Sep 24, 2010 at 7:47 AM, Simon Riggs wrote:
> On Fri, 2010-09-24 at 14:12 +0300, Heikki Linnakangas wrote:
>> What I'm saying is that in a two standby situation, if
>> you're willing to continue operation as usual in the master even if
>> the standby is down, you're not doing synchronous r
On Thu, Sep 30, 2010 at 2:09 AM, Heikki Linnakangas
wrote:
> Agreed. Actually, given the lack of people jumping in and telling us what
> they'd like to do with the feature, maybe it's not that important after all.
>> The basic features that I mean is for most basic use case, that is, one
>> mast
On Thu, Sep 30, 2010 at 10:24 AM, Magnus Hagander wrote:
>> That would allow some nice options. I've been thinking what would
>> be the ideal use of this with our backup scheme, and the best I've
>> thought up would be that each WAL segment file would be a single
>> output stream, with the optio
On Fri, Oct 1, 2010 at 11:27 AM, Tom Lane wrote:
> man git-pull sayeth
>
> In its default mode, git pull is shorthand for git fetch followed by
> git merge FETCH_HEAD.
>
> However, I just tried that and it failed rather spectacularly. How do
> you *really* update your local repo without a
On Fri, Oct 1, 2010 at 11:53 AM, Tom Lane wrote:
> Yeah, I don't want a merge. I have these config entries (as per our
> wiki recommendations):
>
> [branch "master"]
> rebase = true
> [branch]
> autosetuprebase = always
>
> and what I really want is to update all my workdirs the sa
On Mon, Oct 4, 2010 at 10:22 AM, Fujii Masao wrote:
> I have one question for clarity:
>
> If we make all the transactions wait until specified standbys have
> connected to the master, how do we take a base backup from the
> master for those standbys? We seem to be unable to do that because
> pg_
On Mon, Oct 4, 2010 at 11:48 PM, Fujii Masao wrote:
> How can we take a base backup for that synchronous standby? You mean
> that we should disable the wait-forever option, start the master, take
> a base backup, shut down the master, enable the wait-forever option,
> start the master, and start
up and synchronously replicating, it's *not* synchronous replication.
So I'm not arguing that there shouldn't be a way to turn off
synchronous replication once it's on. Hopefully without having to
take down the cluster (pg instance type cluster). But I a
On Thu, Oct 7, 2010 at 10:08 AM, Dimitri Fontaine
wrote:
> Aidan Van Dyk writes:
>> Sure, but that lagged standby is already asynchronous, not
>> synchronous. If it was synchronous, it would have slowed the master
>> down enough it would not be lagged.
>
> Agre
't* think that's what most people are wanting in their "I want 3
of 10 servers to ack the commit".
The difference between good async and sync is only the *guarantee*.
If you don't need the guarantee, you don't need the synchronous part.
a.
--
Aidan Van Dyk
demonstrated that the overhead to report it isn't high.
Again, in the deployments I'm wanting, the "slave" isn't a PG server,
but something like Magnus's stream-to-archive, so I can't query the
slave to see how far behind it is.
a.
--
Aidan Van Dyk
ey are accessed. Standard
unix permissions should easily allow that setup. chmod -w on the
directory the database files go in.
a.
--
Aidan Van Dyk
nformation than *needed*, but I can't see it
ever growing too big, and people doing forensics rarely complain about
having *too much* information available.
--
Aidan Van Dyk
k easier when they can't go to the
documented, tried, tested, "normal restore from backup/WAL".
None? Or as much as possible? And what are the tradeoffs?
--
Aidan Van Dyk
constant] (when talking spec-type jargon)
Never have I thought of the enum label as either a "value", or an
"element". That's not to say anyone else hasn't thought of them
differently. Obvously ;-)
--
Aidan Van Dyk
full-page-write in WAL is going to
take precedence on recovery.
a.
--
Aidan Van Dyk
n start using forks to put "other data", that means that
keeping the page layouts is easier, and thus binary upgrades are much
more feasible.
At least, that was my thought WRT checksums being out-of-page.
a.
--
Aidan Van Dyk
that
write could be inconsistent.
a.
--
Aidan Van Dyk
not write that page, and lose the work the hint-bits did, or do
a full-page WAL of it, so the torn-page checksum is fixed.
Both of these are theoretical performance tradeoffs. How badly do we
want to verify on read that it is *exactly* what we thought we wrote?
a.
--
Aidan Van Dyk
te penalty the 1st time they scan the
tables...
--
Aidan Van Dyk
" song when people
complain about "pg being too slow"
;-)
a.
--
Aidan Van Dyk
s when the "app side" knows the data's dispensable and
rebuildable.
a.
--
Aidan Van Dyk
nt is *better*
than giving everybody "yet another password" they have to manage, have
users not mis-manage, and make sure users don't mis-use...
So, yes, ident is only as secure as the *network and machines* it's
used on. Passwords are only as secure as
PUs with different
caches that are incoherent to have those problems.
a.
--
Aidan Van Dyk
's
guarding is in another cacheline, because that won't *necessarily*
force cache coherency in your local lock/variable memory.
--
Aidan Van Dyk
n't have some sort of
TAS/memory barrier/cache-coherency stuff in it ;-)
a.
--
Aidan Van Dyk
to write an answer to a file that I then read back in bash
a.
--
Aidan Van Dyk
On Tue, Apr 19, 2011 at 1:57 PM, Kevin Grittner
wrote:
> Aidan Van Dyk wrote:
>
>> And for the "first-hack-that-comes-to-mind", I find myself
>> pulling out the named fifo trick all the time, and just leaving my
>> for/loop/if logic in bash writing SQL com
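For illustration, a minimal version of that named-fifo trick - the fifo
path, database name and table names are placeholders; the point is that
psql keeps reading SQL from the fifo until the writing loop closes it:

    mkfifo /tmp/sql_pipe
    psql -d mydb < /tmp/sql_pipe &       # psql reads statements as they arrive
    exec 9>/tmp/sql_pipe                 # keep one writer open for the whole loop
    for t in a b c; do
        echo "SELECT count(*) FROM ${t};" >&9
    done
    exec 9>&-                            # closing the writer ends psql's input
    wait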