> I see no concurrency problems. If two or more backends visit the same
> tuple, they either write the same value to the same position which
> doesn't hurt, or one sees the other's changes which is a good thing.
AFAIR, on multi-CPU platforms it's possible that the second transaction could
see COMMITT
> > > Added to TODO:
> > >
> > > * Allow WAL information to recover corrupted pg_controldata
> > >...
> > > > Using pg_control to get the checkpoint position
> speeds up the
> > > > recovery process, but to handle possible
> corruption of pg_control,
> > > > we should actually im
> http://www.cs.mcgill.ca/~kemme/papers/vldb00.html
Thanks for the link, Darren, I think everyone interested
in the discussion should read it.
First, I like the approach. Second, I don't understand why
ppl oppose pg-r & 2pc. 2pc is just a simple protocol to
perform distributed commits *after* distributed co
> > > void
> > > heap_mark4fk_lock_acquire(Relation relation, HeapTuple tuple) {
Just wondering how you are going to implement it - is it by using
some kind of "read-locks", ie the FK transaction "locks" the PK to prevent
delete (this is known as the "pessimistic" approach)?
About two years ago we discussed with
> Wouldn't it work for cntxDirty to be set not by LockBuffer, but by
> XLogInsert for each buffer that is included in its argument list?
I thought to add a separate call to mark the context dirty, but the above
should work if all callers of XLogInsert always pass all
modified buffers - please check.
Vadim
> My presumption would be that if you initialize 2 databases to
> a known identical start, have all the same triggers and rules
> on both, then send all queries to both databases, you will
> have 2 identical databases at the end.
This is a wrong assumption. If
1st client executes UPDATE t SET a =
> > Well, PITR without log archiving could be alternative to
> > pg_dump/pg_restore, but I agreed that it's not the big
> > feature to worry about.
>
> Seems like a pointless "feature" to me. A pg_dump dump serves just
> as well to capture a snapshot --- in fact better, since it's likely
> small
> >> It should be sufficient to force a checkpoint when you
> >> start and when you're done --- altering normal operation
> in between is
> >> a bad design.
>
> > But you have to prevent log files reusing while you copy data files.
>
> No, I don't think so. If you are using PITR then you presu
> I really dislike the notion of turning off checkpointing. What if the
> backup process dies or gets stuck (eg, it's waiting for some
> operator to
> change a tape, but the operator has gone to lunch)? IMHO, backup
> systems that depend on breaking the system's normal
> operational behavior
>
> So I think what will work then is pg_copy (hot backup) would:
> 1) Issue an ALTER SYSTEM BEGIN BACKUP command which turns on
> atomic write,
> checkpoints the database and disables further checkpoints (so
> wal files
> won't be reused) until the backup is complete.
> 2) Change ALTER SYSTEM BAC
> Are you sure this is true for all ports?
Well, maybe you're right and it's not.
But with "after-image blocks in log after checkpoint"
you really shouldn't worry about block atomicity, right?
And the ability to turn block logging on/off, as suggested
by Richard, looks appropriate for everyone, no?
> > > As long as whole block is saved in log on first after
> > > checkpoint (you made before backup) change to block.
> >
> I thought half the point of PITR was to be able to
> turn off pre-image logging so you can trade potential
> recovery time for speed without fear of data-loss.
> Didn't we h
> > How do you get atomic block copies otherwise?
>
> Eh? The kernel does that for you, as long as you're reading the
> same-size blocks that the backends are writing, no?
Good point.
Vadim
---(end of broadcast)---
TIP 2: you can get off all lis
> > You don't need it.
> > As long as whole block is saved in log on first after
> > checkpoint (you made before backup) change to block.
>
> I thought half the point of PITR was to be able to turn
> off pre-image logging so you can trade potential recovery
Correction - *after*-image.
> time fo
> > So, we only have to use shared buffer pool for local (but probably
> > not for temporary) relations to close this issue, yes? I personally
> > don't see any performance issues if we do this.
>
> Hmm. Temporary relations are a whole different story.
>
> It would be nice if updates on temp re
> > (In particular, I *strongly* object to using the buffer
> manager at all
> > for reading files for backup. That's pretty much
> guaranteed to blow out
> > buffer cache. Use plain OS-level file reads. An OS
> directory search
> > will do fine for finding what you need to read, too.)
>
>
> > The predicate for files we MUST (fuzzy) copy is:
> > File exists at start of backup && File exists at end of backup
>
> Right, which seems to me to negate all these claims about needing a
> (horribly messy) way to read uncommitted system catalog entries, do
> blind reads, etc. What's wron
> Attached is a patch against current CVS that fixes both of the known
> problems with sequences: failure to flush XLOG after a transaction
Great! Thanks... and sorry for missing these cases a year ago -:)
Vadim
> > This isn't an issue for a SELECT nextval() standing on
> > its own AFAIK the result will not be transmitted to the
> > client until after the commit happens. But it would be
> > an issue for a select executed inside a transaction
> > block (begin/commit).
>
> The behavior of SELECT nextval()
> >> Um, Vadim? Still of the opinion that elog(STOP) is a good
> >> idea here? That's two people now for whom that decision has
> >> turned localized corruption into complete database failure.
> >> I don't think it's a good tradeoff.
>
> > One is able to use pg_resetxlog so I don't see point in
>
> > But what about BEFORE insert/update triggers which could
> > insert records too?
>
> Well, what about them? It's already possible for a later
> BEFORE trigger to cause the actual insertion to be suppressed,
> so I don't see any difference from what we have now.
> If a BEFORE trigger takes ac
> The effects don't stop propagating there, either. The decision
> not to insert the tuple must be reported up still further, so
> that the executor knows not to run any AFTER INSERT/UPDATE
> triggers and knows not to count the tuple as inserted/updated
> for the command completion report.
But wh
> Moving the test to a system with SCSI disks gave different results.
> There is NO difference between having the indexes on the same disk or
> different disk with the data while running pgbench. So I
> leave it up to you guys as to include the patch or not. I do believe
> that even if performa
> Also I have been running this patch (both 7.1.3 and 7.2devel) against
> some of my company's applications. I have loaded a small database 10G
We are not familiar with your applications. It would be better to see
results of a test suite available to the community; pgbench is the first
to come to mind.
> I don't understand the WAL issue below, can you explain. The dir name
> is the same name as the database with _index added to it. This is how
> the current datpath stuff works. I really just copied the datpath
> code to get this patch to work...
At the time of after-crash recovery WAL is not a
> > The more general and "standard" way to go are TABLESPACEs.
> > But probably proposed feature will be compatible with
> > tablespaces, when we've got them:
>
> Will it be? I'm afraid of creating a backwards-compatibility
> problem for ourselves when it comes time to implement tablespaces.
As
> > Attached is a patch that adds support for specifying a
> > location for indexes via the "create database" command.
> >
> > I believe this patch is complete, but it is my first .
>
> This patch allows index locations to be specified as
> different from data locations. Is this a feature direc
> > Sep 6 02:09:30 mx postgres[13468]: [9] FATAL 2:
> > XLogFlush: request(1494286336, 786458) is not satisfied --
> > flushed to (23, 2432317444)
First note that Denis could just restart with wal_debug = 1
to see the bad request, without a code change. (We should ask ppl
to set wal_debug ON in the
> we need to control database changes within BEFORE triggers.
> There is no problem with triggers called by update, but there is
> a problem with triggers called by insert.
>
> We strongly need to know the oid of a newly inserted tuple.
> In this case, we use tg_newtuple of the TriggerData struct
> So, rather than going over everyone's IANAL opinions about mixing
> licenses, let's just let Massimo know that it'd just be a lot
> easier to PostgreSQL/BSD license the whole thing, if he doesn't
> mind too much.
Yes, it would be better.
Vadim
> Because the code we got from Berkeley was BSD licensed, we
> can't change it, and because many of us like the BSD license
> better because we don't want to require them to release the
> source code, we just want them to use PostgreSQL. And we
> think they will release the source code eventually
> > Besides, anyone who actually wanted to use the userlock
> > code would need only to write their own wrapper functions
> > to get around the GPL license.
>
> This is a part of copyright law that eludes me - can i write
> a replacement function for something so simple that it can
> essentially
> Oops I'm referring to client side cursors in our ODBC
> driver. We have no cross-transaction cursors yet though
> I'd like to see a backend cross-transaction cursor also.
Oops, sorry.
BTW, what are the "visibility" rules for the ODBC cross-tx cursor?
No Repeatable Reads, no Serializability?
Do you hold s
> > AFAICS, if you are holding an open SQL cursor, it is sufficient
> > to check that ctid hasn't changed to know that you have the
> > same, un-updated tuple. Under MVCC rules, VACUUM will be unable
> > to delete any tuple that is visible to your open transaction,
> > and so new-style VACUUM canno
> > Application would explicitly call user_lock() functions in
> > queries, so issue is still not clear for me. And once again -
>
> Well, yes, it calls user_lock(), but the communication is not
> OS-linked, it is linked over a network socket, so I don't think
> the GPL spreads over a socket. Jus
> > > I assume any code that uses contrib/userlock has to be GPL'ed,
> > > meaning it can be used for commercial purposes but can't be sold
> > > as binary-only, and actually can't be sold for much because you
> > > have to make the code available for near-zero cost.
> >
> > I'm talking not about
> > For example, one could use user-locks for processing incoming
> > orders by multiple operators:
> > select * from orders where user_lock(orders.oid) = 1 LIMIT 1
> > - so each operator would lock one order for processing and
> > operators wouldn't block each other. So, could such
> > applicatio
> > If the licence becomes a problem I can easily change it,
> > but I prefer the GPL if possible.
>
> We just wanted to make sure the backend changes were not
> under the GPL.
No, Bruce - the backend part of the code is useless without the interface
functions, and I wonder doesn't the GPL-ed interface implement
> > I don't see problem here - just a few bytes in shmem for
> > key. Auxiliary table would keep refcounters for keys.
>
> I think that running out of shmem *would* be a problem for such a
> facility. We have a hard enough time now sizing the lock table for
The auxiliary table would have a fixed size
> Regarding the licencing of the code, I always release my code
> under GPL, which is the licence I prefer, but my code in the
> backend is obviously released under the original postgres
> licence. Since the module is loaded dynamically and not linked
> into the backend I don't see a problem here.
> yep:
> lock "tablename.colname.val=1"
> select count(*) from tablename where colname=1
> If no rows, insert, else update.
> (dunno if the locks would scale to a scenario with hundreds
> of concurrent inserts - how many user locks max?).
I don't see a problem here - just a few bytes in shmem for
k
> Would your suggested implementation allow locking on an
> arbitrary string?
Well, placing a string in LOCKTAG is not good, so we could
create an auxiliary hash table in shmem to keep such strings
and use the string's address as part of LOCKTAG. A new function
(LockRelationKey?) in lmgr.c would first find/pla
1. Just noted this in contrib/userlock/README.user_locks:
> User locks, by Massimo Dal Zotto <[EMAIL PROTECTED]>
> Copyright (C) 1999, Massimo Dal Zotto <[EMAIL PROTECTED]>
>
> This software is distributed under the GNU General Public License
> either version 2, or (at your option) any later ver
1. Just changed
TAS(lock) to pthread_mutex_trylock(lock)
S_LOCK(lock) to pthread_mutex_lock(lock)
S_UNLOCK(lock) to pthread_mutex_unlock(lock)
(and S_INIT_LOCK to share mutex-es between processes).
2. pgbench was initialized with scale 10.
SUN WS 10 (512Mb), Solaris 2.6
> > > We could use POSIX spinlocks/semaphores now but we
> > > don't because of performance, right?
> >
> > No. As long as no one proved with test that mutexes are bad for
> > performance...
> > Funny, such test would require ~ 1 day of work.
>
> Good question. I know the number of function call
> > > * Order duplicate index entries by tid
> >
> > In other words - add tid to index key: very old idea.
>
> I was thinking during index creation, it would be nice to
> order them by tid, but not do lots of work to keep it that way.
I hear this "not do lots of work" so often from you -:)
Da
> New TODO entries are:
>
> * Order duplicate index entries by tid
In other words - add tid to index key: very old idea.
> * Add queue of backends waiting for spinlock
We shouldn't mix two different approaches for different
kinds of short-time internal locks - in one case we need
> Yes, nowhere near, and yes. Sequence objects require disk I/O to
> update; the OID counter essentially lives in shared memory, and can
> be bumped for the price of a spinlock access.
Sequences also cache values (32 AFAIR) - ie one log record is required
for 32 nextval-s. The sequence's data file is
> OK, we need to vote on whether Oid's are optional,
> and whether we can have them not created by default.
Optional OIDs: YES
No OIDs by default: YES
> > > However, OID's keep our system tables together.
> >
> > How?! If we want to find a function with oid X we query
> > pg_proc, if we want
> If you want to make oids optional on user tables,
> we can vote on that.
Let's vote. I've been proposing optional oids for 2-3 years,
so you know how I'll vote -:)
> However, OID's keep our system tables together.
How?! If we want to find a function with oid X we query
pg_proc, if we want to find a tab
> >> Given this, I'm wondering why we bother with having a separate
> >> XidGenLock spinlock at all. Why not eliminate it and use SInval
> >> spinlock to lock GetNewTransactionId and ReadNewTransactionId?
>
> > Reading all MyProc in GetSnapshot may take a long time - why disallow
> > new Tx to begi
> > Isn't spinlock just a few ASM instructions?... on most platforms...
>
> If we change over to something that supports read vs write locking,
> it's probably going to be rather more than that ... right now, I'm
> pretty dissatisfied with the performance of our spinlocks under load.
We shouldn'
> > Why is it possible in Oracle' world? -:)
>
> Because of their limited features?
And now we limit our additional advanced features -:)
> Think about a language like PL/Tcl. At the time you call a
> script for execution, you cannot even be sure that the Tcl
> bytecode c
> > In good world rules (PL functions etc) should be automatically
> > marked as dirty (ie recompilation required) whenever referenced
> > objects are changed.
>
> Yepp, and it'd be possible for rules (just not right now).
> But we're not in a really good world, so it'll not be
> Anyway, what's the preferred syntax for triggering the rule
> recompilation? I thought about
>
> ALTER RULE {rulename|ALL} RECOMPILE;
>
> Where ALL triggers only those rules where the user actually
> has RULE access right on a relation.
In good world rules (PL fun
> Oh, now I get it: the point is to prevent Tx Old from exiting the set
> of "still running" xacts as seen by Tx S. Okay, it makes sense.
> I'll try to add some documentation to explain it.
TIA! I've had no time since '99 -:)
> Given this, I'm wondering why we bother with having a separate
> XidGen
> > You forget about Tx Old! The point is that changes made by
> > Tx Old *over* Tx New' changes effectively make those Tx New'
> > changes *visible* to Tx S!
>
> Yes, but what's that got to do with the order of operations in
> GetSnapshotData? The scenario you describe can occur anyway.
Try to
> I am trying to understand why GetSnapshotData() needs to acquire the
> SInval spinlock before it calls ReadNewTransactionId, rather than after.
> I see that you made it do so --- in the commit at
>
http://www.ca.postgresql.org/cgi/cvsweb.cgi/pgsql/src/backend/storage/ipc/shmem.c.diff?r1=1.41&r2
> With stock PostgreSQL... how many committed transactions can one lose
> on a simple system crash/reboot? With Oracle or Informix, the answer
> is zero. Is that true with PostgreSQL in fsync mode? If not, does it
It's true, or better to say should be, keeping in mind the probability of bugs.
> lose all
> On further thought, btbuild is not that badly broken at the moment,
> because CREATE INDEX acquires ShareLock on the relation, so
> there can be no concurrent writers at the page level. Still, it
> seems like it'd be a good idea to do "LockBuffer(buffer,
BUFFER_LOCK_SHARE)"
> here, and probably
> Any better ideas out there?
Names were always hard for me -:)
> Where did the existing lock type names
> come from, anyway? (Not SQL92 or SQL99, for sure.)
Oracle. Except for Access Exclusive/Share Locks.
Vadim
> > Incrementing comand counter is not enough - dirty reads are required
> > to handle concurrent PK updates.
>
> What's that with you and dirty reads? Every so often you tell
> me that something would require them - you really like to
> read dirty things - no? :-)
Dirty things o
> > > update a set a=a+1 where a>2;
> > > ERROR: Cannot insert a duplicate key into unique index a_pkey
> >
> > We use uniq index for UK/PK but shouldn't. Jan?
>
> What else can you use than an index? A "deferred until
> statement end" trigger checking for duplicates? Think it'
> > update a set a=a+1 where a>2;
> > ERROR: Cannot insert a duplicate key into unique index a_pkey
>
> This is a known problem with unique constraints, but it's not
> easy to fix.
Yes, it requires dirty reads.
Vadim
> Problem can be demonstrated by following example
>
> create table a (a numeric primary key);
> insert into a values (1);
> insert into a values (2);
> insert into a values (3);
> insert into a values (4);
> update a set a=a+1 where a>2;
> ERROR: Cannot insert a duplicate key into unique index
> Here are some disadvantages to using a "trigger based" approach:
>
> 1) Triggers simply transfer individual data items when they
> are modified, they do not keep track of transactions.
I don't know about other *async* replication engines but Rserv
keeps track of transactions (if I understood
> I had a baby girl on Tuesday. I am working through my
> backlogged emails
> today.
Congratulations -:)
Vadim
> > > OTOH it is possible to do without rolling back at all as
> > > MySQL folks have shown us ;)
> >
> > Not with SDB tables which support transactions.
>
> My point was that MySQL was used quite a long time without it
> and still quite many useful applications were produced.
And my point was
> > > Seems overwrite smgr has mainly advantages in terms of
> > > speed for operations other than rollback.
> >
> > ... And rollback is required for < 5% transactions ...
>
> This obviously depends on application.
A small number of aborted transactions was used to show the
uselessness of UNDO in terms
> > > So are whole pages stored in rollback segments or just
> > > the modified data?
> >
> > This is implementation dependent. Storing whole pages is
> > much easy to do, but obviously it's better to store just
> > modified data.
>
> I am not sure it is necessarily better. Seems to be a tradeof
> > Removing dead records from rollback segments should
> > be faster than from datafiles.
>
> Is it for better locality or are they stored in a different way ?
Locality - all dead data would be localized in one place.
> Do you think that there is some fundamental performance advantage
> in mak
> > > >Oracle has MVCC?
> > >
> > > With restrictions, yes.
> >
> > What restrictions? Rollback segments size?
>
> No, that is not the whole story. The problem with their
> "rollback segment approach" is, that they do not guard against
> overwriting a tuple version in the rollback segment.
> Th
> Do we want to head for an overwriting storage manager?
>
> Not sure.
>
> Advantages: UPDATE has easy space reuse because usually done
> in-place, no index change on UPDATE unless key is changed.
>
> Disadvantages: Old records have to be stored somewhere for MVCC use.
> Could limit transa
> > > >Oracle has MVCC?
> > >
> > > With restrictions, yes.
> >
> > What restrictions? Rollback segments size?
> > Non-overwriting smgr can eat all disk space...
>
> Isn't the same true for an overwriting smgr ? ;)
Removing dead records from rollback segments should
be faster than from datafile
> > - A simple typo in psql can currently cause a forced
> > rollback of the entire TX. UNDO should avoid this.
>
> Yes, I forgot to mention this very big advantage, but undo is
> not the only possible way to implement savepoints. Solutions
> using CommandCounter have been discussed.
This would
> I think so too. I've never said that an overwriting smgr
> is easy and I don't love it particularily.
>
> What I'm objecting is to avoid UNDO without giving up
> an overwriting smgr. We shouldn't be noncommittal now.
Why not? We could decide to do an overwriting smgr later
and implement UNDO then
> If PostgreSQL wants to stay MVCC, then we should imho forget
> "overwriting smgr" very fast.
>
> Let me try to list the pros and cons that I can think of:
> Pro:
> no index modification if key stays same
> no search for free space for update (if tuple still
> fits into page)
> >> Impractical ? Oracle does it.
> >
> >Oracle has MVCC?
>
> With restrictions, yes.
What restrictions? Rollback segments size?
Non-overwriting smgr can eat all disk space...
> You didn't know that? Vadim did ...
Didn't I mention a few times that I was
inspired by Oracle? -:)
Vadim
> > And, I cannot say that I would implement UNDO because of
> > 1. (cleanup) OR 2. (savepoints) OR 4. (pg_log management)
> > but because of ALL of 1., 2., 4.
>
> OK, I understand your reasoning here, but I want to make a comment.
>
> Looking at the previous features you added, like subqueries,
> (buf->r_locks)--;
> if (!buf->r_locks)
> *buflock &= ~BL_R_LOCK;
>
>
> Or I am missing something...
buflock is a per-backend flag, it's not in shmem. A backend is
allowed only a single lock per buffer.
Vadim
> > 1. Compact log files after checkpoint (save records of uncommitted
> >transactions and remove/archive others).
>
> On the grounds that undo is not guaranteed anyway (concurrent
> heap access), why not simply forget it,
We can set a flag in ItemData and register a callback function in
buffer
> Correct me if I am wrong, but both cases do present a problem
> currently in 7.1. The WAL log will not remove any WAL files
> for transactions that are still open (even after a checkpoint
> occurs). Thus if you do a bulk insert of gigabyte size you will
> require a gigabyte sized WAL directory.
> > We could keep share buffer lock (or add some other kind of lock)
> > untill tuple projected - after projection we need not to read data
> > for fetched tuple from shared buffer and time between fetching
> > tuple and projection is very short, so keeping lock on buffer will
> > not impact concu
> From: Mikheev, Vadim
> Sent: Monday, May 21, 2001 10:23 AM
> To: 'Jan Wieck'; Tom Lane
> Cc: The Hermit Hacker; 'Bruce Momjian';
> [EMAIL PROTECTED]
Strange address, Jan?
> Subject: RE: [HACKERS] Plans for solving the VACUUM problem
>
>
>
> > My point is that we'll need in dynamic cleanup anyway and UNDO is
> > what should be implemented for dynamic cleanup of aborted changes.
>
> I do not yet understand why you want to handle aborts different than
> outdated tuples.
Maybe because aborted tuples have a shorter Time-To-Live.
And
> > We could keep share buffer lock (or add some other kind of lock)
> > untill tuple projected - after projection we need not to read data
> > for fetched tuple from shared buffer and time between fetching
> > tuple and projection is very short, so keeping lock on buffer will
> > not impact concu
> > Really?! Once again: WAL records give you *physical*
> > address of tuples (both heap and index ones!) to be
> > removed and size of log to read records from is not
> > comparable with size of data files.
>
> So how about a background "vacuum like" process, that reads
> the WAL and does the c
> I hope we can avoid on-disk FSM. Seems to me that that would create
> problems both for performance (lots of extra disk I/O) and reliability
> (what happens if FSM is corrupted? A restart won't fix it).
We can use WAL for FSM.
Vadim
> > It probably will not cause more IO than vacuum does right now.
> > But unfortunately it will not reduce that IO.
>
> Uh ... what? Certainly it will reduce the total cost of vacuum,
> because it won't bother to try to move tuples to fill holes.
Oh, you're right here, but the daemon will most lik
> Vadim, can you remind me what UNDO is used for?
Ok, last reminder -:))
On transaction abort, read WAL records and undo (rollback)
changes made in storage. Would allow:
1. Reclaim space allocated by aborted transactions.
2. Implement SAVEPOINTs.
Just to remind -:) - in the event of error di
> I see postgres 7.1.1 is out now. Was the fix for this
> problem included in the new release?
I fear it will be in 7.2 only.
> On Thursday 29 March 2001 20:02, Philip Warner wrote:
> > At 19:14 29/03/01 -0800, Mikheev, Vadim wrote:
> > >> >Reported problem is
> I have been thinking about the problem of VACUUM and how we
> might fix it for 7.2. Vadim has suggested that we should
> attack this by implementing an overwriting storage manager
> and transaction UNDO, but I'm not totally comfortable with
> that approach: it seems to me that it's an awfully
> Yep, WAL collects all database changes into one file. Easy to see how
> some other host trying to replication a different host would find the
> WAL contents valuable.
Unfortunately, the slave database(s) would have to be on the same platform
(hardware+OS) to be able to use information about *physical*
ch
> > > Row reuse without vacuum
> >
> > Yes, it will help to remove uncommitted rows.
>
> Same question as I asked Bruce ... how? :) I wasn't trying to be
> facetious when I asked, I didn't realize the two were
> connected and am curious as to how :)
After implementing UNDO operation (we
> WAL was a difficult feature to add to 7.1. Currently, it is only used
> as a performance benefit, but I expect it will be used in the future to
Not only. Did you forget about btree stability?
Partial disk writes?
> add new features like:
>
> Advanced Replication
I'm for sure not fan o
> What's the deal with vacuum lazy in 7.1? I was looking
> forward to it. It was never clear whether or not you guys
> decided to put it in.
>
> If it is in as a feature, how does one use it?
> If it is a patch, how does one get it?
> If it is neither a patch nor an existing feature, has
> develo
> As Tom's mentioned the other day, we're looking at doing up v7.1.1 on
> Tuesday, and starting in on v7.2 ...
>
> Does anyone have any outstanding fixes for v7.1.x that they
> want to see in *before* we do this release? Any points unresolved
> that anyone knows about that we need to look at?
Hi
> > There's a report of startup recovery failure in Japan.
> >
> >> DEBUG: redo done at (1, 3923880100)
> >> FATAL 2: XLogFlush: request is not satisfied
> >> postmaster: Startup proc 4228 exited with status 512 - abort
>
> Is this person using 7.1 release, or a beta/RC version? That looks
> j
> One idea Tom had was to make it only active in a transaction,
> so you do:
>
> BEGIN WORK;
> SET TIMEOUT TO 10;
> UPDATE tab SET col = 3;
> COMMIT
>
> Tom is concerned people will do the SET and forget to RESET
> it, causing all queries to be affected by the timeout.