On Thu, Apr 07, 2005 at 02:34:03PM +0800, Qingqing Zhou wrote:
>
> "Alvaro Herrera" <[EMAIL PROTECTED]> writes
> > Because we can't reuse MultiXactIds at system crash (else we risk taking
> > an Id which is already stored in some tuple), we need to XLog it. Not
> > at the locking operation, because we don't want to log that one (too
> > expensive.) We can log the curre
"Alvaro Herrera" <[EMAIL PROTECTED]> writes
> Because we can't reuse MultiXactIds at system crash (else we risk taking
> an Id which is already stored in some tuple), we need to XLog it. Not
> at the locking operation, because we don't want to log that one (too
> expensive.) We can log the curre
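The preview cuts off, but the direction is clear from its first lines: the
MultiXactId counter must be WAL-logged so that crash recovery never hands out
an Id already stored in some tuple, while the locking operation itself stays
out of WAL. One plausible reading is to log the counter ahead in batches; a
minimal sketch under that assumption (the names and the batching constant are
illustrative, not the actual implementation):

#include <stdint.h>
#include <stdio.h>

typedef uint32_t MultiXactId;

#define MXACT_LOG_BATCH 64      /* how far ahead each WAL record covers */

/* Stand-in for an XLogInsert() of a counter-advance record. */
static void
LogNextMultiXactId(MultiXactId upto)
{
    printf("WAL: next MultiXactId may reach %u\n", upto);
}

static MultiXactId nextMXact = 1;        /* next Id to hand out */
static MultiXactId loggedUpToMXact = 0;  /* highest Id covered by WAL */

/*
 * Hand out a fresh MultiXactId, writing WAL only once per MXACT_LOG_BATCH
 * values.  Crash recovery restores loggedUpToMXact from the last record,
 * so an Id already stored in a tuple header can never be handed out again,
 * and the common locking path writes no WAL at all.
 */
static MultiXactId
GetNewMultiXactId(void)
{
    MultiXactId result = nextMXact++;

    if (nextMXact > loggedUpToMXact)
    {
        loggedUpToMXact = nextMXact + MXACT_LOG_BATCH;
        LogNextMultiXactId(loggedUpToMXact);
    }
    return result;
}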
Hackers,
Again I come back to this topic. Now I have a better hold on an idea
that is hopefully implementable and doesn't have expensive performance
penalties.
Forget the idea of using the regular lock manager directly on tuples.
It's folly because we can't afford to have that many locks. Inste
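The snippet breaks off, but the alternative appears to be the MultiXactId
machinery discussed above: a tuple's single xmax field cannot name two
lockers, so store one MultiXactId there and keep the member XIDs in a
separate structure. A toy sketch of that mapping, with plain arrays standing
in for the real pg_clog-like storage (all names here are illustrative):

#include <stdint.h>

typedef uint32_t TransactionId;
typedef uint32_t MultiXactId;

#define MAX_MULTIXACTS 1024
#define MAX_MEMBERS    8

/* Toy member storage; the real thing would be a pg_clog-like on-disk area. */
static TransactionId mxact_members[MAX_MULTIXACTS][MAX_MEMBERS];
static int           mxact_nmembers[MAX_MULTIXACTS];
static MultiXactId   next_mxid = 1;

/*
 * Two transactions share-lock one tuple: mint an Id naming them both.
 * The returned MultiXactId is what would be stored in the tuple's xmax.
 */
static MultiXactId
multixact_create(TransactionId xid1, TransactionId xid2)
{
    MultiXactId m = next_mxid++;

    mxact_members[m][0] = xid1;
    mxact_members[m][1] = xid2;
    mxact_nmembers[m] = 2;
    return m;
}

/* A further share-locker arrives (no overflow handling in this toy). */
static void
multixact_add(MultiXactId m, TransactionId xid)
{
    mxact_members[m][mxact_nmembers[m]++] = xid;
}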
Manfred Koizar <[EMAIL PROTECTED]> writes:
> On Thu, 30 Dec 2004 13:36:53 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Certainly not; indexes depend on locks, not vice versa. You'd not be
>> able to do that without introducing an infinite recursion into the
>> system design.
> Wouldn't you have to face the same sort of problems if you spill part o
On Thu, 30 Dec 2004 13:36:53 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
>Certainly not; indexes depend on locks, not vice versa. You'd not be
>able to do that without introducing an infinite recursion into the
>system design.
Wouldn't you have to face the same sort of problems if you spill part o
Manfred Koizar <[EMAIL PROTECTED]> writes:
> So the question is whether starting by making nbtree more flexible isn't
> the lower hanging fruit...
Certainly not; indexes depend on locks, not vice versa. You'd not be
able to do that without introducing an infinite recursion into the
system design.
On Wed, 29 Dec 2004 19:57:15 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
>Manfred Koizar <[EMAIL PROTECTED]> writes:
>> I don't see too much of a difference between #1 (an on-disk structure
>> buffered in shared memory) and #2 (a shared memory structure spilling
>> to disk).
>
>If you stand back that far, maybe you can't see a difference ;-) ...
>but the proposal on the table
Manfred Koizar <[EMAIL PROTECTED]> writes:
> I don't see too much of a difference between #1 (an on-disk structure
> buffered in shared memory) and #2 (a shared memory structure spilling
> to disk).
If you stand back that far, maybe you can't see a difference ;-) ...
but the proposal on the table
On Mon, 20 Dec 2004 21:44:01 +0100, <[EMAIL PROTECTED]> wrote:
>Tom Lane <[EMAIL PROTECTED]> wrote on 20.12.2004, 19:34:21:
>> #1 could have a pretty serious performance impact, too. For small
>> numbers of FOR UPDATE locks (too few to force spill to disk) I would
>> expect #2 to substantially bea
On Thu, 16 Dec 2004 21:54:14 -0300, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> Else, it will have to wait, using XactLockTableWait, for
> the first transaction in the array that is still running. We can be
> sure that no one will try to share-lock the tuple while we check the
> btree because we hol
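A rough sketch of the waiting protocol described there, assuming the member
XIDs can be fetched back from the MultiXactId; XactLockTableWait is the
existing primitive that blocks until a given transaction commits or aborts.
Stub bodies stand in for the real backend routines:

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;
typedef uint32_t MultiXactId;

#define MAX_MEMBERS 8

/* Stubs standing in for the real backend routines of the same intent. */
static bool TransactionIdIsInProgress(TransactionId xid) { (void) xid; return false; }
static void XactLockTableWait(TransactionId xid)         { (void) xid; }
static int  multixact_get_members(MultiXactId m, TransactionId *xids)
{
    (void) m; (void) xids;
    return 0;
}

/*
 * To upgrade to an exclusive lock on a tuple whose xmax is a MultiXactId,
 * sleep on each member transaction that is still running, one after the
 * other, until every share-locker has finished.
 */
static void
multixact_wait(MultiXactId multi)
{
    TransactionId members[MAX_MEMBERS];
    int           nmembers = multixact_get_members(multi, members);

    for (int i = 0; i < nmembers; i++)
        if (TransactionIdIsInProgress(members[i]))
            XactLockTableWait(members[i]);
}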
On Mon, Dec 20, 2004 at 03:09:24PM -0300, Alvaro Herrera wrote:
> To solve the problem I want to solve, we have three orthogonal
> possibilities:
>
> 1. implement shared row locking using the ideas outlined in the mail
> starting this thread (pg_clog-like seems to be the winner, details TBD).
>
>
> In general, I agree with Tom: I haven't seen many programs that use
> extended SELECT FOR UPDATE logic. However, the ones I have seen have
> been batch style programs written using a whole-table cursor - these
> latter ones have been designed for the cursor stability approach.
I think if we add
Tom Lane <[EMAIL PROTECTED]> wrote on 20.12.2004, 19:34:21:
> Alvaro Herrera writes:
> > To solve the problem I want to solve, we have three orthogonal
> > possibilities:
>
> > 1. implement shared row locking using the ideas outlined in the mail
> > starting this thread (pg_clog-like seems to be
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> To solve the problem I want to solve, we have three orthogonal
> possibilities:
> 1. implement shared row locking using the ideas outlined in the mail
> starting this thread (pg_clog-like seems to be the winner, details TBD).
> 2. implement shared lock
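For context, the pg_clog-like design in option 1 is an on-disk array
addressed by Id, accessed through a handful of page buffers kept in shared
memory. A much-simplified, single-process sketch of that access pattern
(sizes and the LRU scheme are illustrative only):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   8192
#define NUM_BUFFERS 8

typedef struct
{
    long pageno;      /* which page of the file, -1 if unused */
    long lru_stamp;   /* recency counter for eviction */
    char data[PAGE_SIZE];
} SlruBuffer;

static SlruBuffer buffers[NUM_BUFFERS];
static long       lru_counter;

static void
slru_init(void)
{
    for (int i = 0; i < NUM_BUFFERS; i++)
        buffers[i].pageno = -1;
}

/* Return a buffer holding the given page, evicting the LRU one on a miss. */
static SlruBuffer *
slru_read_page(FILE *fp, long pageno)
{
    int victim = 0;

    for (int i = 0; i < NUM_BUFFERS; i++)
    {
        if (buffers[i].pageno == pageno)
        {
            buffers[i].lru_stamp = ++lru_counter;
            return &buffers[i];         /* hit: no disk access needed */
        }
        if (buffers[i].lru_stamp < buffers[victim].lru_stamp)
            victim = i;
    }

    /* Miss: read the page from disk into the evicted slot. */
    memset(buffers[victim].data, 0, PAGE_SIZE);
    fseek(fp, pageno * PAGE_SIZE, SEEK_SET);
    size_t nread = fread(buffers[victim].data, 1, PAGE_SIZE, fp);
    (void) nread;                       /* pages past EOF simply read as zeroes */
    buffers[victim].pageno = pageno;
    buffers[victim].lru_stamp = ++lru_counter;
    return &buffers[victim];
}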
On Mon, Dec 20, 2004 at 11:47:41AM -0500, Tom Lane wrote:
> To me, "performance buster" is better than "random, unrepeatable
> deadlock failures". In any case, if we find we *can't* implement this
> in a non-performance-busting way, then it would be time enough to look
> at alternatives that forc
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> I may be over my head here, but I think lock spillover is dangerous. In
> the extreme situations where this would happen, it would be a real
> performance buster. Personally, I would rather see locks escalate when
> the table gets full, or at least a
Tom Lane wrote:
> Gavin Sherry <[EMAIL PROTECTED]> writes:
> > I think if we allow the lock manager to spill to disk (and I think we do
> > need to allow it) then we should also be able to control the amount of
> > shared memory allocated.
>
> You mean like max_locks_per_transaction?
IMO, max_loc
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Gavin also mentioned to me we should also control the amount of memory
> the shared inval queue uses.
Perhaps, but I've really seen no evidence that there's a need to worry
about that. Without demonstrated problems I'd sooner keep that code a
bit simpl
Gavin Sherry <[EMAIL PROTECTED]> writes:
> I think if we allow the lock manager to spill to disk (and I think we do
> need to allow it) then we should also be able to control the amount of
> shared memory allocated.
You mean like max_locks_per_transaction?
regards, tom lane
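For context on Tom's question: max_locks_per_transaction does not cap a
single backend; it only sizes the shared lock table at roughly
max_locks_per_transaction * max_connections entries at server start, which is
why a burst of FOR UPDATE locks can exhaust the shared pool. A back-of-envelope
calculation with defaults of that era (figures assumed, not authoritative):

#include <stdio.h>

int
main(void)
{
    /* Defaults assumed for illustration. */
    int  max_locks_per_transaction = 64;
    int  max_connections = 100;

    long slots = (long) max_locks_per_transaction * max_connections;

    /*
     * Any one backend may hold more than 64 locks as long as the shared
     * pool as a whole has free slots -- hence the spill-to-disk debate.
     */
    printf("shared lock table slots: %ld\n", slots);
    return 0;
}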
On Mon, Dec 20, 2004 at 06:23:24PM +1100, Gavin Sherry wrote:
> On Sat, 18 Dec 2004, Bruce Momjian wrote:
> > Agreed. One concern I have about allowing the lock table to spill to
> > disk is that a large number of FOR UPDATE locks could push out lock
> > entries used by other backends, causing ve
On Mon, 2004-12-20 at 06:34, Jim C. Nasby wrote:
> On Sun, Dec 19, 2004 at 11:35:02PM +0200, Heikki Linnakangas wrote:
> > On Sun, 19 Dec 2004, Tom Lane wrote:
> >
> > >Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> > >>On Sun, 19 Dec 2004, Alvaro Herrera wrote:
> > >>>This is not useful at all, because the objective of this exercise is to
> > >>>downgrade locks, from exclusive row locking (SELECT ... FOR UPDATE) to
> > >>>shared row locking.
On Sat, 18 Dec 2004, Bruce Momjian wrote:
> Tom Lane wrote:
> > Bruce Momjian <[EMAIL PROTECTED]> writes:
> > > You mean all empty/zero rows can be removed? Can we guarantee that on
> > > commit we can clean up the bitmap? If not the idea doesn't work.
> >
> > For whatever data structure we use, we may reset the structure to empty
> > during backend-crash recovery. So your obj
On Sun, Dec 19, 2004 at 11:35:02PM +0200, Heikki Linnakangas wrote:
> On Sun, 19 Dec 2004, Tom Lane wrote:
>
> >Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> >>On Sun, 19 Dec 2004, Alvaro Herrera wrote:
> >>>This is not useful at all, because the objective of this exercise is to
> >>>downgrade locks, from exclusive row locking (SELECT ... FOR UPDATE) to
> >>>shared row locking.
Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> > On Sun, 19 Dec 2004, Alvaro Herrera wrote:
> >> This is not useful at all, because the objective of this exercise is to
> >> downgrade locks, from exclusive row locking (SELECT ... FOR UPDATE) to
> >> shared row locking.
>
> > Ac
On Sun, 2004-12-19 at 18:31, Alvaro Herrera wrote:
> Thanks for your ideas anyway. And keep having them!
No problem. Just giving some info on what works and doesn't work in
other implementations.
--
Best Regards, Simon Riggs
On Sun, 19 Dec 2004, Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> On Sun, 19 Dec 2004, Alvaro Herrera wrote:
>>> This is not useful at all, because the objective of this exercise is to
>>> downgrade locks, from exclusive row locking (SELECT ... FOR UPDATE) to
>>> shared row locking.
Actually
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> On Sun, 19 Dec 2004, Alvaro Herrera wrote:
>> This is not useful at all, because the objective of this exercise is to
>> downgrade locks, from exclusive row locking (SELECT ... FOR UPDATE) to
>> shared row locking.
> Actually it might help in some s
On Sun, 19 Dec 2004, Alvaro Herrera wrote:
> On Sun, Dec 19, 2004 at 09:52:01AM +, Simon Riggs wrote:
> Simon,
>> In similar circumstances, DB2 uses these techniques:
>> - when locktable X % full, then escalate locks to full table locks: both
>> locktable memory and threshold% are instance parameters
> This is not useful at all, because the objective of this exercise is to
> downgrade locks, from exclusive row locking (SELECT ... FOR UPDATE) to
> shared row locking.
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> On Sun, Dec 19, 2004 at 09:52:01AM +, Simon Riggs wrote:
>> - use a lock mode called Cursor Stability that locks only those rows
>> currently being examined by a cursor, those maintaining the lock usage
>> of a cursor at a constant level as the curso
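A toy sketch of the cursor-stability behaviour Simon describes: the cursor
keeps a lock only on the row it is currently positioned on, trading the old
lock for the new one as it advances, so lock usage stays constant no matter
how large the scan. The row-lock helpers are hypothetical:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical row-lock helpers; real ones would go through the lock manager. */
static void lock_row_shared(long row) { printf("S-lock row %ld\n", row); }
static void unlock_row(long row)      { printf("unlock row %ld\n", row); }

typedef struct
{
    long current_row;
    bool holds_lock;
} Cursor;

/* Move to the next row, trading the old row lock for the new one. */
static void
cursor_advance(Cursor *cur)
{
    if (cur->holds_lock)
        unlock_row(cur->current_row);
    cur->current_row++;
    lock_row_shared(cur->current_row);
    cur->holds_lock = true;
}

int
main(void)
{
    Cursor cur = {0, false};

    for (int i = 0; i < 3; i++)
        cursor_advance(&cur);   /* never holds more than one row lock */
    return 0;
}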
On Sun, Dec 19, 2004 at 09:52:01AM +, Simon Riggs wrote:
Simon,
> In similar circumstances, DB2 uses these techniques:
>
> - when locktable X % full, then escalate locks to full table locks: both
> locktable memory and threshold% are instance parameters
This is not useful at all, because the objective of this exercise is to
downgrade locks, from exclusive row locking (SELECT ... FOR UPDATE) to
shared row locking.
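For reference, the escalation rule quoted above boils down to a threshold
test against the lock table's fill level; a minimal sketch, with capacity and
percentage invented for illustration:

#include <stdbool.h>

#define LOCKTABLE_CAPACITY    10000  /* total lock slots (made-up figure) */
#define ESCALATE_THRESHOLD_PCT   80  /* the "X % full" instance parameter */

/* True when row locks on a table should be traded for one table lock. */
static bool
should_escalate(int nrowlocks)
{
    return (long) nrowlocks * 100 >=
           (long) LOCKTABLE_CAPACITY * ESCALATE_THRESHOLD_PCT;
}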
On Sun, 2004-12-19 at 04:04, Bruce Momjian wrote:
> Tom Lane wrote:
> > Bruce Momjian <[EMAIL PROTECTED]> writes:
> > > You mean all empty/zero rows can be removed? Can we guarantee that on
> > > commit we can clean up the bitmap? If not the idea doesn't work.
> >
> > For whatever data structure we use, we may reset the structure to empty
> > during backend-crash recovery. So your obj
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > You mean all empty/zero rows can be removed? Can we guarantee that on
> > commit we can clean up the bitmap? If not the idea doesn't work.
>
> For whatever data structure we use, we may reset the structure to empty
> during backend-crash recovery. So your obj
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Using a B-tree
>
> > At transaction end, nothing special happens (tuples are not unlocked
> > explicitly).
>
> I don't think that works, because there is no guarantee that an entry
> will get cleaned out before the XID counter wraps around. Worst case,
> you might
Bruce Momjian <[EMAIL PROTECTED]> writes:
> You mean all empty/zero rows can be removed? Can we guarantee that on
> commit we can clean up the bitmap? If not the idea doesn't work.
For whatever data structure we use, we may reset the structure to empty
during backend-crash recovery. So your obj
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Using a B-tree
> At transaction end, nothing special happens (tuples are not unlocked
> explicitly).
I don't think that works, because there is no guarantee that an entry
will get cleaned out before the XID counter wraps around. Worst case,
you might
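The wraparound hazard Tom raises is easiest to see in the circular XID
comparison the backend uses: XIDs are compared modulo 2^32, so after about
two billion transactions a never-cleaned lock entry flips from looking old to
looking like the future. A self-contained sketch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;

/* Circular comparison: a precedes b iff (a - b) is negative mod 2^32. */
static bool
transaction_id_precedes(TransactionId a, TransactionId b)
{
    return (int32_t) (a - b) < 0;
}

int
main(void)
{
    TransactionId old_xid = 1000;

    /* Fresh XID shortly afterwards: the old one correctly looks older. */
    printf("%d\n", transaction_id_precedes(old_xid, 2000));        /* 1 */

    /* ~2^31 transactions later: the stale entry now looks newer. */
    printf("%d\n", transaction_id_precedes(old_xid, 0x80000500u)); /* 0 */
    return 0;
}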
The SQL spec does not say anything in this respect (that I can find).
It only talks of "FOR UPDATE" and "FOR READ ONLY". However, because the
FK code uses SPI to do the locking, we definitely have to expose the
functionality through SQL. So I think we need a new clause, which I
propose to be "FOR
Alvaro Herrera wrote:
> The btree idea:
> - does not need crash recovery. Maybe we could use a stripped down
> version of nbtree. This could cause a maintainability nightmare.
Are you saying the btree is an index with no heap? If so, what about
the xid's? Are they just in the btree?
How does
Hi,
I've been thinking about how to do shared row locking. Here are some very
preliminary ideas on this issue. Please comment, particularly if any part
sounds unworkable or too incomplete.
There are several problems to be solved here: the grammar, the internal
SelectStmt representation, how