On May 13, 2011, at 2:07 AM, Alexey Klyukin wrote:
> On May 13, 2011, at 1:28 AM, Tom Lane wrote:
>
>>
>> We're not likely to do that, first because it's randomly different from
>> the handling of every other system catalog update, and second because it
>> would serialize all updates on this ca
Is this a TODO? I don't see it on the TODO list.
---
Robert Haas wrote:
> On Fri, May 13, 2011 at 12:56 AM, Tom Lane wrote:
> > BTW, I thought a bit more about why I didn't like the initial proposal
> > in this thread, and
On Fri, May 13, 2011 at 12:56 AM, Tom Lane wrote:
> BTW, I thought a bit more about why I didn't like the initial proposal
> in this thread, and the basic objection is this: the AccessShareLock or
> RowExclusiveLock we take on the catalog is not meant to provide any
> serialization of operations o
Robert Haas writes:
> On Thu, May 12, 2011 at 6:59 PM, Tom Lane wrote:
>> I didn't say it was ;-). What I *am* saying is that if we're going to
>> do anything about this sort of problem, there needs to be a
>> well-considered system-wide plan. Arbitrarily changing the locking
>> rules for indiv
On Thu, May 12, 2011 at 6:59 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, May 12, 2011 at 6:28 PM, Tom Lane wrote:
>>> We're not likely to do that, first because it's randomly different from
>>> the handling of every other system catalog update,
>
>> We have very robust locking of this t
On May 13, 2011, at 1:28 AM, Tom Lane wrote:
> Alexey Klyukin writes:
>> After digging in the code I've found that a RowExclusiveLock is acquired on
>> the pg_db_role_setting table in AlterSetting(). While the name of the lock
>> suggests that it should conflict with itself, it doesn't. After I
Robert Haas writes:
> On Thu, May 12, 2011 at 6:28 PM, Tom Lane wrote:
>> We're not likely to do that, first because it's randomly different from
>> the handling of every other system catalog update,
> We have very robust locking of this type for table-related DDL
> operations and just about non
On Thu, May 12, 2011 at 6:28 PM, Tom Lane wrote:
> Alexey Klyukin writes:
>> After digging in the code I've found that a RowExclusiveLock is acquired on
>> the pg_db_role_setting table in AlterSetting(). While the name of the lock
>> suggests that it should conflict with itself, it doesn't. Afte
Alexey Klyukin writes:
> After digging in the code I've found that a RowExclusiveLock is acquired on
> the pg_db_role_setting table in AlterSetting(). While the name of the lock
> suggests that it should conflict with itself, it doesn't. After I've replaced
> the lock in question with ShareUpdat
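A quick way to see the lock-level difference being described here (a sketch using explicit LOCK TABLE from two superuser sessions; not part of the original report):

-- session 1
begin;
lock table pg_db_role_setting in row exclusive mode;

-- session 2: returns immediately, because ROW EXCLUSIVE does not conflict with itself
begin;
lock table pg_db_role_setting in row exclusive mode;

-- with SHARE UPDATE EXCLUSIVE instead, session 2 would wait for session 1,
-- which is the serialization the proposed change relies on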
Hello,
We have recently observed a problem with concurrent execution of ALTER ROLE
SET... in several sessions. It's similar to the one from
http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=fbcf4b92aa64d4577bcf25925b055316b978744a
The result is the 'tuple concurrently updated' er
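A minimal sketch of the collision (role name and settings are made up for illustration; it assumes a per-role setting already exists, so both commands update the same pg_db_role_setting row):

-- setup
alter role testuser set work_mem = '4MB';

-- session 1
begin;
alter role testuser set work_mem = '10MB';

-- session 2, before session 1 commits: nothing blocks it at the table level,
-- it waits on the modified tuple and, once session 1 commits, fails with
-- ERROR:  tuple concurrently updated
alter role testuser set work_mem = '20MB';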
Simon Riggs wrote:
> On Wed, 2007-07-11 at 16:49 -0700, Joe Conway wrote:
>> On cvs head, I can get "tuple concurrently updated" if two separate
>> transactions are both trying to drop the same index:
>> ERROR: tuple concurrently updated
>> The reason I ask is that someone contacted me who is seeing
On Wed, 2007-07-11 at 16:49 -0700, Joe Conway wrote:
> On cvs head, I can get "tuple concurrently updated" if two separate
> transactions are both trying to drop the same index:
> ERROR: tuple concurrently updated
> The reason I ask is that someone contacted me who is seeing this on a
> pro
On 7/11/07, Tom Lane <[EMAIL PROTECTED]> wrote:
Notice that recursiveDeletion() tries to clean out pg_depend before
it actually deletes the target object, and in the current code that
object-specific subroutine is the only thing that takes any sort of lock.
In the past 4-6 months, we've seen 4
Joe Conway <[EMAIL PROTECTED]> writes:
> On cvs head, I can get "tuple concurrently updated" if two separate
> transactions are both trying to drop the same index:
This seems related to the discussions we had a while back about how
deletion needs to take locks *before* it starts doing anything.
ht
On cvs head, I can get "tuple concurrently updated" if two separate
transactions are both trying to drop the same index:
8<
contrib_regression=# create table t(f1 int);
CREATE TABLE
contrib_regression=# create index idx1 on t(f1);
CREATE INDEX
contrib_regre
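The transcript is cut off above; a sketch of the concurrent part of such a test (session labels are mine, not from the original message):

-- session 1
contrib_regression=# begin;
BEGIN
contrib_regression=# drop index idx1;
DROP INDEX

-- session 2, while session 1's transaction is still open
contrib_regression=# drop index idx1;

-- once session 1 commits, session 2 fails with the error reported above:
-- ERROR:  tuple concurrently updated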
On Sun, 28 Jul 2002, Tom Lane wrote:
> Other than the fact that the second CREATE INDEX fails and rolls back,
> there's no problem ;-)
Agh!
So what, in the current version of postgres, are my options for
doing parallel index builds?
cjs
--
Curt Sampson <[EMAIL PROTECTED]> +81 90 7737 2974
Curt Sampson <[EMAIL PROTECTED]> writes:
> On Thu, 25 Jul 2002, Tom Lane wrote:
>> "Kangmo, Kim" <[EMAIL PROTECTED]> writes:
> If the indexes are on the same class,
> two concurrent CREATE INDEX commands can update pg_class.relpages
> at the same time.
>>
>> Or try to, anyway. The problem here is that
On Thu, 25 Jul 2002, Tom Lane wrote:
> "Kangmo, Kim" <[EMAIL PROTECTED]> writes:
> > If the indexes are on the same class,
> > two concurrent CREATE INDEX commands can update pg_class.relpages
> > at the same time.
>
> Or try to, anyway. The problem here is that the code that updates
> system catalogs
"Kangmo, Kim" <[EMAIL PROTECTED]> writes:
> If the indexes are on the same class,
> two concurrent CREATE INDEX commands can update pg_class.relpages
> at the same time.
Or try to, anyway. The problem here is that the code that updates
system catalogs is not prepared to cope with concurrent updates
to
A solution to this problem is not versioning catalog tables but updating them
in place.
Of course another transaction that wants to update the same row in the
catalog table should wait,
which leads to bad concurrency.
But this problem can be solved by committing every DDL right after its
execution s
I guess two transactions updated a tuple concurrently.
Because the versioning scheme allows old versions to be
read by another transaction, the old version can be updated too.
For example,
we have a row whose value is 1:
create table t1 (i1 integer);
insert into t1 values(1);
And,
Tx1 executes up
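The example is cut off above; the scenario being described is presumably along these lines (Tx labels and the later values are illustrative):

create table t1 (i1 integer);
insert into t1 values(1);

-- Tx1
begin;
update t1 set i1 = 2;    -- creates a new version of the row

-- Tx2, concurrently
begin;
select i1 from t1;       -- still sees the old version, i1 = 1
update t1 set i1 = 3;    -- blocks until Tx1 commits; an ordinary table copes
                         -- with this, but the catalog-update path gives up with
                         -- ERROR:  tuple concurrently updated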
One of my machines has two CPUs, and in some cases I build a pair
of indexes in parallel to take advantage of this. (I can't seem
to do an ALTER TABLE ADD PRIMARY KEY in parallel with an index
build, though.) Recently, though, I received the message "ERROR:
simple_heap_update: tuple concurrently
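A sketch of the kind of parallel build being described (table and index names are invented):

-- session 1
create index t_a_idx on big_table (a);

-- session 2, started at about the same time
create index t_b_idx on big_table (b);

-- both commands finish by updating big_table's pg_class row (relpages), so,
-- as reported above, one of them can fail with
-- ERROR:  simple_heap_update: tuple concurrently updated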