Hi,

On 2018-09-05 01:05:54 -0400, Tom Lane wrote:
> Andres Freund <and...@anarazel.de> writes:
> > On September 4, 2018 9:11:25 PM PDT, Tom Lane <t...@sss.pgh.pa.us> wrote:
> >> I think that line of thought leads to an enormous increase in locking
> >> overhead, for which we'd get little if any gain in usability.  So my
> >> inclination is to make an engineering judgment that we won't fix this.
>
> > Haven't we already significantly started down this road, to avoid a lot of
> > the "tuple concurrently updated" type errors?
>
> Not that I'm aware of.  We do not take locks on schemas, nor functions,
> nor any other of the object types I mentioned.
Well, we kinda do, during some of their own DDL. Cf. AcquireDeletionLock(),
RangeVarGetAndCheckCreationNamespace(), and other LockDatabaseObject()
callers. RangeVarGetAndCheckCreationNamespace() even locks the schema an
object is created in, which is pretty much what we're discussing here.

I think the problem with the current logic is more that the
findDependentObjects() scan doesn't use a "dirty" scan, so it never sees
conflicting in-progress operations.

> > Would expanding this a bit further really be that noticeable?
>
> Frankly, I think it would be not so much "noticeable" as "disastrous".
>
> Making the overhead tolerable would require very large compromises
> in coverage, perhaps like "we'll only lock during DDL not DML".
> At which point I'd question why bother.  We've seen no field reports
> (that I can recall offhand, anyway) that trace to not locking these
> objects.

Why would "we'll only lock during DDL not DML" be such a large compromise?
To me that's a pretty darn reasonable subset - preventing corruption of the
catalog contents is, uh, good?

Greetings,

Andres Freund
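[Editor's note: a minimal sketch of the "dirty scan" idea referenced above. The catalog/index/attribute names (pg_depend, DependReferenceIndexId, Anum_pg_depend_ref*) and the scan and snapshot APIs (heap_open, InitDirtySnapshot, systable_beginscan, XactLockTableWait) are real PostgreSQL v11-era internals; the surrounding loop and the wait-then-recheck step are illustrative assumptions, not the actual findDependentObjects() code.]

```c
/*
 * Sketch: scanning pg_depend with a dirty snapshot instead of the
 * usual catalog MVCC snapshot, so rows inserted or deleted by
 * in-progress concurrent transactions become visible.
 */
SnapshotData DirtySnapshot;
ScanKeyData key[2];
SysScanDesc scan;
HeapTuple	tup;
Relation	depRel = heap_open(DependRelationId, AccessShareLock);

InitDirtySnapshot(DirtySnapshot);

ScanKeyInit(&key[0],
			Anum_pg_depend_refclassid,
			BTEqualStrategyNumber, F_OIDEQ,
			ObjectIdGetDatum(refclassId));	/* refclassId: caller-supplied */
ScanKeyInit(&key[1],
			Anum_pg_depend_refobjid,
			BTEqualStrategyNumber, F_OIDEQ,
			ObjectIdGetDatum(refobjectId));	/* refobjectId: caller-supplied */

/*
 * Passing &DirtySnapshot rather than NULL is the crux: an MVCC catalog
 * snapshot simply never returns tuples from uncommitted concurrent
 * transactions, so the scan cannot notice a conflicting operation.
 */
scan = systable_beginscan(depRel, DependReferenceIndexId, true,
						  &DirtySnapshot, 2, key);

while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
	/*
	 * If the tuple was inserted by a still-running transaction, the dirty
	 * snapshot reports its xid; one plausible policy (an assumption here)
	 * is to wait for that transaction and then recheck.
	 */
	if (TransactionIdIsValid(DirtySnapshot.xmin))
		XactLockTableWait(DirtySnapshot.xmin, NULL, NULL, XLTW_None);

	/* ... recheck visibility and process the dependency ... */
}

systable_endscan(scan);
heap_close(depRel, AccessShareLock);
```

This is backend-internal code and only compiles inside the server tree; it is meant to show where the snapshot choice enters, not to be a patch.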