On Tue, May 31, 2011 at 7:03 PM, Greg Stark <gsst...@mit.edu> wrote:
> On Tue, May 31, 2011 at 1:07 PM, Alvaro Herrera
> <alvhe...@commandprompt.com> wrote:
>> Excerpts from Ross J. Reedstrom's message of mar may 31 14:02:04 -0400 2011:
>>
>>> Follows from one of the practical maxims of databases: "The data is
>>> always dirty."  Being able to have the constraints enforced for new
>>> data at least lets you fence off the bad data and have a shot at
>>> fixing it all.
>>
>> Interesting point of view.  I have to admit that I didn't realize I was
>> allowing that, even though I have wished for it in the past myself.
>
> What happens when there's bad data that the new transaction touches in
> some minor way?  For example, updating some other column of the row, or
> just locking the row?
Updating some other column should fail unless the constraint is
satisfied for the resulting row, I think.  The rule should be simple
and easy to understand: old row versions aren't checked, but new ones
must satisfy all constraints, whether validated or not (see the first
sketch below).

There's no question that this feature has a certain amount of foot-gun
potential, but it's also really useful, and there are plenty of people
who know how to use a gun safely without shooting themselves in the
foot.  We shouldn't aim for the lowest common denominator.

> What about things like cluster or table rewrites?
>
> Also I think NOT NULL might be used in the join elimination patch.
> Make sure it understands the "valid" flag and doesn't drop joins that
> are still needed.  It would be nice to have this for unique
> constraints as well, which would *definitely* need the planner to
> understand whether they're valid or not.

Yeah.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
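
To make the rule above concrete, here is a minimal sketch, assuming
NOT VALID support for CHECK constraints as discussed in this thread;
the table and constraint names are hypothetical:

    -- Existing data may violate the constraint we're about to add.
    CREATE TABLE t (id int PRIMARY KEY, x int, note text);
    INSERT INTO t VALUES (1, -5, 'legacy row'), (2, 10, 'clean row');

    -- NOT VALID skips the scan of existing rows, so the legacy row
    -- is tolerated for now.
    ALTER TABLE t ADD CONSTRAINT x_positive CHECK (x > 0) NOT VALID;

    -- New row versions are checked even though the constraint hasn't
    -- been validated: updating an unrelated column of the legacy row
    -- produces a new row version with x = -5, which fails the check.
    UPDATE t SET note = 'touched' WHERE id = 1;
    -- ERROR: new row for relation "t" violates check constraint "x_positive"

    -- Once the bad data is repaired, validating the constraint scans
    -- the table and marks the constraint valid.
    UPDATE t SET x = 5 WHERE id = 1;
    ALTER TABLE t VALIDATE CONSTRAINT x_positive;

This is exactly the "fence the bad data" behavior Ross describes: the
fence goes up immediately for new writes, while the existing mess can
be cleaned up and validated later.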
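
And a sketch of Greg's planner point, using the existing left-join
removal as the illustration.  The tables are hypothetical, and since
unique constraints cannot currently be marked NOT VALID, the second
half is speculative:

    CREATE TABLE a (id int);
    CREATE TABLE b (id int UNIQUE);

    -- Because b.id is provably unique and no column of b is used in
    -- the output, the planner can drop the join and just scan a.
    EXPLAIN SELECT a.id FROM a LEFT JOIN b ON a.id = b.id;

    -- If the uniqueness guarantee came from a constraint marked NOT
    -- VALID, existing rows in b might contain duplicates, so removing
    -- the join could change the result (rows of a appearing twice).
    -- Any such optimization would have to check the valid flag first.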