On 9/4/17, 8:16 PM, "Michael Paquier" <michael.paqu...@gmail.com> wrote:
> So vacuum_multiple_tables_v14.patch is good for a committer in my
> opinion. With this patch, if the same relation is specified multiple
> times, then it gets vacuum'ed that many times. Using the same column
> multi-times results in an error as on HEAD, but that's not a new
> problem with this patch.
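(For anyone following along, a rough sketch of the semantics described above, with a hypothetical table "foo" and column "a"; the exact error text is not quoted here:)

```sql
-- With the patch applied, listing a relation twice vacuums it twice:
VACUUM foo, foo;

-- Listing the same column twice in one column list raises an error,
-- both on HEAD and with the patch (a column list requires ANALYZE):
VACUUM ANALYZE foo (a, a);
```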
Thanks!

> So I would tend to think that the same column specified multiple times
> should cause an error, and that we could let VACUUM run work N times
> on a relation if it is specified this much. This feels more natural,
> at least to me, and it keeps the code simple.

I think that is a reasonable approach. Another option I was considering
was to de-duplicate only the individual column lists. That alternative
might be a bit more user-friendly, but I am beginning to agree with you
that we should not try to infer the user's intent in these "duplicate"
scenarios. I'll work on converting the existing de-duplication patch
into something more like what you suggested.

Nathan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers