This paper has a brief but interesting discussion of Admission Control in section 2.4:

Architecture of a Database System. Joseph M. Hellerstein, Michael Stonebraker and James Hamilton. Foundations and Trends in Databases 1(2).
http://db.cs.berkeley.edu/papers/fntdb07-architecture.pdf

They describe a two-tier approach. The first tier is already effectively implemented in PostgreSQL through the max_connections and superuser_reserved_connections GUCs. The second tier runs after a plan is chosen, and may postpone execution of a query (or reduce the resources it is allowed) if starting it at that time might overload available resources.

I think that implementing something like this could help with several types of problems.

We often see posts from people who have more active connections than is efficient. We could, for example, have a policy which queues query requests which are *not* from a superuser and *not* part of a transaction which has already acquired a snapshot or any locks, whenever the number of active transactions is above a certain threshold. Properly configured, a policy like this might change the performance graph to stay relatively steady past the "knee" rather than degrading.

We occasionally see posts where people have exhausted available RAM, and suffered a severe performance hit or a crash, due to an excessively high setting of work_mem or maintenance_work_mem. A good policy might warn, reduce the setting, or reschedule execution to keep things from getting too far out of hand.

A good policy might also reduce conflicts between transactions, making stricter transaction isolation less painful. While that observation is what got me thinking about this, the feature seems potentially useful on its own.

It might also make sense to provide a hook to allow custom policies to supplement or override a simple default policy.

Thoughts?

-Kevin
-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers