>>> On Wed, Feb 20, 2008 at 5:02 PM, in message <[EMAIL PROTECTED]>, "Josh Berkus" <[EMAIL PROTECTED]> wrote:
> imagine you're adminning 250 PostgreSQL servers backing a
> social networking application. You decide the application needs a
> higher default sort_mem for all new connections, on all 250 servers.
> How, exactly, do you deploy that?
>
> Worse, imagine you're an ISP and you have 250 *differently configured*
> PostgreSQL servers on vhosts, and you need to roll out a change in
> logging destination to all machines while leaving other settings
> untouched.

We handle this by having a common postgresql.conf file with an include at the end for an override file; we can push out a new default everywhere using scp and ssh. (No, we don't run production databases on Windows.)

For central machines (those we don't need to reach over the WAN), I've occasionally wished for the ability to reconfigure through a database connection, but the total time saved by such a feature probably would not amount to the time required to read through its syntax definition.

Regarding other database products: I know that Sybase reads its configuration file at startup or when a RECONFIGURE command is issued. A stored procedure, sp_configure, allows changes through a database connection, or the file can be edited directly. When sp_configure is used to make a change, the old file is renamed with a numeric suffix, keeping a history of configurations. Lines are not commented out; DEFAULT is used for values without an override. Running sp_configure without a value shows the existing settings, with columns for the default value, the current configuration file value, the value currently in effect (since you might not have issued the RECONFIGURE, or it might be a startup-only setting), and the RAM required to support the configured value.

-Kevin
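The common-file-plus-override layout described above can be sketched roughly as follows. File names, paths, and the hosts.txt inventory file are illustrative assumptions, not anything from the original message; the `include` directive itself is a real postgresql.conf feature, and per-host settings win because the override file is read last.

```shell
#!/bin/sh
# Build the shared config that gets pushed to every server.
cat > postgresql.conf <<'EOF'
# common settings, identical on all 250 servers
sort_mem = 32768
log_destination = 'syslog'
# per-host differences live here; read last, so they override the above
include 'override.conf'
EOF

# A host-local override file (normally maintained on each server, not pushed).
cat > override.conf <<'EOF'
sort_mem = 65536
EOF

# The push itself (commented out here since it needs real hosts):
# for host in $(cat hosts.txt); do
#     scp postgresql.conf "postgres@$host:/var/lib/pgsql/data/"
#     ssh "postgres@$host" "pg_ctl reload -D /var/lib/pgsql/data"
# done
```

Reloadable settings take effect on the `pg_ctl reload` (SIGHUP); startup-only settings still require a restart.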
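The numeric-suffix history scheme described for Sybase can be sketched generically in shell. This is an illustration of the idea, not Sybase's actual implementation; the file name and function name are made up for the example.

```shell
#!/bin/sh
# Keep a history of config versions by copying the current file to the
# next free numeric suffix before each change (as sp_configure does).
rotate_config() {
    f="$1"
    n=1
    while [ -e "$f.$n" ]; do n=$((n + 1)); done
    cp "$f" "$f.$n"     # e.g. server.cfg -> server.cfg.1, then .2, ...
}

echo "sort_mem = DEFAULT" > server.cfg
rotate_config server.cfg            # preserves the old version as server.cfg.1
echo "sort_mem = 65536" > server.cfg
rotate_config server.cfg            # preserves that version as server.cfg.2
```

Rolling back to any earlier configuration is then just copying the suffixed file back into place.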