On Mon, Feb 22, 2021 at 02:32:59PM -0700, Bob Proulx wrote:

> Viktor Dukhovni wrote:
> > If it is not an emergency, and it was working fine before the change,
> > generally best to let the change take place incrementally.  You can
> > reduce the latency by reducing $max_idle to ~5s and perhaps take
> > $max_use down to ~20 from 100.
>
> This touches upon something that I have never understood very well.
> And I think it explains behavior I have seen that has confused me and
> led me to think that something different was happening.  And it all
> centers on when one must reload, or restart, or do nothing.
>
> If I change the main.cf file then I think it is an _of_course_ that I
> must then "postfix reload" for the change to take effect.  All good.

Actually, you often don't need to reload then either.  Reload is only
required when your configuration changes are relevant to the
long-running Postfix processes, e.g. on my system:

      PID    ELAPSED  PPID COMMAND
    46433 8-14:11:17     1 /usr/local/libexec/postfix/master -w
    41540 5-16:24:42 46433 qmgr -l -t unix -u
    41545 5-16:24:37 46433 tlsmgr -l -t unix -u
    97404      26:56 46433 pickup -l -t unix -u

Otherwise, routine main.cf changes are picked up automatically as
short-lived processes (smtpd(8), cleanup(8), delivery agents, ...)
respawn.

> But here is the case where confusion happens.  A mail server of mine
> has eleven db files of the standard "default_database_type = hash"
> type of files on disk.  (I use a Makefile to handle updates.)
>
> Let's say I change one of those tables, run postmap, and then the most
> natural thing in the world is to *immediately* test the result of the
> change.  It seems to appear as if updating the db files is not
> sufficient to cause this to happen immediately.

Correct, some extant smtpd(8) or cleanup(8) process may handle your
impatient request.

> And therefore I think, should I be reloading afterward?  Should I be
> restarting?

Patience is rumoured to be a virtue, but for the impatient...

> Of course if I restart then the change is pushed through.
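For reference, a listing like the one above can be reproduced with ps;
the exact flag spelling here is an assumption (BSD and GNU ps differ
slightly), and it is run against the current shell only so that it
prints something on any system:

```shell
# Show PID, elapsed time, parent PID, and command line, in the same
# columns as the listing above.  '-p $$' targets the current shell as
# a stand-in; on a live Postfix box substitute the PIDs of master(8)
# and its long-running children (qmgr, tlsmgr, pickup) instead.
ps -o pid,etime,ppid,args -p $$
```

The long ELAPSED times of qmgr/tlsmgr are what make them the processes
a "postfix reload" actually matters for.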
> And I have never quite understood reload in this context.  Is reload
> enough?  Since reload seems to be needed, because it doesn't always
> work immediately otherwise.  It's this behavior that has trained
> people like me and probably others that more is needed.

Reload is a graceful restart of just the worker processes, at the
completion of any current "request", and before accepting new work
(including new TCP connections, ...).

> And then I read the above and I see the comments.  Here my values are
> defaulting to these.
>
>     $ postconf max_idle max_use
>     max_idle = 100s
>     max_use = 100
>
> Everything works so well generally that I have been blissfully unaware
> of how this part of the machinery works.  But now I see that after
> updating tables the running daemons will still be attached to the
> previous data of that file for 100 seconds.

Actually, not necessarily: with indexed file databases Postfix will
generally be able to detect that the file has changed, and will
automatically emulate a "reload" of just the processes using that
database, and not e.g. the queue manager (which generally does not
directly open/read any external database files).

> Perhaps expiring sooner if they hit 100 uses first.  And therefore
> should I be learning that after updating tables I should wait
> $max_idle seconds, 100 seconds by default (let's say 2 minutes),
> before testing?  Is that correct?

Well, $max_idle * $max_use is the upper bound (9999 seconds) if
connections are spaced 99.99s apart. :-)

> Is there a difference if the tables are backed by a MySQL or Postgres
> database?

Changes in *SQL and LDAP are observed immediately; Postfix does not
have any access to stale data from these sources.

> Do I need to be aware of which tables are used by which daemon in this
> decision making flow?

Mostly not.
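To make that bound concrete, here is the arithmetic as a sketch, using
the default settings quoted above (max_idle=100s, max_use=100):

```shell
max_idle=100   # seconds, default max_idle
max_use=100    # requests, default max_use
# A daemon exits after max_use requests, and requests spaced just
# under max_idle seconds apart keep it alive the longest, so the
# worst-case time a process can keep serving stale flat-file data is:
echo $(( max_idle * max_use ))   # 10000 seconds (2h46m40s)
```

With connections spaced exactly 99.99s apart the bound works out to
the 9999 seconds mentioned above; in practice the delay is far shorter.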
The main thing to keep in mind is that delayed visibility is primarily
a feature of non-database tables:

  - PCRE tables
  - CIDR tables
  - texthash tables
  - one-entry-per-line external file match lists

That is, the stuff that Postfix loads into memory, rather than queries
as a database.

> Just knowing that table updates require a $max_idle maximum TTL before
> ensuring their effect (if that is correct) before testing the change
> would be good learning.  I could probably avoid reloads entirely for
> table updates moving forward in this case.  And then only reload for
> main.cf and master.cf file updates.

- For *SQL and LDAP, change is immediate.
- For indexed tables, change is visible with each new client connection.
- For flat files (CIDR, PCRE, ..., main.cf), the worst case is
  $max_idle times $max_use, but typically processes are either busy,
  and so take only the time needed to process $max_use requests, or
  else idle, and exit after $max_idle.  It is rare to see many widely
  spaced connections that are just under $max_idle apart.

-- 
    Viktor.
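The summary above can be sketched as a small lookup table; the
function name and the grouping of table types are illustrative only,
not any Postfix interface:

```shell
# Rough worst-case delay before a change to a given table type is
# visible to Postfix, per the summary above (illustrative sketch).
visibility_delay() {
  case "$1" in
    mysql|pgsql|ldap)          echo "immediate" ;;
    hash|btree|lmdb|cdb)       echo "next client connection" ;;
    cidr|pcre|regexp|texthash) echo "up to max_idle * max_use seconds" ;;
    *)                         echo "unknown table type" ;;
  esac
}
visibility_delay pcre   # prints: up to max_idle * max_use seconds
```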