Tom Lane <[EMAIL PROTECTED]> writes:
> "Philippe Lang" <[EMAIL PROTECTED]> writes:
>
> > Another solution would be to use cron every 5 minutes, and read the
> > content of a table.
>
> This would probably be better because the cron job could only see the
> results of committed transactions. The failure mode in this case is
> that the same mail could be sent more than once (if the cron job fails
> between sending the mail and committing its update that deletes the
> entry in the pending-mails table). But you'd not be wondering why you
> got mail that seems not to be related to anything visible in the
> database.
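For concreteness, the cron job side of the pending-mails approach Tom describes
might look roughly like this. This is only a sketch: the pending_emails table,
its columns, the connection string, and the use of psycopg2/smtplib are
illustrative assumptions, not details from any actual system.

    import smtplib
    from email.message import EmailMessage

    import psycopg2  # assumption: any Postgres client library works the same way

    # Table name, columns, and connection details are illustrative only.
    conn = psycopg2.connect("dbname=app")
    cur = conn.cursor()

    # The cron job only ever sees rows written by committed transactions.
    cur.execute("SELECT id, recipient, subject, body FROM pending_emails ORDER BY id")
    pending = cur.fetchall()

    for mail_id, recipient, subject, body in pending:
        msg = EmailMessage()
        msg["From"] = "app@example.com"
        msg["To"] = recipient
        msg["Subject"] = subject
        msg.set_content(body)

        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

        # Delete and commit only after the mail has been handed to the MTA.
        # If the job dies between send_message() and this commit, the row
        # survives and the mail goes out again on the next run: duplicates
        # are possible, but nothing is silently lost.
        cur.execute("DELETE FROM pending_emails WHERE id = %s", (mail_id,))
        conn.commit()

    conn.close()

Swapping the order (delete and commit first, then send) flips the failure mode
the other way, which is the trade-off discussed below.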
I have experience with a system that was implemented this way, and we found it
was a *huge* win. Mail is often subject to major problems caused by
circumstances outside your control. If AOL is unreachable, you suddenly have a
crisis as your mail spool fills up and your MTAs become slow to respond...

Taking the mail generation out of the critical path of the application and
into a separate process proved an extremely robust approach. It let us shut
down mail generation while we emptied queues or reconfigured MTAs without
impacting the database or the application at all.

Incidentally, you can arrange things to fail in either direction. In our case,
if the cron job failed we would lose a batch of emails rather than generate
duplicates. I'm not sure whether failing by generating duplicates is as
convenient when scaling to multiple mail-generation processes.

--
greg