Given the message below: if I fork a process inside a content filter (say, in Python or Perl) so I can return the message to postfix faster (and end the content pipe quickly with a success exit code), will this have any impact on postfix?
> We have a custom content filter in place. During the execution of this
> filter we create a set of files, per message, for the purpose of being
> processed after the filter is finished. The goal in that was to get the
> mail back into postfix ASAP.
>
> In the background we have a cronjob that goes through the sets of files
> that are created via the filter. It is pretty inefficient as it
> processes these files one at a time in sequential order. We are in the
> process of streamlining that and have created an application to process
> the sets one at a time, given the set filename, so we can run these in
> parallel. This cronjob runs on a separate server to reduce load on the
> postfix boxes. The problem with the cronjob, or any of the processing
> jobs in general, is that this is all on an NFS cluster and we spend most
> of the disk time searching folders for files to be processed.
>
> What I would ideally like to do is to call the new pipeline at the end
> of the content filter as a background process. I had first intended to
> just do "process.sh > /dev/null &" in order to make it a background
> process. Alternatively I could issue a fork inside the processing
> application and call it like a normal program. I'm not sure what impact
> either of these will have on postfix, since it's kicked off from
> postfix. If the process that is kicked off fails, I still have backup
> cronjobs that walk the file system. The process we are talking about
> here is a TCP connection to a separate server that will listen; it may
> have a delayed backlog, but it shouldn't take any real CPU and only
> limited memory, just time.
>
> Given this, I'm not sure I should even attempt to do it at the end of
> our filter. Anyone have any thoughts on this approach?
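
For concreteness, this is roughly what I have in mind for the fork approach. It is an untested Python sketch, not our actual filter: the spool directory, the process.sh path, and the file-set writing are stand-ins for the real logic. The idea is to double-fork the slow pipeline (setsid, stdio pointed at /dev/null so nothing inherited from postfix is held open) and then exit 0 immediately so the pipe daemon isn't left waiting on the background job.

#!/usr/bin/env python3
# Rough sketch only; paths and file-set handling are placeholders.
import os
import sys
import subprocess
import tempfile

SPOOL_DIR = "/var/spool/filter"          # placeholder spool location
PIPELINE = "/usr/local/bin/process.sh"   # placeholder pipeline script

def spawn_detached(cmd):
    """Double-fork so the grandchild is reparented to init and we never
    wait on it; redirect stdio to /dev/null so no descriptor inherited
    from postfix's pipe daemon stays open."""
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)       # reap the intermediate child right away
        return
    os.setsid()                  # first child: new session, no controlling tty
    if os.fork() > 0:
        os._exit(0)              # first child exits; grandchild is orphaned
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)     # detach stdio from the postfix pipe
    subprocess.call(cmd)         # grandchild runs the slow pipeline
    os._exit(0)

def main():
    message = sys.stdin.buffer.read()
    # Stand-in for the existing per-message file-set creation.
    fd, set_path = tempfile.mkstemp(dir=SPOOL_DIR)
    with os.fdopen(fd, "wb") as f:
        f.write(message)
    spawn_detached([PIPELINE, set_path])
    sys.exit(0)                  # success exit code: hand the mail back to postfix

if __name__ == "__main__":
    main()

If the detached job dies for any reason, the existing backup cronjob would still pick the file set up off the filesystem, so nothing should be lost; the fork is only an attempt to get the common case processed immediately instead of waiting for the cron sweep.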