------------------------------------------------

On Thu, 28 Aug 2003 23:19:21 +0300, "Octavian Rasnita" <[EMAIL PROTECTED]> wrote:
> But why forking more processes?

Right, but the same can be asked for the below...

> The cgi program might check which of the files need to be deleted, then
> create a temporary lock file, then it could fork a process that will
> delete those files.
> The next visitor will come and execute the same cgi script, but it will
> see the lock file and it won't delete any file.

Why have the scripts check for a lock file, etc., when their job really
isn't maintenance of the system? Their job is passing back HTML-like
stuff - especially on high-traffic sites.

> In fact, if the web site has many visitors, the check could be put in a
> script which is not so often executed by all visitors.
> Or that script could check and start deleting files only after a period
> of time, let's say... 10 minutes, 1 hour... etc.

But then you are back to a "cron-like" system, only a random one, because
it still depends on a user appearing, which is unpredictable.

I still hold that a scheduler is the best way to do this type of thing,
unless the time taken to recompile/reinterpret the script is significant
(aka the schedule is so frequent that it is cheaper to leave it in
memory). But it still comes back to the design issue of why you should
need to do this anyway (at least for a website)...

http://danconia.org

> ----- Original Message -----
> From: "drieux" <[EMAIL PROTECTED]>
> To: "cgi cgi-list" <[EMAIL PROTECTED]>
> Sent: Thursday, August 28, 2003 6:58 PM
> Subject: Re: automated file removal / cache clearing
>
>
> On Wednesday, Aug 27, 2003, at 14:22 US/Pacific, Octavian Rasnita wrote:
>
> > Or if you don't want to depend on Unix's cron and want your program to
> > do everything, you can set it up so that each time a new visitor comes
> > to your site, it checks which files are not needed and deletes them.
> > You can use fork so that visitors don't have to wait while the program
> > does its background job.
> [..]
>
> at first blush that CAN seem to be an interesting
> idea - but in the worst case one can have N connections,
> each of which has generated N forked children to
> walk through M possible files... and one starts
> asking oneself,
>
>     is this an order N squared or N factorial solution?
>
> while in the worst case the cron job based solution
> is merely an order N problem...

I am glad someone put this into easily understood terms; I was thinking
the same thing and couldn't come up with a compact way of saying it
:-)...
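
P.S. For the archives, here is roughly what the scheduler approach boils
down to - just a sketch, with an invented cache directory and a one-hour
cutoff, not anybody's production code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Minimal cache sweeper: the directory and the one-hour cutoff
    # below are made up for the example - adjust both to taste.
    my $cache_dir = '/var/www/cache';

    opendir my $dh, $cache_dir or die "Can't open $cache_dir: $!";
    while ( defined( my $name = readdir $dh ) ) {
        my $path = "$cache_dir/$name";
        next unless -f $path;    # skips '.', '..' and subdirectories

        # -M gives the file's age in days; 1/24 of a day is one hour
        if ( -M $path > 1/24 ) {
            unlink $path or warn "Can't unlink $path: $!";
        }
    }
    closedir $dh;

Then one crontab entry (every 10 minutes here, matching the interval
suggested above, and a path I made up for the example) replaces all the
lock-file bookkeeping:

    */10 * * * * /usr/local/bin/clean_cache.pl

and the CGI scripts go back to doing nothing but CGI.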