A thousand thanks, John. At least I now know that what I am doing is correct.

"perl -d" is very useful for tracing errors, better than logging activities to a file. I have tested the script and found that the "while (! -e $result)" loop is not necessary: the flow is actually A to B, B to C, C to D, then D generates the result, D returns null to C, C returns null to B, B returns null to A, and then the open, print and unlink statements run.

What about fork? In such a case, would it be better, or would it just make more trouble? I have read the perldoc, but it is still not very clear to me. What does "zombie" mean? Don't those get removed if the script runs only once?
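To make sure I am asking about the right thing, here is a rough sketch of the fork-and-wait pattern I have in mind (handle_query() is just a made-up name, not anything from my real script):

# Rough sketch only -- handle_query() is an invented name.
my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child process: do the long-running work, then exit.
    handle_query($query, $result);
    exit 0;
}

# Parent process: reap the child with waitpid(), otherwise the
# finished child lingers as a "zombie" until the parent exits.
waitpid($pid, 0);

(In my real script the parent would then open and print the result file as before.)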
"perl -d" is very useful for tracing errors, better than log down activities. and I've test it and found that the "while (! -e $result) is not nesssary, the case is actually A to B, B to C, C to D, and D gen reult, then D ret null to C , C ret null to B, B return null to A, and then the open, print, unlink statements. What about fork ? In such case, would it be better ? or even making more trouble ? I've read about perldoc, but still not very clear, what are the zombie means ? doesn't those vars be removed if the script run once only ? Have a good day =) Connie ----- Original Message ----- From: "John Brooking" <[EMAIL PROTECTED]> To: "Connie Chan" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]> Sent: Friday, April 26, 2002 12:44 AM Subject: Re: Multi thread ? Programming Style ? > See comments below. > > --- Connie Chan <[EMAIL PROTECTED]> wrote: > > Now, I have a CGI script which using GET method to > > receive Query, > > and my script looking like this : > > > > ######################################## > > # C:/PERL/BIN/PERL.EXE > > # script name : QUERY.PL > > > > require "my_external_scripts.pl"; > > > > $query = $ENV{QUERY_STRING}; > > $result = > > > time."_".$ENV{REMOTE_ADDR}.$myPackage::random(99999); > > > > &call_query_string_handler($query, $result); > > while (! -e $result) { 1 } > > > > open (FILE, $result) ; > > while (<FILE>) { print $_ } > > close (FILE); > > unlink($result) > > ######################################## > > > > In case, I am trying to enforcing the script unable > > to terminate > > until the result file comes. And the result file > > will be built when > > all the child process are done. > > > > my questions are : > > > > 1. When 2 or more users are querying the same script > > with the same query > > at the same time, would the script handle it with > > 'another colsole'? or > > involving to wait until the first in query has done > > ? How about the child subs? > > The standard situation is that each request that > comes in starts up a new and completely separate > process, which is what I assume you mean by 'console'. > There is no interaction between them. If your server > is running some additional software such as FastCGI or > mod_perl, then it may not start up a new process, but > I still think that each multiple invocation is > independent of the others. > > So if two users are invoking this script, then both > users will be sitting waiting for the files to come > in. When they do, then both processes and their > subroutines will continue independently of each other. > > > 2. Is it a bad programming style ? Straight line > > running, Sub scripts are working > > as self operating. Main will call A, A will call B, > > B will call C, C will call D, and D > > gen the result, no value will return to 'the > > caller'. I can imagine that might quite > > a problem for tracing errors, so I designed each sub > > script will log down their > > activities on common log file with, A,B,C and D. > > Would it be better ? > > If you are asking about if your use of modularization > (splitting the code up in routines A, B, and so on), > it's hard to say without seeing what all the code > does. For tracing errors, check out the Perl debugger > (perl -d). It will follow calls to different modules, > so that's not a concern. > > In terms of modularization in general, I think I could > probably condense the thought process I apply when > designing my own program structure to the following > guidelines: > > 1) Simpler is better. Introduce no more complexity > than necessary. 
>
> > Any comments and/or hints are very much appreciated. At least,
> > please tell me how to test it by myself ^_^"
>
> The Perl debugger would be a good start. Type "perldoc perldebug" at
> your friendly neighborhood command line. Alternatively, just use
> print statements outputting the values of variables at certain
> points, or just "I'm here" messages. As you say, you could also write
> to log files, although I'd favor that more as a long-term monitoring
> solution rather than development-stage debugging; print statements or
> the debugger are easier.
>
> Hope this helps!
>
> - John Brooking

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]