>>>>> "Bg" == Bob goolsby <bob.gool...@gmail.com> writes:

  Bg> On Fri, May 27, 2011 at 5:56 AM, Ramprasad Prasad <ramprasad...@gmail.com> wrote:
  >> I have a requirement of generating a large number (> 100 thousand) of files
  >> ....
  >> 

  Bg> I rather suspect that the limiting factor is going to be I/O.

  Bg> Even with massive assistance from the Operating System, writing to
  Bg> a disk will take a long time, measured in milliseconds.  If you
  Bg> are not doing much processing other than string-replacement, your
  Bg> process-time will be relatively short, probably a few microseconds
  Bg> per file.

  Bg> You may get better throughput by running multiple processes.
  Bg> Batch up your input files and run several instances of your code,
  Bg> one per batch.  If you can't use divide-and-conquer, you might
  Bg> look at fork() and try running multiple child-processes under the
  Bg> covers.
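
[For reference, the fork()-per-batch approach described above might be
sketched as below. This is a hedged illustration, not code from the
thread: the file names, the worker count, and the process_file() stub
are all hypothetical placeholders for the OP's real per-file template
work.]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# hypothetical input list; the real script would glob or read its own
my @files   = map { "input_$_.txt" } 1 .. 100_000;
my $workers = 4;

# deal the files round-robin into one batch per worker
my @batches;
push @{ $batches[ $_ % $workers ] }, $files[$_] for 0 .. $#files;

my @pids;
for my $batch (@batches) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {                 # child: handle this batch, then exit
        process_file($_) for @$batch;
        exit 0;
    }
    push @pids, $pid;                  # parent: remember the child
}

waitpid( $_, 0 ) for @pids;            # reap every child before exiting

sub process_file {
    my ($file) = @_;
    # stub: the real per-file string-replacement work would go here
}
```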

if the problem is i/o bound, then how would multiple processes help?
they only help if you are cpu bound and have multiple actual processors
(on one box or several). the i/o bandwidth on a single box is the same
whether one process is writing or a bunch of them are.

i recommended Template::Simple, which is the fastest templater out
there. if the OP also uses File::Slurp to read/write the templates and
results, that will optimize the I/O as well. in fact Template::Simple
uses File::Slurp to read in templates, but it doesn't write the results
to a file; it returns them to the caller for speed and flexibility.
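
[A minimal sketch of that combination. The template text, file names,
and data fields here are made up for illustration, and the exact
render() calling convention should be checked against the
Template::Simple docs; as described above, render() returns a reference
to the rendered text rather than writing a file, so the caller
dereferences it and writes with File::Slurp.]

```perl
use strict;
use warnings;
use Template::Simple;
use File::Slurp qw(read_file write_file);

# hypothetical template; [%name%] is Template::Simple's default chunk markup
write_file( 'letter.tmpl', "Dear [%name%],\nYour order has shipped.\n" );

my $tmpl     = Template::Simple->new();
my $template = read_file('letter.tmpl');

# render() takes the template text (here as a scalar ref) plus a data
# hash, and returns a reference to the rendered result
my $rendered = $tmpl->render( \$template, { name => 'Ramprasad' } );

# dereference and let File::Slurp do one fast write per output file
write_file( 'letter_1.txt', $$rendered );
```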

uri

-- 
Uri Guttman  ------  u...@stemsystems.com  --------  http://www.sysarch.com --
-----  Perl Code Review , Architecture, Development, Training, Support ------
---------  Gourmet Hot Cocoa Mix  ----  http://bestfriendscocoa.com ---------
