Hello All,

A major part of my Perl scripting involves processing text files, and most
of the time I need large text files (3 MB+) to perform benchmarking tests.

So I am planning to write a Perl script that will create a large text file
from a sample file, which it will receive as its first input parameter. I
have the following algorithm in mind (a rough sketch in code follows the
list):

1. Provide two input parameters to the Perl script: (i) the sample file,
(ii) the size of the new file.
E.g., to create a new file of size 3 MB:
perl Create_Huge_File.pl  Sample.txt   3

2. Read the sample file and store its contents in an array.

3. Create a new file.

4. Dump the contents of the above array into the new file.

5. Check the size of the new file. If it is less than the second input
parameter, repeat step 4; otherwise, go to step 6.

6. Close the new file.
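
For reference, here is a rough sketch of what I have in mind. The output
file name Huge_File.txt is just a placeholder, and instead of checking the
file's size on disk after every pass I keep a running byte count in memory:

#!/usr/bin/perl
use strict;
use warnings;

# Step 1: two input parameters - sample file and target size in MB.
my ($sample_file, $size_mb) = @ARGV;
die "Usage: perl Create_Huge_File.pl <sample file> <size in MB>\n"
    unless defined $sample_file && defined $size_mb;

my $target_bytes = $size_mb * 1024 * 1024;

# Step 2: read the sample file and store its contents in an array.
open my $in, '<', $sample_file or die "Cannot open $sample_file: $!";
my @lines = <$in>;
close $in;
die "$sample_file is empty\n" unless @lines;

# Join once so each pass writes a single chunk instead of many small lines.
# length() counts characters; for plain ASCII text this equals bytes.
my $chunk      = join '', @lines;
my $chunk_size = length $chunk;

# Step 3: create the new file.
my $out_file = 'Huge_File.txt';
open my $out, '>', $out_file or die "Cannot create $out_file: $!";

# Steps 4 and 5: dump the chunk until the byte count reaches the target.
# Counting bytes in memory avoids stat()-ing the file on every pass.
my $written = 0;
while ($written < $target_bytes) {
    print {$out} $chunk;
    $written += $chunk_size;
}

# Step 6: close the new file.
close $out or die "Cannot close $out_file: $!";

printf "Created %s (%.2f MB)\n", $out_file, $written / (1024 * 1024);

I went with one joined chunk per pass so each iteration is a single print
call, but I am not sure this is the best approach, hence the questions
below.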

I have the following questions:

a.) What do I need to do to make sure that the size of the new file
increases every time step 4 is executed?

b.) Since a lot of I/O is involved, is this the most optimised solution? If
not, does anyone have a better design that would meet my requirement?

c.) What are the likely bugs that may creep in with this algorithm?

Cheers,
Parag
