Hi Nelson

Nelson Ray wrote:
> Just as a little background, I am working on a BioInformatics program
> that runs on large (about 300 meg) text files.  I am using a
> filehandle to open and load it into an array.  Then I use the join
> command to read the array into a scalar variable in order to be in a
> workable form for my computationally intensive program.

My first thought is that you're wasting memory by duplicating the file's
contents in both the array and the scalar. Just read directly into the
scalar by undefining the input record separator, $/, which puts the
filehandle into 'slurp' mode:

    my $contents;
    {
        local $/;                       # undef the record separator: slurp mode
        open my $fh, '<', 'file.txt' or die "Cannot open file.txt: $!";
        $contents = <$fh>;              # the whole file arrives in one read
        close $fh;
    }

> The problem,
> however, is that the machine I am working with only has 256 megs of
> RAM.  I have split the files into 50-60 meg chunks, but my program
> still uses all the available physical memory.

If you can split it into chunks, then you can read and process the
entire file in chunks from Perl. It's very unusual to need all of the
file in memory at one time, and if you can go as far as processing
individual lines then just use:

    open my $fh, '<', 'file.txt' or die "Cannot open file.txt: $!";

    while (<$fh>)
    {
        # $_ is the current line
        # $. is the current line number
        # process this line, then forget it - only one line is in memory
    }

    close $fh;
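
If your records aren't simple lines, you can still avoid slurping by
reading fixed-size chunks with read(). A rough sketch (the 1 MB chunk
size is just an example - pick whatever suits your algorithm):

    open my $fh, '<', 'file.txt' or die "Cannot open file.txt: $!";

    my $chunk;
    while (read $fh, $chunk, 1024 * 1024)   # up to 1 MB per iteration
    {
        # work on $chunk here; it is overwritten by the next read, so
        # memory use stays at roughly one chunk's worth
    }

    close $fh;

Bear in mind that a record can straddle a chunk boundary, so you may
need to carry a partial record over to the next iteration.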

> I am quite new to perl
> and do not know any other methods of working with the data other than
> to make it a scalar variable, which requires loading it into memory.

There are many ways. Can you tell us more about the problem you need to
solve? Then we'll be able to help better.
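
One option worth a look in the meantime is the Tie::File module (bundled
with recent perls). It lets you treat the lines of a file as an ordinary
Perl array without loading the whole file into memory; only the records
you actually touch are read from disk. A small sketch, again using
'file.txt' as a stand-in for your data file:

    use Fcntl 'O_RDONLY';
    use Tie::File;

    # Tie @lines to the file, read-only: lines are fetched on demand
    tie my @lines, 'Tie::File', 'file.txt', mode => O_RDONLY
        or die "Cannot tie file.txt: $!";

    print "The file has ", scalar @lines, " lines\n";
    print "Line 42 is: $lines[41]\n";    # read only when accessed

    untie @lines;                        # release the file when finished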

> Does anyone have any solution to my memory woes?  I would greatly
> appreciate your help.  Thanks a lot.

Cheers,

Rob



