> Maybe because you aren't closing each file after you have done your thing
> and it remains in memory?
Well, I may be wrong, but since he is using the same handle for each file, each new instance overwrites the older one, so the files cannot all remain open and hence cannot all be sitting in memory. I think the issue here is that the array is saturating memory. Maybe he needs to write the data to some temp file and flush the array for each file; see the rough sketch after the quoted thread below.

Cheers,
Parag

On Thu, Jan 6, 2011 at 6:59 PM, Robert <sigz...@gmail.com> wrote:
> Maybe because you aren't closing each file after you have done your thing
> and it remains in memory?
>
> On 2011-01-06 02:26:13 -0500, Jins Thomas said:
>
>> Hi experts,
>>
>> Have you ever experienced an out-of-memory problem while using
>> HTML::TableExtract? I have somewhat large HTML files, but I still didn't
>> expect this to happen.
>>
>> Would you be able to suggest some workarounds? I'm calling this
>> subroutine inside another for loop.
>>
>> sub zParseHTMLFiles ($$) {
>>     my ( $lrefFileList, $lrefColNames ) = @_;
>>     my @ldata;
>>     foreach my $lFile (@$lrefFileList) {
>>         my $lTableExtract = HTML::TableExtract->new( headers => [ @$lrefColNames ] );
>>         chomp($lFile);
>>         $lTableExtract->parse_file($lFile);
>>         foreach my $ls ( $lTableExtract->tables ) {
>>             foreach my $lrow ( $lTableExtract->rows ) {
>>                 chomp( @$lrow[$#$lrow] );
>>                 push( @ldata, $lrow );
>>             }
>>         }
>>     }
>>     return \@ldata;
>> }
>>
>> Thanks
>> Jins Thomas
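Here is a rough, untested sketch of what I mean. It keeps the HTML::TableExtract calls Jins is already using, but streams each row to a temp file instead of pushing everything onto @ldata, and it creates one parser per file so each parser can be freed before the next file is read. The tab-separated output format and the zParseHTMLFilesToTemp name are just my own choices for illustration. Note also that I iterate $ts->rows (the rows of the current table) rather than $lTableExtract->rows as in the original inner loop, which I suspect pushes the same rows once per table.

    use strict;
    use warnings;
    use HTML::TableExtract;
    use File::Temp qw(tempfile);

    # Same interface as zParseHTMLFiles, but returns the name of a temp file
    # holding the extracted rows instead of a reference to one big array.
    sub zParseHTMLFilesToTemp ($$) {
        my ( $lrefFileList, $lrefColNames ) = @_;

        my ( $fh, $tmpname ) = tempfile( UNLINK => 0 );

        foreach my $lFile (@$lrefFileList) {
            chomp $lFile;

            # One parser per file; it goes out of scope (and is freed)
            # at the end of each iteration.
            my $te = HTML::TableExtract->new( headers => [ @$lrefColNames ] );
            $te->parse_file($lFile);

            foreach my $ts ( $te->tables ) {
                foreach my $lrow ( $ts->rows ) {
                    chomp( $lrow->[-1] ) if defined $lrow->[-1];
                    # Write the row out immediately, tab-separated,
                    # so memory use does not grow with the row count.
                    print {$fh} join( "\t", map { defined $_ ? $_ : '' } @$lrow ), "\n";
                }
            }
        }

        close $fh or die "Cannot close $tmpname: $!";
        return $tmpname;    # caller reads rows back from this file as needed
    }

The caller can then read the temp file back line by line (or load only what it actually needs), so nothing ever has to hold every row from every file in memory at once.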