At 11:33 on 17/04/22, wilson wrote:
> hello experts,
>
> can you help check my script and suggest how to optimize it?
> currently it fails with "run out of memory".
>
> $ perl count.pl
> Out of memory!
> Killed
>
> My script:
> use strict;
> my %hash;
> my %stat;
To be honest you don't need the %stat.
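
A minimal sketch of that idea, assuming %hash accumulates a per-item
total and count (the original script is truncated above, so the input
format, file name and field names here are all hypothetical): the
derived figure can be computed at print time instead of being copied
into a second hash that duplicates every key.

use strict;
use warnings;

my %hash;

# Hypothetical input: one "item,value" pair per line.
open my $fh, '<', 'data.csv' or die "open: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($item, $value) = split /,/, $line;
    $hash{$item}{total} += $value;
    $hash{$item}{count}++;
}
close $fh;

# Compute the average while printing; no %stat needed.
for my $item (sort keys %hash) {
    printf "%s: %.2f\n", $item, $hash{$item}{total} / $hash{$item}{count};
}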
On Sun, 2022-04-17 at 17:33 +0800, wilson wrote:
> hello experts,
>
> can you help check my script and suggest how to optimize it?
> currently it fails with "run out of memory".
>
> $ perl count.pl
> Out of memory!
> Killed
I would use a database like MariaDB for this, not only to create a
report but to keep the whole dataset out of Perl's memory.
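
A rough sketch of that approach with DBI (the connection details, table
and column names are all placeholders, and the input format is assumed):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=counts;host=localhost',
                       'user', 'password', { RaiseError => 1 });

$dbh->do('CREATE TABLE IF NOT EXISTS items (item VARCHAR(64), value DOUBLE)');

# Load the file once; the table lives on disk, not in Perl's memory.
my $ins = $dbh->prepare('INSERT INTO items (item, value) VALUES (?, ?)');
open my $fh, '<', 'data.csv' or die "open: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($item, $value) = split /,/, $line;
    $ins->execute($item, $value);
}
close $fh;

# Let the server do the aggregation instead of a giant Perl hash.
my $rows = $dbh->selectall_arrayref(
    'SELECT item, AVG(value) FROM items GROUP BY item');
printf "%s: %.2f\n", @$_ for @$rows;

For 80+ million rows a bulk loader such as MySQL's LOAD DATA INFILE
would be much faster than row-by-row INSERTs, but the idea is the same.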
On Thu, 21 Apr 2022 07:12:07 -0700
al...@coakmail.com wrote:
> OP may need streaming IO for reading files.
Which is what they were already doing - they used:
while (<HD>) {
...
}
Which, under the hood, uses readline to read a line at a time
(where "HD" is their global filehandle).
OP may need streaming IO for reading files.
Thanks
On 2022-04-21 21:56, David Precious wrote:
> On Thu, 21 Apr 2022 17:26:15 +0530
> "M.N Thanishka sree Manikandan" wrote:
>
>> Hi wilson
>> Try this module File::Slurp
>
> Given that the OP is running into memory issues processing an 80+
> million line file, I don't think suggesting a CPAN module designed to
> read the entire contents of a file into memory is going to help.
On Thu, 21 Apr 2022 17:26:15 +0530
"M.N Thanishka sree Manikandan" wrote:
> Hi wilson
> Try this module File::Slurp
Given that the OP is running into memory issues processing an 80+
million line file, I don't think suggesting a CPAN module designed to
read the entire contents of a file into memory is going to help.
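
For contrast, this is what the slurping approach looks like; with an
80+ million line file, the @lines array alone would need many gigabytes
(the file name is assumed):

use strict;
use warnings;
use File::Slurp qw(read_file);

# Every line of the file is held in memory at once.
my @lines = read_file('data.csv');
print scalar(@lines), " lines loaded\n";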
Hi wilson
Try this module File::Slurp
Regards,
Manikandan
On Sun, 17 Apr 2022, 15:03 wilson wrote:
> hello experts,
>
> can you help check my script and suggest how to optimize it?
> currently it fails with "run out of memory".
>
> $ perl count.pl
> Out of memory!
> Killed
>
>
> My script:
> use strict;
I am not sure, but could Tie::Hash etc. be used to tie the hash to a
local file and reduce the memory use?
regards.
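
A minimal sketch of that idea with DB_File, one of the DBM modules
whose tie interface does exactly that (file names here are made up):

use strict;
use warnings;
use DB_File;
use Fcntl qw(O_CREAT O_RDWR);

# Keys and values live in an on-disk Berkeley DB file, so the
# hash can grow far beyond available RAM.
tie my %count, 'DB_File', 'counts.db', O_CREAT | O_RDWR, 0666, $DB_HASH
    or die "tie: $!";

open my $fh, '<', 'data.csv' or die "open: $!";
while (my $line = <$fh>) {
    chomp $line;
    $count{$line}++;    # each update goes through the tie to disk
}
close $fh;

untie %count;

The trade-off is speed: every hash access becomes a disk operation, so
this is much slower than an in-memory hash.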
Hi Wilson,
Looking at the script I see some room for improvement. You currently
declare %hash as a global variable, and keep it around forever. With tens
of millions of rows that is quite a large structure to just have sitting
around after you have built the %stat hash. So I would start by limiting
the scope of %hash, so it can be freed as soon as %stat has been built.
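
A minimal sketch of that change (what %stat actually holds is an
assumption, since the original script is truncated):

use strict;
use warnings;

my %stat;
{
    # %hash exists only inside this block, so Perl can reclaim its
    # memory as soon as %stat has been derived from it.
    my %hash;
    open my $fh, '<', 'data.csv' or die "open: $!";
    while (my $line = <$fh>) {
        chomp $line;
        $hash{$line}++;
    }
    close $fh;

    # Assumed summary: how many keys were seen N times.
    $stat{$_}++ for values %hash;
}

print "$_ => $stat{$_}\n" for sort { $a <=> $b } keys %stat;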
I see nothing glaringly inefficient in the Perl. This would be fine on your
system if you were dealing with 1 million items, but you could easily be
pushing up against your system's limits with the generic data structures
that Perl uses, especially since Perl is probably using 64-bit floats and
integers internally.
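
One way to see that overhead (Devel::Size is not from the original
thread, just a convenient way to measure it):

use strict;
use warnings;
use Devel::Size qw(total_size);

# A plain Perl hash spends far more than 8 bytes per 64-bit value:
# each entry carries a scalar header, the key string and bucket
# bookkeeping on top of the number itself.
my %hash = map { sprintf('item%08d', $_) => $_ } 1 .. 100_000;
printf "100,000 entries: %d bytes (%.0f bytes/entry)\n",
    total_size(\%hash), total_size(\%hash) / 100_000;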