I have been tasked with parsing ~1500 log files totaling ~100 MB, cleanly,
efficiently, and with as little impact on the system as possible.
The current process is a Korn shell script that greps ALL ~1500 files for certain
patterns and sends the output to another file, where later in the script it gets
grepped again! WOW, talk about ugly! timex on this process during SLT (System
Load Tests) showed it jumping from ~0.30 to ~15.32!! Holy Jesus!
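To make the waste concrete: in Perl I think both filters could be applied in a
single pass over each file, with no temp file at all. This is just a rough,
untested guess on my part; the glob path and the two patterns are placeholders
for whatever the real script actually matches on:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # One pass per file, applying both filters at once instead of
    # grep -> temp file -> grep.  Path and patterns are placeholders.
    my @files = glob('/path/to/logs/*.log');

    my $first  = qr/ERROR/;       # stands in for the first grep pattern
    my $second = qr/timeout/i;    # stands in for the second grep pattern

    for my $file (@files) {
        open my $fh, '<', $file or do { warn "can't open $file: $!"; next };
        while ( my $line = <$fh> ) {
            # reading line by line keeps memory flat no matter how big the file is
            print "$file: $line" if $line =~ $first && $line =~ $second;
        }
        close $fh;
    }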
I want to redo the whole thing in Perl. Maybe thread it? Or maybe split the logs
into three batches and process them in parallel, putting the three workers on
three different CPUs and hopefully reducing the overhead. Something like the
sketch below is what I have in mind.
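Again, this is completely untested and just my guess at the shape of it: split
the sorted file list into batches and fork one child per batch so the kernel can
put them on different CPUs. The worker count and path are placeholders, and
scan_file() is a stub standing in for the single-pass filter above:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX      qw(ceil);
    use List::Util qw(min);

    my $workers = 3;                                 # batch count is a guess
    my @files   = sort glob('/path/to/logs/*.log');  # placeholder path
    my $chunk   = ceil( @files / $workers ) || 1;

    my @pids;
    for my $i ( 0 .. $workers - 1 ) {
        my $lo = $i * $chunk;
        last if $lo > $#files;
        my $hi    = min( $#files, $lo + $chunk - 1 );
        my @batch = @files[ $lo .. $hi ];

        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ( $pid == 0 ) {               # child: handle only this batch
            scan_file($_) for @batch;
            exit 0;
        }
        push @pids, $pid;                # parent: remember the child
    }
    waitpid( $_, 0 ) for @pids;          # wait for every child to finish

    # Stub: this is where the single-pass filter from above would go.
    sub scan_file {
        my ($file) = @_;
        open my $fh, '<', $file or do { warn "can't open $file: $!"; return };
        while ( my $line = <$fh> ) {
            print "$file: $line" if $line =~ /ERROR/;   # placeholder pattern
        }
        close $fh;
    }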
What are some recommendations? I am a novice at best with Perl. If you have a
recommendation, could you possibly post some starter code? I can handle the
regex and parsing; I'm just not sure how to handle the HUGE number and size of
the files cleanly, efficiently, and with minimal system impact.
Regards,
Ron