Thanks, all.
How about parsing huge files (about 1 TB per day)?

The basic logic is (a rough Perl sketch follows the list):

read each line of every file (many files, each gzipped)
look for specific info (IP, request URL, session_id, datetime, etc.)
count these and write the results into a database
generate the daily report and monthly report
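Here is a minimal sketch of the per-file pass, just to make the steps concrete. It assumes Apache-style access logs decompressed through zcat, a MySQL database reached via DBI, and made-up table and column names (ip_counts, hits); the regex would need adjusting to the real log format, and it only counts hits per IP to keep the example short.

    #!/usr/bin/perl
    # Sketch: stream gzipped logs line by line, count per-IP hits,
    # then write the totals to a database.
    use strict;
    use warnings;
    use DBI;

    my %count;    # hits per IP

    for my $file (@ARGV) {
        # Let zcat decompress; reading the pipe line by line keeps
        # memory flat no matter how large the file is.
        open my $fh, '-|', 'zcat', $file or die "Can't open $file: $!";
        while (my $line = <$fh>) {
            # Assumed format: "IP ... [datetime] \"GET url ...\"";
            # adapt this regex to your actual log layout.
            next unless $line =~ /^(\S+) .* \[([^\]]+)\] "(?:GET|POST) (\S+)/;
            my ($ip, $datetime, $url) = ($1, $2, $3);
            $count{$ip}++;
        }
        close $fh;
    }

    # Write the counts out (database, credentials and table are placeholders).
    my $dbh = DBI->connect('dbi:mysql:database=logs', 'user', 'password',
                           { RaiseError => 1 });
    my $sth = $dbh->prepare('INSERT INTO ip_counts (ip, hits) VALUES (?, ?)');
    $sth->execute($_, $count{$_}) for keys %count;
    $dbh->disconnect;

The design idea is to stream each file through a pipe and keep only the running counts in memory, so the working set stays small even at 1 TB per day.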

I'm afraid Perl can't finish the daily job in time, so I want to know the speed
difference between Perl and C for this case.

// Xiao lan


