On Tuesday 08 May 2001 17:41, Andreas Rabus wrote:
> > Not only will it not report the size of the http headers, but it won't
> > report the TCP and IP frame information and any ICMP messages that may
> > be required.
> > What is the problem with automatically sucking the sizes out of webalizer
> > files and reporting them in some other format?
> The answer is simple: paranoia. :)
> Webalizer crashed several times and we lost all statistics (we didn't keep
> the log files that long).
That's bad. No backups?
> And I don't like to mess around in HTML code that isn't written by me.
Good point. Maybe an addition to webalizer to make it produce a plain-text
or CSV file with such data would be a good idea. If it just appended a line
to the file for each run, then a crash wouldn't lose anything.
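The append-only idea could be sketched like this (a hypothetical wrapper, since webalizer has no such option; the function name and the column set are my own invention, assuming the per-run totals are available from somewhere):

```python
import csv
import datetime

def append_run_stats(csv_path, hits, files, kbytes):
    """Append one row of webalizer-style totals to a CSV file.

    Opening in append mode ("a") means a later crash can never
    destroy rows written by earlier runs.
    """
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([datetime.date.today().isoformat(),
                         hits, files, kbytes])

# Example: record the totals from one log-analysis run.
append_run_stats("traffic.csv", 12345, 9876, 481112)
```

Each run adds exactly one line, so the file itself becomes the backup of the statistics history.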
> Back to the question:
> Recently I did some calculations and found that the webalizer results are
> about 85% of the net-acct results.
> Is that a realistic overhead from http headers, ICMP (on or to port 80?),
> and TCP/IP frame info, etc.?
It depends on the type of data. If you are running a mirror of
ftp.debian.org and sending it all out by HTTP, then 15% overhead sounds a
little high. If you have lots of small files (<2K), then you could easily
have more than that.
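A rough sanity check of that 15% figure (a sketch with assumed typical sizes: ~40 bytes of IP+TCP headers per packet, ~1460-byte segments, ~300 bytes of HTTP response headers; it ignores ACKs, handshakes and ICMP, so it underestimates the real wire count):

```python
import math

def wire_overhead_pct(payload_bytes, http_hdr=300, ip_tcp_hdr=40, mss=1460):
    """Estimate how much of the on-the-wire byte count is invisible
    to a log analyzer that only sees the HTTP payload size."""
    body = payload_bytes + http_hdr        # bytes carried over the TCP stream
    packets = math.ceil(body / mss)        # number of full-size TCP segments
    wire = body + packets * ip_tcp_hdr     # add per-packet header overhead
    return 100.0 * (wire - payload_bytes) / wire

big = wire_overhead_pct(1_000_000)   # a 1 MB file: small relative overhead
small = wire_overhead_pct(2_000)     # a 2 KB file: around 16% overhead
```

With these assumed sizes, a 1 MB transfer comes out under 3% overhead while a 2 KB file is around 16%, which is why the answer depends so much on the file-size mix.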
> PS: we pay for the traffic "on the cable" and webalizer only gets the
> "payload" from http.
True. But you could just price things accordingly.
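For example, if webalizer sees about 85% of what crosses the cable, the per-GB payload price can simply be scaled up to recover the full cost (illustrative numbers only):

```python
def payload_rate(wire_rate_per_gb, measured_fraction=0.85):
    """Price per GB of HTTP payload that recovers the cost of the
    full on-the-wire traffic, given that the log-based count only
    sees `measured_fraction` of the billed bytes."""
    return wire_rate_per_gb / measured_fraction

# Paying $1.00/GB on the cable -> charge about $1.18 per payload GB.
rate = payload_rate(1.00)
```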
--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/ Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page