Thanks, everybody.
I think the best practice is quite clear.
On May 16, 1:26 am, Álvaro Justen [Turicas] <alvarojus...@gmail.com>
wrote:
> On Fri, May 15, 2009 at 4:30 PM, AchipA <attila.cs...@gmail.com> wrote:
> > I did a fair amount of testing on a couple of systems, and the numbers
> > say that in real life it's quite a significant gain except in some
> > very specific cases. I'd say the article you quoted concludes the
> > same. Even if the latency does not improve for ONE client, the next
> > one will benefit as the *server's* network throughput increases.
>
> I agree. Even if the size or time is the same between:
> uncompressed data (server) ---> (client) read data
> and
> uncompressed data ---> compress (server) ---> (client) uncompress ---> read
> data
>
> For the client it may not change anything (in the worst case, when the
> compression/decompression time is high, the two totals above are equal), but
> for the server it can "save" bandwidth at the "cost" of processing time.
> As processing is cheaper than traffic, it is a good trade-off.
>
> --
> Álvaro Justen
> Peta5 - Telecomunicações e Software Livre
> 21 3021-6001 / 9898-0141
> http://www.peta5.com.br/
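
To make that trade-off concrete, here is a minimal sketch using Python 3's
standard gzip module (the payload is a made-up stand-in for a response body,
and the numbers will vary with the data and the machine):

import gzip
import time

# Hypothetical payload standing in for an uncompressed response body
# (repetitive HTML compresses well; real gains depend on the data).
payload = b"<html><body>" + b"some repetitive markup " * 2000 + b"</body></html>"

start = time.time()
compressed = gzip.compress(payload)        # server side: spends CPU time
elapsed = time.time() - start

print("uncompressed bytes:", len(payload))
print("compressed bytes:  ", len(compressed))   # server side: saves bandwidth
print("compression time:   %.4f s" % elapsed)

# Client side: pays a (usually small) decompression cost for the same data.
assert gzip.decompress(compressed) == payload

In practice this is usually handled by the front-end web server or a WSGI
middleware rather than in application code.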