On 15.11.2012 13:38, Stefan Fuhrmann wrote:
> On Thu, Nov 15, 2012 at 12:36 PM, Branko Čibej <br...@wandisco.com> wrote:
>
>> On 12.11.2012 19:46, Ivan Zhakov wrote:
>>> On Mon, Nov 12, 2012 at 10:27 PM, Bert Huijben <b...@qqmail.nl> wrote:
>>>> Any idea why a 1.8 client would use more than twice the amount of
>>>> data of 1.7? It should send out fewer requests than a 1.7 client,
>>>> especially to a 1.8 server where we avoid property requests.
>>> svn 1.8 uses raw GET for fetching files, so it downloads uncompressed
>>> data unless you have mod_deflate enabled, while neon uses the svndiff
>>> format for transmitting file contents, which is self-compressed.
>> I don't buy that argument. Generating svndiff takes CPU, too. What's
>> more, the simplest kind of svndiff is just a "new" op and
>> zlib-compressed data, effectively having the same characteristics as
>> mod_deflate.
>>
>> Why would mod_deflate use more CPU cycles per compression ratio than
>> svndiff1? Unless you're testing with the mod_deflate compression level
>> set to 9, which would be silly for this kind of stream compression.
>>
> My guess / speculation without looking at the code:
>
> Neon: (txdelta against empty; almost a no-op) -> zip -> base64 -> send
> Serf + deflate: base64 -> zip -> send
>
> So, in neon's case, base64 is applied to less data than in the serf
> case. Since base64 inflates the data buffer by a third, serf also needs
> to zip more data than neon. The total CPU overhead would be somewhere
> between 30 and 40%.
You may have a point there. The next question is, why would anyone want
to base64-encode a response to a simple GET? Seems like unnecessary work
for no good reason.

-- Brane

-- 
Branko Čibej
Director of Subversion | WANdisco | www.wandisco.com
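
A quick way to see the effect of the ordering Stefan speculates about,
independent of the actual transports, is a toy comparison along these
lines. This is not Subversion code: zlib stands in for both svndiff1 and
mod_deflate (a simplification), and the sample data, word list, and
compression level 5 are arbitrary choices for illustration only.

    # Rough sketch: compare "compress, then base64" with "base64, then
    # compress" on the same compressible sample data.
    import base64
    import random
    import zlib

    # Build ~1 MB of compressible but not trivially repetitive data.
    random.seed(42)
    words = [b"get", b"svn", b"diff", b"delta", b"deflate",
             b"base64", b"apache", b"serf"]
    payload = b" ".join(random.choice(words) for _ in range(200000))

    # "neon-like" ordering: compress first, then base64 the smaller result.
    compress_then_b64 = base64.b64encode(zlib.compress(payload, 5))

    # Ordering attributed to serf + mod_deflate in the guess above:
    # base64 first (inflating the buffer by about a third), then compress
    # the larger buffer.
    b64_then_compress = zlib.compress(base64.b64encode(payload), 5)

    print("original size:        ", len(payload))
    print("compress, then base64:", len(compress_then_b64))
    print("base64, then compress:", len(b64_then_compress))

The point of the sketch is only that with base64 applied first, the
compressor has to process about a third more input and typically
compresses it less effectively; whether serf actually base64-encodes GET
responses is exactly the open question above.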