arn...@skeeve.com wrote:
> Things are better at the moment (it's ~ 2:30 AM east coast time).
> But... Although an https clone no longer pegs my CPU at 100%, it still sucks:
>
> $ time git clone https://git.savannah.gnu.org/r/gawk.git
> Cloning into 'gawk'...
> Fetching objects: 61396, done.
>
> real    11m34.265s
> user    0m39.351s
> sys     0m6.825s
There are two problems with the above.

One is that of course the https:// protocol has more overhead and must compete with all of the abusive agents that are always hammering on the web service. It's probably never going to be as fast as using the ssh:// protocol.

But two is that the above is using the raw web file access URL. Why? That's never going to be as efficient as the git smart HTTP protocol. It's intentionally choosing the worst possible source to clone from.

Example from my system in Colorado, which should be a fair test going across the net from a distance, using the git smart HTTP protocol:

rwp@madness:/tmp/junk$ time git clone https://git.savannah.gnu.org/git/gawk.git
Cloning into 'gawk'...
remote: Counting objects: 61396, done.
remote: Compressing objects: 100% (14478/14478), done.
remote: Total 61396 (delta 48459), reused 58396 (delta 46267)
Receiving objects: 100% (61396/61396), 66.18 MiB | 2.61 MiB/s, done.
Resolving deltas: 100% (48459/48459), done.

real    1m11.502s
user    0m41.194s
sys     0m0.970s

I have sometimes thought that we should remove the /r/ raw access path. But as you know, breaking something that people are using upsets them, because it breaks something that had been working for them.

At one time I believe the /r/ path was needed, back in the http:// days, so that people behind restrictive firewalls and broken http proxies could still access the repository regardless of their bad environments. But now that access is over https://, those bad corporate http proxies can't interpose themselves anyway, and bad firewalls are already unhappy that they can't see into encrypted https connections. Which makes this very odd niche access method pretty much obsolete.

Bob
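P.S. For anyone who already has a working tree cloned from the /r/ raw path, there should be no need to re-clone just to get the faster transport. Repointing the existing remote at the smart HTTP URL with the standard git remote set-url command ought to be enough; a rough sketch, assuming the clone above and an "origin" remote:

    cd gawk
    git remote -v        # should show .../r/gawk.git as origin
    git remote set-url origin https://git.savannah.gnu.org/git/gawk.git
    git fetch origin     # later fetches then go over smart HTTP

After that, fetches and pulls negotiate with the server instead of walking the raw files, which is where the speed difference above comes from.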