Thomas De Contes wrote:
> Bob Proulx wrote:
> > Please try the https protocol instead. Does that work better?
>
> I suppose the 2nd possibility is the right one, and the problem did
> not recur, so I can't say what it would have been like.
It's not the typical problem being reported.  If https works then keep
using it.

> If you think that we should always avoid http because it can be
> corrupted, then:
>
> 1
> I fixed the link given here:
> http://svn.savannah.gnu.org/viewvc/rapid/branches/gtkada-2.24/README?view=markup
> (to https://savannah.nongnu.org/projects/rapid/ )
> Am I right ?

Yes.  I think it is *safer* to always document https now.  However I am
opposed to blocking http.  Let me explain.

There are those who wish to actively block http and only allow https.
But just because something can be broken in some cases does not mean
that it is always broken in all cases.  And it does not mean that we
should actively break things for those who need http access.  This is
just me typing extemporaneously and I might get something wrong, but...

* If your clock has failed and you boot without a good time source then
  it is likely that you will be unable to contact any https sites,
  because the time will be wrong to the point that the certificate will
  not validate.

* Likewise for mandatory DNSSEC (DNS Security Extensions).  If the time
  is too far off then the DNS entry cannot be validated.

* If you are behind a blocking firewall then https may be impossible to
  use, in which case http may be the only available protocol.  There
  are many who must exist behind these firewalls and it would be a
  tragedy to block them from access to Free Software out of a misplaced
  sense of trying to protect others.  First, do no harm.

* Not every peek at documentation or every software download requires a
  theoretical life-or-death level of security!

So I think it is okay, and good, to document https as the primary
protocol and to encourage it.  But I don't want to block http, because
if someone needs http then they should be able to recognize their need
and use it.  I don't think we need to go to the extreme of documenting
every use of https as also having a potential http fallback either.
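[An editorial aside, not part of the original message: the clock-failure point above comes down to a simple validity-window comparison made during the TLS handshake. A minimal Python sketch, with made-up certificate dates chosen purely for illustration:]

```python
from datetime import datetime, timezone

# Hypothetical certificate validity window (illustrative values only).
NOT_BEFORE = datetime(2024, 1, 1, tzinfo=timezone.utc)
NOT_AFTER = datetime(2025, 1, 1, tzinfo=timezone.utc)

def clock_allows_certificate(now: datetime) -> bool:
    """A TLS client rejects a chain whose validity window excludes 'now'."""
    return NOT_BEFORE <= now <= NOT_AFTER

# A machine that booted with a dead clock battery and reset to the epoch
# fails validation even though the certificate itself is perfectly good.
print(clock_allows_certificate(datetime(1970, 1, 1, tzinfo=timezone.utc)))  # False
print(clock_allows_certificate(datetime(2024, 6, 1, tzinfo=timezone.utc)))  # True
```

[Real clients check the whole chain and more besides, but the time comparison alone is enough to make https unusable on a machine with a badly wrong clock, while plain http keeps working.]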
Documenting an http fallback everywhere would be too much noise and
clutter.  It would be like documenting the use and purpose of the
shell's command line IFS variable *for every command*.  It is
definitely in use every time we type a command, but if someone
attempted to document every possible thing in every possible place then
that documentation would quickly become impossible to use in practice.

> 2
> Why does the server redirect to https only when we are logged in?
> And when we click on "Browse Sources Repository" (and links that
> point out of the sub-domain), it goes to http even from https.

Probably because it wasn't noticed before.  Developers are always
logged in!  For example when I visit I remain in https.  But when I
test this now I do see that it defaults to http for the standard
address.  This can be changed in the "Select features" section of the
admin page.  But it would be somewhat tedious to update all of the
links manually.  I myself have done very little Savannah web UI
development.  Maintenance has fallen to the very few who work on it.
This would be an excellent area for contributed patches!

> > Also the past day and a half has had some problems with memory
> > exhaustion due to external influence starting a git clone then
> > either dropping the connection or getting dropped due to networking
> > issues.
> >
> > However it's Emacs and the repository is large and the resulting
> > git pack-objects process consumes 800 MB of active RAM before
> > deciding to write anything to the closed file descriptor and then
> > exiting.  That's been a problem.
>
> Yes, it seems to be a problem.
> I was thinking about migrating from subversion to git, but maybe
> it's too soon?  What do you think about that?

Such a question!  When I meet parents with several children I usually
avoid asking them, "Who is your favorite child?" :-)

I don't think it is a matter of "soon" or "too soon".  Git is very
stable and mature.  It is used by thousands.  It's fine.  I use git
myself and I like git.
But I also know that many people do not like git.  They much prefer
svn or hg instead.  And in some workflows there are advantages.  Like
any benchmark, everything has a sweet spot and also a worst case.  It
is not a matter of time.  It is a matter of features and workflow and
what you want to use.  The choice is yours.

> > We have mitigation in effect now to detect those as quickly as
> > practical and kill them as they are occurring.
>
> Thank you for having fixed it. :-)

Unfortunately the Internet is a hostile place.  8 billion people in
the world, all "doing stuff", some of it on the Internet.  Some good,
some bad, some indifferent.  It is a continuous process of reacting to
abuse.  And the type of abuse is always changing.

> >> And when I reloaded it, I got :
> >>
> >> An error occurred while reading CGI reply (no response received)
> >
> > This seems much more likely, as that is basically a 502 Bad
> > Gateway due to the backend not being able to load in time due to
> > memory stress.  It eventually loads, but not before the timeout,
> > which is already quite long.
>
> Ok, I think so too.
> As said, the problem did not recur, so I think that this time the
> problem was on your side.

We do appreciate problem reports such as 502 Bad Gateway errors,
because we don't want those to happen.  But it is the squeaky wheel
that gets the oil.  We are all volunteers, and when there isn't
anything squeaking we tend to be working at our $DAY_JOBS and not
otherwise.  It is okay to make noise when there are problems.  That's
how things get fixed.  It's okay to do this! :-)

> > I worry the web browser is caching some result.  Please, the next
> > time you have this problem, try using a command line tool such as
> > wget or curl, which avoids web browser cache issues.  For example:
> >
> >   wget -O- -q -S 'http://svn.savannah.gnu.org/viewvc/rapid/trunk/gtk_peer/?pathrev=3'
> >
> > Does that work when, at the same time, the browser does not?
>
> Thank you, I keep it for the next time.
:-)  There I intentionally said http, since we were discussing http
proxies and such.  But that command with https should work the same.
If they ever work differently then that is also useful information.
Both can show the same _somewhat_random_ 502 Bad Gateway errors at the
same rate, since they hit the same backend processing.  But if one
always works and the other always fails then that points to a
transport protocol issue in between.  The process of debugging a
problem is one of dividing the problem into smaller parts and trying
to identify each part separately.

Bob