Meador Inge <mead...@gmail.com> added the comment:

I just noticed this problem as well.

I don't know the code well enough to say whether Brian's patch is the
right thing to do.  The documentation claims that maxtries is used to
put a limit on recursion:
http://docs.python.org/dev/library/urllib.request.html#urllib.request.FancyURLopener

As written, the code does limit recursion, but the recursion counter
is *not* reset when 'redirect_internal' raises an exception.
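
For reference, here is a minimal sketch of the kind of change I
understand Brian's patch to be making, assuming the counter in question
is 'self.tries' in 'FancyURLopener.http_error_302' (I haven't diffed
this against his actual patch, so treat it as illustrative only):

def http_error_302(self, url, fp, errcode, errmsg, headers, data=None):
    self.tries += 1
    if self.maxtries and self.tries >= self.maxtries:
        if hasattr(self, "http_error_500"):
            meth = self.http_error_500
        else:
            meth = self.http_error_default
        self.tries = 0
        return meth(url, fp, 500,
                    "Internal Server Error: Redirect Recursion", headers)
    # Reset the counter even when redirect_internal raises, so that a
    # failed request does not poison later requests on the same opener.
    try:
        result = self.redirect_internal(url, fp, errcode, errmsg,
                                        headers, data)
    finally:
        self.tries = 0
    return result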

I see why this is a problem with our test code.  'Lib/test/test_urllib.py' 
has the following helper function:

_urlopener = None
def urlopen(url, data=None, proxies=None):
    """urlopen(url [, data]) -> open file-like object"""
    global _urlopener
    if proxies is not None:
        opener = urllib.request.FancyURLopener(proxies=proxies)
    elif not _urlopener:
        opener = urllib.request.FancyURLopener()
        _urlopener = opener
    else:
        opener = _urlopener
    if data is None:
        return opener.open(url)
    else:
        return opener.open(url, data)

Notice that the 'FancyURLopener' instance is cached in a global variable.
Because the same instance is reused from test to test, the leftover tries
count eventually exceeds maxtries.  If resetting the counter on the
exception path isn't safe, then we can just remove the caching from the
tests instead (a sketch of that follows below).  The more I think about
it, though, the more Brian's patch seems correct.
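
For completeness, removing the caching from the test helper would look
roughly like this (an untested sketch; it just drops the global):

def urlopen(url, data=None, proxies=None):
    """urlopen(url [, data]) -> open file-like object"""
    # Construct a fresh opener per call so that state such as the
    # redirect counter never leaks from one test to the next.
    if proxies is not None:
        opener = urllib.request.FancyURLopener(proxies=proxies)
    else:
        opener = urllib.request.FancyURLopener()
    if data is None:
        return opener.open(url)
    else:
        return opener.open(url, data)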

Senthil, can you chime in?

----------
nosy: +meador.inge
stage:  -> patch review

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue12923>
_______________________________________