Nick Coghlan added the comment:

The answer to Brett's question about "Why does walk_packages import parent modules?" is: because __init__ can modify __path__, so if you don't import the parent module, you may miss things that actually doing the import would find.
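As a concrete illustration (the package name and directory layout below are hypothetical, not taken from this issue), consider an __init__ that extends __path__ at import time:

# Hypothetical layout, purely for illustration:
#
#   /path/to/site/mypkg/__init__.py   <- runs: __path__.append("/opt/mypkg-extras")
#   /path/to/site/mypkg/core.py
#   /opt/mypkg-extras/extra.py
#
# walk_packages() only reports "mypkg.extra" because it imports "mypkg" while
# walking, which executes __init__ and hence the __path__ manipulation.

import sys
import pkgutil

sys.path.insert(0, "/path/to/site")  # make the hypothetical tree importable

for finder, name, ispkg in pkgutil.walk_packages(["/path/to/site"]):
    # Yields mypkg, mypkg.core and mypkg.extra; a pure directory scan of
    # /path/to/site would only find the first two.
    print(name)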
The classic example of this is pre-3.3 namespace packages: those work by calling pkgutil.extend_path() or pkg_resources.declare_namespace() from the package's __init__ file, and they can be made to work regardless of whether the parent module is implemented as "foo.py" or "foo/__init__.py". My recollection is that the pkgutil APIs treat that capability as a basic operating assumption: they import everything they find and check it for a __path__ attribute, on the grounds that arbitrary modules *might* set __path__ dynamically.

It would potentially be worthwhile introducing side-effect-free variants of these two APIs: "pkgutil.iter_modules_static()" and "pkgutil.walk_packages_static()" (suggested suffix inspired by "inspect.getattr_static()"). The idea with those would be to report all packages that can be found whilst assuming that no module dynamically adds a __path__ attribute to itself, or alters a __path__ attribute calculated by a standard import hook.

Actually doing that in a generic fashion would require either expanding the APIs for meta_path importers and path import hooks, or else using functools.singledispatch to allow new walkers to be registered for unknown importers and hooks, with the latter approach being closer to the way pkgutil currently works.

----------
nosy: +ncoghlan

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25533>
_______________________________________