On Nov 27, 11:42 am, Viktor Kerkez <[EMAIL PROTECTED]> wrote:
> Here is the situation:
>
> $ ls
> test
> $ cd test
> $ ls
> __init__.py  data.py
> $ cat __init__.py
>
> $ cat data.py
> DATA = {}
>
> $ cd ..
> $ python
> >>> import os
> >>> from test.data import DATA
> >>> DATA['something'] = 33
> >>> os.chdir('test')
> >>> from data import DATA as NEW_DATA
> >>> DATA
> {'something': 33}
> >>> NEW_DATA
> {}
>
> Is this a bug?
No, because you've actually imported two different modules, even though they are the same file on disk. For the second import statement, Python imports the file again because it was reached through a different absolute path; the module "data" is always distinct from the module "test.data", even when both refer to the same file.

However, I'm not so sure the effect of os.chdir() on the import path is a good idea. Among people who use os.chdir() in their programs, some will want the import path to change with it and some will not. You can't please everyone, so I would suggest choosing in favor of limiting context sensitivity. I like to think that "import abc" always does the same thing regardless of any seemingly unrelated state changes in my program, especially since, as the OP pointed out, import is used as a means to ensure singleness. Thus, if I were designing the language, I would have sys.path[0] be the current working directory at program start.

To the OP: My first suggestion is to consider not using os.chdir() in your programs at all, and instead to construct pathnames with the directory included. If you do use os.chdir(), then early in your script add a line such as "sys.path[0] = os.getcwd()". Then, no matter where you are, always import the file relative to the starting directory -- that is, always use "from test.data import DATA", even after you os.chdir().

Carl Banks
--
http://mail.python.org/mailman/listinfo/python-list
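For anyone who wants to see the effect without recreating the OP's directories by hand, here is a self-contained sketch. It builds an equivalent package layout in a temporary directory (the names "pkg" and "data" are illustrative, not the OP's actual files) and shows that the same data.py, imported under two different module names, produces two distinct module objects with two distinct DATA dicts:

```python
import os
import sys
import tempfile

# Build a throwaway package:  <tmp>/pkg/__init__.py  and  <tmp>/pkg/data.py
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "pkg")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "data.py"), "w") as f:
    f.write("DATA = {}\n")

sys.path.insert(0, tmp)            # makes "pkg" importable
from pkg.data import DATA          # first import: registered as "pkg.data"
DATA["something"] = 33

sys.path.insert(0, pkg)            # now the bare "data" module is importable,
from data import DATA as NEW_DATA  # so Python loads the same file AGAIN as "data"

print(DATA)       # {'something': 33}
print(NEW_DATA)   # {}  -- a fresh dict from the second, separate module
print(sys.modules["data"] is sys.modules["pkg.data"])  # False: two modules
```

The sys.modules check at the end is the key point: Python's cache is keyed by module *name*, not by file path, so "data" and "pkg.data" each get their own copy of the module's globals.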