"bukzor" <workithar...@gmail.com> wrote

Let's walk through it, to make it more concrete:
  1) we have a bunch of scripts in a directory
  2) we organize these scripts into a hierarchy of directories. This
works, except where scripts use code that lives in a different
directory.
  3) we move the re-used code causing the issue in #2 to a central
'lib' directory. For this centralized area to be found by our
scripts, we need to do one of the following:
     a) install the lib to site-packages. This is unfriendly for
development, and impossible in a corporate environment where the IT-
blessed python installation has a read-only site-packages.
     b) put the lib's directory on the PYTHONPATH. This is somewhat
unfriendly for development, as the environment will sometimes be
incorrect or unset. This goes doubly so for users.
     c) change the cwd to the lib's directory before running the tool.
This is heinous in terms of usability. Have you ever seen a tool that
requires you to 'cd /usr/bin' before running it?
     d) (eryksun's suggestion) create symlinks to a "loader" that
exists in the same directory as the lib. This effectively puts us
back at #1 (flat organization), with the added disadvantage of
obscuring where the code actually lives.
     e) create custom boilerplate in each script that addresses the
issues in a-d. This seems to be the best practice at the moment (a
sketch of such boilerplate follows below)...
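
For the record, the boilerplate in (e) commonly looks something like
this. A minimal sketch only - the relative location of 'lib' and the
module name 'mylib' are assumptions for illustration, not part of
the setup described above:

   # Boilerplate at the top of each script: make the shared 'lib'
   # directory importable regardless of cwd or PYTHONPATH.
   import os
   import sys

   # Absolute path of the directory containing this script.
   _here = os.path.dirname(os.path.abspath(__file__))

   # Assume 'lib' sits one level above this script's directory;
   # adjust the relative path to match your layout.
   sys.path.insert(0, os.path.normpath(os.path.join(_here, '..', 'lib')))

   import mylib   # hypothetical module in the central 'lib'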

Disclaimers -

1. I don't know if this will solve your problem.
2. Even if it does, I don't know if this is good practice - I suspect not.

I put the following lines at the top of __init__.py in my package directory -
   import os
   import sys
   # prepend this package's own directory to the module search path
   sys.path.insert(0, os.path.dirname(__file__))

This prepends the package directory to the module search path (sys.path).

In your scripts you have to 'import' the package first, to ensure that these lines get executed.
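
For example, with an illustrative layout like this (all names
assumed for the example, not taken from the original posts):

   project/
       mypackage/
           __init__.py    (contains the lines above)
           utils.py
       tool.py

tool.py can then do:

   # Assumes tool.py is run from within 'project/', so that
   # 'mypackage' itself is importable in the first place.
   import mypackage   # runs __init__.py, which prepends the
                      # package directory to sys.path
   import utils       # utils.py lives inside mypackage/, but is
                      # now importable as a top-level module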

My 2c

Frank Millman

