I have a collection of command line scripts that share a set of common 
modules. This code is for internal use only and will run under a single 
version of Python (3.6 or later) on a single OS. My understanding of best 
practice is to organize these Python files into a folder structure like this:

# common files
.gitignore
readme.md
requirements.txt
setup.py  <--- what is the advantage of this file for internally 
distributed code bases? (a rough sketch of what I imagine it would 
contain follows the tree below)

# app specific package folders
app_1
    __init__.py (optional; if needed)
    __main__.py
    app_1_module_1.py
    app_1_module_2.py
    app_1_module_N.py

app_2
    __init__.py (optional; if needed)
    __main__.py
    app_2_module_1.py
    app_2_module_2.py
    app_2_module_N.py

# modules shared across multiple apps
common
    common_module_1.py
    common_module_2.py
    common_module_N.py

# tests - place at package level with sub-packages for each package -OR- 
underneath each app package?
tests
    app_1
         test_app_1_module_1.py
         test_app_1_module_2.py
         test_app_1_module_N.py
    app_2
         test_app_2_module_1.py
         test_app_2_module_2.py
         test_app_2_module_N.py

# virtual env folder placed at same level as packages ???
venv
    <virtual-env files go here>
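
For reference, if setup.py does make sense here, this is roughly what I 
imagine it would contain -- the project name, script name, and main() 
entry point below are placeholders, not real names:

    # setup.py (rough sketch; names are placeholders)
    from setuptools import setup, find_packages

    setup(
        name="internal-tools",
        version="0.1.0",
        packages=find_packages(exclude=["tests", "tests.*"]),
        python_requires=">=3.6",
        entry_points={
            "console_scripts": [
                # after "pip install -e ." this would put an app1
                # command on the PATH that calls app_1/__main__.py's main()
                "app1=app_1.__main__:main",
            ],
        },
    )

My (possibly wrong) understanding is that the main win for an internal 
code base is being able to run "pip install -e ." into the venv, so that 
app_1, app_2 and common are importable from anywhere without sys.path 
hacks, and each app can get a proper console command.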

And execute each app via the following ...

python -m app_1 <optional-parameters ...>
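
For what it's worth, I picture each __main__.py being something small 
like this (just a sketch; the run() and setup_logging() functions are 
invented, and it assumes the project root is on sys.path, e.g. by 
running from that directory):

    # app_1/__main__.py  (sketch; the imported helper names are invented)
    import sys

    from app_1 import app_1_module_1
    from common import common_module_1

    def main(argv=None):
        # hand the command line arguments to the app's real logic
        argv = sys.argv[1:] if argv is None else argv
        common_module_1.setup_logging()   # invented shared helper
        return app_1_module_1.run(argv)   # invented app entry point

    if __name__ == "__main__":
        sys.exit(main())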

Questions

1. Does the above structure sound reasonable?
2. Where to place virtual env files and what to call this folder? venv, .env, 
etc?
3. Where to put tests (pytest)? In a tests folder or under each package? 
(a short example of the kind of test file I mean follows the questions)
4. Use a src folder or not? If so, where to put above files relative to the src 
folder?
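
In case it clarifies question 3, a typical test file would look roughly 
like this (again a sketch with made-up names; it assumes the packages 
are importable, e.g. via "pip install -e ." or a conftest.py at the 
project root):

    # tests/app_1/test_app_1_module_1.py  (sketch; names are made up)
    from app_1 import app_1_module_1

    def test_run_with_no_arguments_returns_zero():
        # pytest collects test_* functions in test_*.py files automatically
        assert app_1_module_1.run([]) == 0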

Malcolm
