On 2/04/19 1:56 PM, Cameron Simpson wrote:
On 02Apr2019 13:14, DL Neil <pythonl...@danceswithmice.info> wrote:
On 2/04/19 11:57 AM, Cameron Simpson wrote:
On 29Mar2019 09:32, DL Neil <pythonl...@danceswithmice.info> wrote:
Do you 'keep' these, or perhaps next time you need something you've 'done before' do you remember when/where a technique was last used, and burrow into 'history'?
(else, code it from scratch, all over again)

I didn't say it before, but I _loathe_ the "files with code snippets" approach. Hard to search, hard to reuse, promotes cut/paste coding which means that bugfixes _don't_ make it into all the codebases, just the one you're hacking right now.

...and I didn't say it earlier, but *my* unit-of-code in this conversation is a function/module and maybe a whole class. However, that's me, and directly relates to my background with "subroutine libraries" etc.

I'm not sure how much code can still be defined as a "snippet". This idea - how small is too small, plus how large is too large - was intended to be a part of the conversation...

As far as having unlabelled units of code, describable only as 'a bunch of lines which go-together': I agree, the cost of searching probably exceeds most people's will to live. I guess it is like many other decisions: do I spend time now, in the hopes of saving time later...

Yes, the inevitable evolution of these units of code creates a maintenance issue in and of itself (another conversational topic I was hoping ppl might explore). Personally I don't trust Git sufficiently, and have started adding version numbers to the file names, so that whereas system-X was written to import widgetv1.py, I can update it to widgetv2.py as part of system-Y's dev process, and add the possibly/not-needed update to system-X's backlog.
(well, best laid plans of mice and men...)
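
For illustration, the aliasing version of that idea (module names hypothetical, borrowed from the widgetv1/widgetv2 example above) keeps the rest of the importing system untouched when a new version is adopted:

    # system-X pins the snippet version it was tested against, bound to
    # a stable local name; only this one line changes for a new version.
    import widgetv1 as widget   # system-Y would instead say: import widgetv2 as widget

so the backlog item for system-X becomes a one-line edit plus re-testing.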

However, in my last greenfield project, everything included was taken as-is and not 'improved'. That might indicate a degree of 'maturity' and RoI. Alternatively, that the project was not particularly demanding (it was not!)

The other thought I have, in my learning to embrace OOP later in life, is that if the 'snippet' is a class, and extensions are needed, perhaps a sub-class will side-step such issues, per your warning?
(more best-laid plans, hopes, dreams...)
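
A sketch of that subclassing idea (names hypothetical): the shared class stays frozen, and the project-specific extension lives with the project that needs it:

    class Widget:
        """Shared 'snippet' class, used as-is by system-X."""
        def render(self) -> str:
            return "widget"

    class FancyWidget(Widget):
        """System-Y's extension: adds behaviour without editing the
        original, so system-X never receives an unwanted change."""
        def render(self) -> str:
            return "*" + super().render() + "*"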


I put them into modules for import. I've got a tree of Python modules named "cs.this", "cs.that" for various things. Then just import stuff like any other library module.
Agreed - but Python doesn't (natively) like imports from a different/parallel/peer-level dir-tree. (see also Apache httpd)

Well, you have several options:
- modify $PYTHONPATH
- make a virtualenv
- make a symlink; Python imports from the current directory - not my  favourite feature, but very handy in the dev environment
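
For completeness, the $PYTHONPATH effect can also be had at run time - a minimal sketch, assuming a hypothetical peer-level 'libs' directory beside the application's own:

    import sys
    from pathlib import Path

    # Prepend the peer-level directory so its modules become importable.
    sys.path.insert(0, str(Path(__file__).resolve().parent.parent / "libs"))
    import widget   # hypothetical module living in ../libs/widget.py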

Yes, thanks for this. Ideas tossed-around earlier.

As you will have experienced, until recently, virtual Python environments had their disadvantages and quirks. I was one of the grumblers.

What came along before the recent Py3.? improvements was VirtualBox. This became my preferred virtual modus-operandi, because it very cleanly separates one client/department/project from another (and separates non-Python components, eg databases and entire file systems). Thus, also Python app-v2 from app-v1! If I've managed to convince the client to also run their live-system under VBox, then most of the delivery/memory problems discussed 'go away'! Sadly nirvana lies just out there, somewhere beyond tomorrow... (many of these systems sit on a central 'server' somewhere, likely sharing a central DB and/or source-data - which is a bit different from desktop systems delivered to nn-hundred users; plus web-server projects which are similarly 'centralised')


For personal projects (if they aren't just part of that tree) I just need to symlink the tree into the local Python library as "cs".
I was doing this.
This works really well for me. In particular, in my development tree I symlink to my _dev_ "cs" tree, so that if I modify things (fix a bug, add a feature etc) to a cs.* module the change is right there in the repo where I can tidy it up and commit it and possibly publish it.
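
Something like this pathlib sketch (paths hypothetical) sets that arrangement up, so fixes made while working on the project land directly in the dev repo:

    from pathlib import Path

    dev_tree = Path.home() / "hg" / "css" / "lib" / "cs"   # hypothetical dev checkout
    project_lib = Path("myproject") / "lib" / "cs"
    if not project_lib.exists():
        project_lib.symlink_to(dev_tree)   # project imports now see the dev tree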

That ability to update the 'repo' as required by the application-dev task is key.


Much of my work is simple-delivery, i.e. copying code from my dev m/c to the client's PC or server - so I don't 'package it up'* because of the (perceived) effort required cf the one-to-one (or maybe a couple) machine relationships.
I'm doing something similar right now, but where I've used a cs.* module in their code it will go through PyPI as part of the prepare-for-deploy process so that they can reproduce the build themselves.

The last step is one I can avoid. In fact, most clients are keen for me to do all the computer-technical-stuff, so they can concentrate on their research/numbers... ("one-off tasks", discussed earlier)

However, just because it doesn't affect *me* today, still offers a learning experience!


In my case the deploy.sh script makes a directory tree with a copy of the released code (from a repo tag name - "hg archive" for me, there's an equivalent git command).  That tree contains a prep.sh script which runs on the target machine to build the virtualenv and likewise the javascript side which is used for the web front end.
So deploy for me is:
- get the code ready (committed, tagged) at some suitable good phase
- run "./deploy.sh release-tag" which makes a deployable tree
- rsync onto the target (into a shiny new tree - I'll twiddle a symlink  when it goes "live")
- run "./prep.sh" in the target, which fetches components etc
The advantage of the prep.sh step is that (a) the target matches the target host (where that matters, for example the local virtualenv will run off the target host Python and so forth), and _also_ (b) it serves as doco for me on how to build the app: if prep.sh doesn't work, my client doesn't have a working recipe for building their app. I still need to add some prechecks to the deploy, like "is the venv requirements file committed to the repo" etc.
However, during 'delivery' to another machine, I have to remember to rsync/copy including the symlink, as well as to transfer both dir-trees.
By making a local virtualenv and getting my modules through PyPI (pip install) this issue is bypassed: there's the client code library (rsynced) and the third party modules fetched via PyPI. (Not just my cs.* modules if any, but also all the other modules required.)
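
A minimal Python rendition of what such a prep step might do - the real prep.sh is shell, so this is only a sketch, assuming a pinned requirements.txt ships with the tree:

    import subprocess
    import sys
    from pathlib import Path

    # Build a local virtualenv off the target host's own Python...
    venv_dir = Path("venv")
    if not venv_dir.exists():
        subprocess.run([sys.executable, "-m", "venv", str(venv_dir)], check=True)

    # ...then install the pinned third-party modules into it.
    pip = venv_dir / "bin" / "pip"   # Scripts/pip.exe on Windows
    subprocess.run([str(pip), "install", "-r", "requirements.txt"], check=True)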

This makes perfect sense. I'll have to sit down and map-out how it would work/what would be needed, to (re-)implement a recent project, by way of comparison. Thanks!


Recently, I stopped to organise the code into (more) modules (as also suggested here) and moved to adding the 'utilities' directories into PYTHONPATH. Now I have to remember to modify that on the/each target m/c!
Script it. Include the script in the rsync.

But (devil's advocating), surely the PYTHONPATH approach is more 'pythonic'?


If I get something well enough defined and sufficiently cleaned up for use beyond my own code (or just good enough that others might want it), up it goes to PyPI so that it can just be "pip install"ed. So I've got this huge suite of modules with stuff in them, and a subset of those modules are published to PyPI for third party reuse.
Am dubious that any of the 'little bits' that I have collected are sufficiently worthy, but that's 'proper' F/LOSSy thinking!

If you're reusing your little bits then they need organisation into modules so that you can include them with the main code without treading on others' namespaces.

+1
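
The shape of that organisation, with a hypothetical personal namespace - one top-level package keeps every reused bit clear of third-party names:

    # Hypothetical layout:
    #
    #   dlneil/
    #       __init__.py
    #       text.py    # string helpers that used to be 'snippets'
    #       files.py   # path/file helpers
    #
    # Client code then imports unambiguously:
    from dlneil.text import slugify   # hypothetical helper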


Publishing to PyPI is a very, very optional step; the critical thing is breaking stuff up into modules; rsyncing them is then easy and you have a common codebase of what _were formerly_ snippets for reuse.

Plus client concerns!

Might a private GitHub substitute for PyPI?
(at the expense of the convenience of pip... though pip can in fact install straight from a Git URL)


Also, putting them in modules and using them that way forces you to make your snippets reusable instead of cut/paste/adaptable. Win win.

+1


*will be interested to hear if you think I should stop 'being lazy' and invest some time-and-effort into learning 'packaging' options and do things 'properly'/professionally...
There's real pain there. The first time I did this (2015?) I basically spent a whole week right after new year figuring out how to make a package and push it up; the doco was fragmented and confusing, and in some cases out of date, and there is/was more than one way to do things.

...as above. However, I wanted to learn, so I asked...


These days things are somewhat cleaner: start here:
  https://packaging.python.org/

Thanks!
(added to my reading list)
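
For flavour, the bare bones of the classic setuptools route (all metadata hypothetical; packaging.python.org covers the current recommendations in full):

    # setup.py
    from setuptools import setup, find_packages

    setup(
        name="dlneil-utils",
        version="0.1.0",
        packages=find_packages(),
        python_requires=">=3.6",
    )

An sdist/wheel built from that can then be uploaded to PyPI with twine.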


One of the points which intrigue me is that my colleagues don't keep snippets/a library, preferring to remember (hah!) when/where they used particular techniques in the past, and copying/duplicating, to fit the new system's requirements. Am wondering if this is true beyond our little band?
Among random people I suspect it is quite common. In larger teams you accrue a utilities library which contains this stuff.

Thank you! Yes, that simple observation probably explains quite a few of the different views with which I have challenged my (younger/more-recently trained/OOP-native) colleagues. Group-working cf programming as a solitary activity! (I had puzzled over the observation that this old 'stick-in-the-mud'* often comes out with ideas 'from the past' which are components of OOP, eg "re-use"; rather than the OOP-natives already applying them as SOP)

* one of the many, vaguely-insulting, comments sometimes levelled in my direction - especially when I pose such questions... They are non-PC but are not taken to be particularly hurtful - part of a humor-filled, rough-and-tumble team. Besides, one of 'them' taking the mickey simply invites me to riposte with a comment about 'those of us who have had time to grow up'. Much raucous laughter, rather than serious intent!


Yet, here on the list, interest was shown in 'being organised' (even if few have actually weighed-in)...
Get your reused code organised - it is a huge win. At least make a little library of modules in its own namespace (so you can just import it without conflict). What automatic/published/distribution processes you follow after that is your choice. But having that library? Invaluable.

Agreed. Thank you for adding motivation!

--
Regards =dn