> Are you talking about building Docker containers on the fly? 

I’m a bit baffled as to what gave you that idea, after I’ve spent days arguing 
for strict build/runtime separation.

> We use Docker extensively, but our build machine makes images that we push to 
> Dockerhub (private repos).  This has a lot of advantages:
>
> - Our images (on the hub) are effectively pinned at the version they were built.
> - Our test and production servers (can, if we want) always get exactly the same 
>   image (even if we need to rebuild a server months later).
> - We test all our servers, so we only have to manually pin packages (python or 
>   apt) if we run into regressions or other incompatibilities (i.e. an upgraded 
>   package that is no longer compatible with a manually pinned package).
> - Our build machine caches all the intermediate images (i.e. after each docker 
>   step).  We intentionally sequence our images to place oft-changing items at 
>   the end.  Unless I change the list of apt packages, that layer is never 
>   rebuilt.
> - We have an extra step that uploads *just* the requirements files before pip 
>   installing.
> - Our last step is the app code, so changes to this layer are just a cached 
>   layer + PUT (i.e. seconds).
> - This optimization also makes our containers super efficient to upgrade 
>   because we only download the changed layers.
>
> This sounds like it covers a lot of the PEX advantages plus the added 
> benefits of containerization.
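
For anyone following along, the layer ordering described above is the usual 
Dockerfile caching pattern, roughly the sketch below (the base image, paths, 
and app module name are invented for illustration):

    # Rarely-changing layers first: base image and apt packages.
    FROM python:3-slim
    RUN apt-get update && apt-get install -y --no-install-recommends \
            build-essential \
        && rm -rf /var/lib/apt/lists/*

    # Copy *just* the requirements file, so the pip layer stays cached
    # until the dependency list itself changes.
    COPY requirements.txt /app/requirements.txt
    RUN pip install --no-cache-dir -r /app/requirements.txt

    # App code goes last: a code change only rebuilds this final layer,
    # and only this layer has to be pushed/pulled afterwards.
    COPY . /app
    WORKDIR /app
    CMD ["python", "-m", "myapp"]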

I don’t see anything here that contradicts what I (or glyph) have written.  
At this point we were merely discussing what kind of isolated build artifact 
goes into the container/deb: a pex (= a single file) or a vanilla venv (= a 
directory structure).
