> On Feb 25, 2017, at 9:03 AM, Clayton Daley <clayton.da...@gmail.com> wrote:
>
> Are you talking about building Docker containers on the fly?

Pretty sure Hynek has proper build-server/production separation.

> We use Docker extensively, but our build machine makes images that we push
> to Dockerhub (private repos). This has a lot of advantages:
>
> - Our images (on the hub) are effectively pinned at the version they were
>   built
> - Our test and production servers (can, if we want) always get exactly the
>   same image (even if we need to rebuild a server months later)
> - We test all our servers so we only have to manually pin packages (python
>   or apt) if we run into regressions or other incompatibilities (i.e. an
>   upgraded package that is no longer compatible with a manually pinned
>   package)
> - Our build machine caches all the intermediate images (i.e. after each
>   docker step). We intentionally sequence our images to place oft-changing
>   items at the end.
>   - Unless I change the list of apt packages, that layer is never rebuilt.
>   - We have an extra step that uploads *just* the requirements files before
>     pip installing
>   - Our last step is the app code so changes to this layer are just a
>     cached layer + PUT (i.e. seconds)
> - This optimization also makes our containers super efficient to upgrade
>   because we only download the changed layers
>
> This sounds like it covers a lot of the PEX advantages plus the added
> benefits of containerization.
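(As a concrete illustration of the layering Clayton describes, a minimal
Dockerfile sketch might look like the following: apt packages first, then
just the requirements file and pip install, then the app code last. The base
image, paths, and entry point below are placeholders, not details from
Clayton's actual setup.)

    FROM python:3.6-slim  # illustrative base image

    # Rarely-changing layer: apt packages. This layer is only rebuilt when
    # the package list itself changes.
    RUN apt-get update && \
        apt-get install -y --no-install-recommends build-essential && \
        rm -rf /var/lib/apt/lists/*

    # Copy *just* the requirements file, so the pip-install layer stays
    # cached until the dependency list itself changes.
    COPY requirements.txt /app/requirements.txt
    RUN pip install --no-cache-dir -r /app/requirements.txt

    # App code goes last: edits here only invalidate this final layer, so a
    # rebuild is mostly cached layers plus one small upload.
    COPY . /app
    WORKDIR /app
    CMD ["python", "-m", "myapp"]  # hypothetical entry point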
Pex and containerization are completely orthogonal. You can use pex inside
or outside of a container, and you can use a container with or without pex.

Hynek lays out a good case against pex inside containers, but the blog post
from Moshe that I linked to earlier lays out a good reason to use them
together.

-glyph