Hi Carlos and Everyone, 

In my work, I currently manage 6 production instances of GNU Health
and 4 demo/test instances. All this is happening while we're still
developing new modules and doing maintenance and bug-fixes on what is
already in production. The best part is that we have an EXTREMELY small
team handling the technical details (installation, backup, upgrades, etc.).

While all this is happening, other stakeholders are making plans to
roll out new instances in other locations, and those new instances
will need to be managed and updated like all the others.

This is how we do it:

      * we use virtualenv
      * for each major version of GNU Health or of our own modules, we
        use a different virtualenv
      * they are all located in /opt/health/envX.Y, where X.Y is our
        version number. Internally we keep a mapping of our versions
        to GNU Health versions, e.g. our version 1.2 depends on
        modules from GNU Health 2.8, our 1.0 on GNU Health 2.6, etc.
      * we create a gnuhealth user with home directory
        /var/local/gnuhealth
      * the virtualenv in /opt/... is owned by root or by another user
        (e.g. healthadm), as an additional security measure
      * everything is installed and updated using pip, even our
        internal private packages that are not on PyPI (see the first
        sketch after this list)
      * we have an internal Python package server[1] that
        automatically mirrors packages from pypi.python.org
      * for packages that we upload ourselves, it does not try to
        mirror anything
      * GNU Health itself we download as source and upload to our
        internal package server; this way, we control the version that
        gets installed on our servers
      * for each type of instance, we have a profile module that
        depends on the required modules, e.g. moh_health_centre_profile
        or moh_hospital_profile (see the second sketch after this list)
      * even without updating the moh_xyz_profile module itself, we
        can call pip install -U trytond_moh_xyz_profile and it will
        install updates for all dependencies where they exist
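
As a first sketch, setting up one of the versioned environments looks
roughly like this. The host name "pypi.internal" is a placeholder for
our localshop server, and the package name is only illustrative:

    # One virtualenv per release, installed from the internal index
    # ("pypi.internal" is a placeholder for the localshop host).
    virtualenv /opt/health/env1.2
    /opt/health/env1.2/bin/pip install \
        --index-url https://pypi.internal/simple/ \
        trytond_moh_hospital_profile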
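
The second sketch shows the profile idea. A profile is an ordinary
Python package whose setup.py does nothing but pin dependencies; the
module names and version pins below are made up for illustration, not
our actual manifest:

    # A profile's setup.py carries no code of its own, only
    # dependency pins, roughly:
    #
    #     from setuptools import setup
    #     setup(
    #         name='trytond_moh_hospital_profile',
    #         version='1.2.0',
    #         install_requires=[
    #             'trytond_health == 2.8.*',
    #             'trytond_health_inpatient == 2.8.*',
    #         ],
    #     )
    #
    # Upgrading the profile makes pip re-resolve install_requires, so
    # pinned dependencies are updated even when the profile package
    # itself has not changed:
    /opt/health/env1.2/bin/pip install -U trytond_moh_hospital_profile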


Backups are done using rsync and barman[2].
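
Day to day, that amounts to something like the following sketch. The
server name "health-prod", the backup host, and the paths are
placeholders, not our actual setup:

    # File-level copy of the gnuhealth home (attachments etc.)
    rsync -a /var/local/gnuhealth/ backuphost:/backups/gnuhealth/

    # Scheduled base backup of the PostgreSQL server via barman,
    # plus the commands used when a restore is needed
    barman backup health-prod
    barman list-backup health-prod
    barman recover health-prod latest /var/lib/pgsql/data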

So, while I've never used Vagrant, I can understand your desire for
"infrastructure as code". If you follow our use of pip as laid out
above, you should be able to reliably control updates to specific
modules from within your Vagrant scripts.
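
For instance, a provisioning step could pin exact versions against
the internal index; this is only a sketch, and the index URL and
version pin are made up:

    # provision.sh (illustrative): pip with a pinned version against
    # the internal index gives reproducible installs on every box
    /opt/health/env1.2/bin/pip install \
        --index-url https://pypi.internal/simple/ \
        'trytond_moh_hospital_profile == 1.2.0'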

The reasoning behind using a different virtualenv for each version is
to allow an easy rollback if the upgrade fails at the database upgrade
stage. If that happens, we restore the database that was backed up
before the upgrade and simply start trytond from the old virtualenv
while we go off to figure out the issue on a non-production machine.
This way, we have had very little downtime.
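
In practice the rollback amounts to something like this; the database
name, dump file, and config path are illustrative:

    # Put back the pre-upgrade dump (taken here with pg_dump -Fc)
    dropdb health_prod
    createdb health_prod
    pg_restore -d health_prod /backups/health_prod-pre-upgrade.dump

    # Start trytond from the previous version's virtualenv
    /opt/health/env1.0/bin/trytond -c /etc/trytond.conf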

I hope this helps. 

[1] Our internal PyPI server is an installation of localshop:
https://pypi.python.org/pypi/localshop
[2] Barman is an awesome tool for PostgreSQL backups:
http://www.pgbarman.org/

---
MM

On Tue, 2016-04-05 at 15:17 -0500, Carlos Eduardo Sotelo Pinto wrote:
> Hi
> 
> 
> I am looking for something like that. I want to have the whole
> deployment [ install / upgrade / upgrade ] as code, using a box like
> Vagrant or Docker, in order to have continuous integration and
> continuous delivery.
> 
> 
> I have a script for Tryton working on that using Vagrant, but any
> change that is made cannot be kept as part of the process without
> editing the deployment scripts
> 
> 
> Best regards
> 
> 
> 
> 2016-04-05 14:27 GMT-05:00 Axel Braun <axel.br...@gmx.de>:
> 
>         Hi,
>         
>         On Tuesday, 5 April 2016, 13:59:56, Carlos Eduardo Sotelo
>         Pinto wrote:
>         >
>         > Yes, you are right; however, it is far from having
>         > "infrastructure as code". What I mean is having Health
>         > working in a Continuous Delivery and Continuous Integration
>         > way, considering that implementing an ERP requires more
>         > than just installing it.
>         
>         Definitely, continuous adaptation is part of this.
>         
>         I used to run a production Tryton environment, and my goal
>         was always to have as little administrative effort as
>         possible. So for the Tryton system we used packages from OBS
>         (build.opensuse.org).
>         
>         The enhancements were implemented as a separate package,
>         built on OBS as well, but without disclosing it to the
>         public.
>         
>         Both are installed with the system package management.
>         Updates to the base package automatically trigger a rebuild
>         of the depending packages, so the whole environment is
>         always up to date, with minimal effort.
>         
>         Just as an additional thought,
>         Axel
>         
> 
> 
> 
> 
> -- 
> 
> Carlos Eduardo Sotelo Pinto
>     Senior Software Analyst Developer
>     Claro RPC +51983265994 | MOV RPM( # ) +51 966110066
>     GTalk: carlos.sotelo.pi...@gmail.com | Skype: csotelop
>     GNULinux RU #379182 | GNULinux RM #277661
> 
> 
> No availability between 08:30 and 18:00; I will answer as soon as
> possible
> 
> 
> 
