Ah, I forgot.

The only drawback here is the 5% permanent idle load per image.
Every image I add puts a 5% load on the server, even when it's just
on standby.

That's a total bummer, though manageable if you have fewer than 10
images like I do :)

Esteban A. Maringolo


2014-10-24 16:29 GMT-03:00 Esteban A. Maringolo <emaring...@gmail.com>:
>>> I'd like to know the development process of others, from SCM to
>>> building, deploying and server provisioning.
>>
>> I would say the standard approach is:
>>
>> - use Monticello with any repo type
>> - split your code into a few big modules, some private, some from public sources
>> - have a single overarching Metacello configuration
>> - use zeroconf to build images
>> - optionally use a CI
>>
>> Note that zeroconf handlers allow you to build images incrementally (the 
>> image is saved after each build), which is way faster than always starting 
>> from scratch.
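>>
>> For example, a minimal incremental build along those lines (the
>> configuration name and repository URL below are placeholders):
>>
>>   # download VM + image via zeroconf
>>   curl get.pharo.org/30+vm | bash
>>   # load your Metacello configuration, then save the image in place
>>   ./pharo Pharo.image eval "Metacello new
>>     configuration: 'MyApp';
>>     repository: 'http://smalltalkhub.com/mc/Me/MyApp/main';
>>     load.
>>   Smalltalk snapshot: true andQuit: true"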
>
> I'm good then, I'm doing everything you listed except CI.
>
>>> After a year of Pharo development I think I'm ready to embrace a CI
>>> server (I already use scripts to build images), but I think I will
>>> move all my repositories to git first.
>>
>> These are orthogonal decisions; most CI jobs on the Pharo contribution 
>> server run against StHub.
>
> I want git because I want to use BitBucket to store my code :)
>
>>> However, my remote server provisioning is still manual, and too
>>> rudimentary even for my own taste. If I could speed up this, I would
>>> deliver features faster to my customers. Now everything runs inside a
>>> two-week sprint window.
>
>> I am not into provisioning myself, but more automation is always good, 
>> though sometimes setting up and maintaining all these things takes a lot of 
>> time as well.
>
> It does take time, and "in theory" it works. I have this printed out
> on my desk: http://xkcd.com/1319/ :)
>
>
> Wrapping up, I implemented your no-commandline handler solution.
>
> I added a few servers to the upstreams of my site's nginx configuration,
> like this:
>
> upstream seaside {
>   ip_hash;
>   server 127.0.0.1:8080;
>   server 127.0.0.1:8081;
>   server 127.0.0.1:8082;
>   server 127.0.0.1:8083;
> }
>
> upstream apimobile {
>   server 127.0.0.1:8180;
>   server 127.0.0.1:8181;
> }
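>
> For completeness, the matching proxy rules look something like this
> (paths are illustrative; a real config would also set the usual proxy
> headers):
>
> server {
>   listen 80;
>   location / {
>     # ip_hash above pins each client to one image, which Seaside's
>     # in-image session state needs
>     proxy_pass http://seaside;
>   }
>   location /api {
>     proxy_pass http://apimobile;
>   }
> }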
>
>
> And added the following to my supervisord.conf [1]:
>
> [program:app_webworker]
> command=/home/trentosur/app/pharo app.image webworker.st 808%(process_num)1d
> process_name=%(program_name)s_%(process_num)02d ; process_name expr (default %(program_name)s)
> numprocs=4
> directory=/home/trentosur/app
> autostart=true
> autorestart=true
> user=trentosur
> stopasgroup=true
> killasgroup=true
>
> [program:app_apiworker]
> command=/home/trentosur/app/pharo app.image apiworker.st 818%(process_num)1d
> process_name=%(program_name)s_%(process_num)02d ; process_name expr (default %(program_name)s)
> numprocs=2
> directory=/home/trentosur/app
> autostart=true
> autorestart=true
> user=trentosur
> stopasgroup=true
> killasgroup=true
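>
> After editing the config, something like this makes supervisord pick
> up the changes:
>
>   supervisorctl reread
>   supervisorctl update
>   supervisorctl status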
>
>
> Then this spawns [numprocs] monitored processes, using the process
> number within the pool (process_num) to build the port each worker
> listens on, which is passed as a parameter to the startup script;
> e.g. 808%(process_num)1d expands to ports 8080-8083, matching the
> seaside upstream above.
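>
> For reference, webworker.st itself only has to read that trailing
> argument and start the adaptor on it; a minimal sketch, assuming the
> port ends up as the last command-line argument and Seaside runs on
> the Zinc adaptor:
>
>   | port |
>   "the port supervisord appended to the command line"
>   port := Smalltalk arguments last asNumber.
>   ZnZincServerAdaptor startOn: port.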
> The good thing here, IMO, is that I can add more workers without
> having to modify anything on the Pharo side; that is, I can delegate
> this to regular sysadmin work :)
>
>
> Best regards,
>
>
> Esteban A. Maringolo
>
> [1] http://supervisord.org/
