On Tue, 10 Jan 2017, Mohammed Naser wrote:

We use virtual hosts: haproxy runs on our VIP on ports 80 and 443
(SSL), with keepalived to make sure it’s always running, and we use
`use_backend` to send requests to the appropriate backend. More
information here:

http://blog.haproxy.com/2015/01/26/web-application-name-to-backend-mapping-in-haproxy/
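
A minimal sketch of that kind of host-to-backend mapping (haproxy
1.5 or newer; every address, name, and path below is invented for
illustration, with 203.0.113.10 standing in for the
keepalived-managed VIP):

    # /etc/haproxy/haproxy.cfg (fragment)
    frontend public
        bind 203.0.113.10:80
        bind 203.0.113.10:443 ssl crt /etc/haproxy/certs/
        # pick the backend from the Host header; unknown hosts go to be_default
        use_backend %[req.hdr(host),lower,map(/etc/haproxy/hosts.map,be_default)]

    backend be_keystone
        server keystone01 10.0.0.11:5000 check
        server keystone02 10.0.0.12:5000 check

    # /etc/haproxy/hosts.map (host -> backend, one pair per line)
    identity-regionone.example.com    be_keystone
    compute-regionone.example.com     be_nova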

Thanks for writing about this; the way you're doing things is
deliciously sane. When this discussion initially came up I was
surprised to hear that people were deploying with any correspondence
between what they had in the service catalog and the explicit
(internal) hosts and (internal) ports on which they were deploying
the services. Your model is what I've been assuming people would do
(and, it turns out, actually do):

* host the WSGI applications somewhere (anywhere)
* have front end proxies / load balancers / HA services dispatching
  to those backends based on either host name or a prefix on the URL
  (a prefix-based sketch follows this list)
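
For the URL-prefix flavour the same idea works with path matching
instead of the Host header; again a sketch, with invented prefixes
and backend names:

    frontend public
        bind 203.0.113.10:443 ssl crt /etc/haproxy/certs/
        acl is_identity path_beg /identity
        acl is_compute  path_beg /compute
        use_backend be_keystone if is_identity
        use_backend be_nova     if is_compute
        default_backend be_default

Either way the backends themselves can listen on whatever internal
host and port the deployer likes.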

This means that the configured listening host and port in something
like puppet-placement's actual installation of the service is very
likely completely different from what shows up in whatever is
writing the service catalog.

It makes our catalog nice and neat: we have a
<service>-<region>.vexxhost.net internal naming convention, so the
catalog looks clean and the API calls don’t get blocked by firewalls
(the non-standard ports might be blocked on some customer-side
firewalls).

[catalog snipped]

I’d be more than happy to give my comments, but I think this is the
best way.  Prefixes can work too and would make things easy during
dev, but in a production deployment I would rather not deal with
something like that.  Also, all of those are CNAME records pointing
to api-<region>.vexxhost.net, so it is easy to move things over if
needed.  I guess the only problem is the DNS setup overhead.
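
Sketched as a zone fragment (placeholder names, not the real
records):

    ; hypothetical fragment of an example.com zone
    api-regionone        IN A      203.0.113.10
    identity-regionone   IN CNAME  api-regionone.example.com.
    compute-regionone    IN CNAME  api-regionone.example.com.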

The reason for starting to use prefixes in devstack is that they
are easy to manage when there's just the one running apache and
modifying the /etc/hosts table was not considered. Since this topic
came up there's been discussion of adding a host entry for each
service to /etc/hosts as a way of allowing a different virtual host
for each service, all on the same port. This allows for the desired
cleanliness and preserves separate log files for each service (when
using prefixes, it is harder to manage the error logs).
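
For instance, a hypothetical all-in-one devstack node could carry
entries like:

    # /etc/hosts (hypothetical devstack host, names invented)
    127.0.0.1   localhost
    10.0.2.15   devstack keystone.devstack nova.devstack placement.devstack

so apache can use name-based virtual hosts (one per service, each
with its own logs) while everything still answers on port 80.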

These concerns are specific to devstack and don't apply in "real"
installations, where having a reverse proxy of some kind is the
norm.

So to bring this back round to puppet and ports: should puppet be
expressing a default port at all? It really depends on whether the
intention is to allow multiple services to run in the same web
server on the same host, how logging is being managed, whether
apache is being used, etc.

Should each service have a prescribed default port to avoid
collisions? I think not. I think the ports that the services run on,
as exposed to the users, should always be 80 and 443 (so no need to
define a port, just a scheme) and the internal ports, if necessary,
should be up to the deployer and their own internal plans. If we
define a default port, people will use it and expose it to users.
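
To make that concrete with a hypothetical pair:

    public endpoint (what goes in the catalog):  https://compute-regionone.example.com/
    internal backend (deployer's choice):        http://10.0.0.21:8080  (never published)

The catalog only ever needs the scheme and the host.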

imho, iana(deployer), ymmv, etc

--
Chris Dent                 ¯\_(ツ)_/¯           https://anticdent.org/
freenode: cdent                                         tw: @anticdent
