1) Somebody suggests using e.g. Ansible to update static tables; but that requires restarting or triggering a reload in some fashion, does it not?

2) Plus there is the matter of querying your inventory to see what the configurations are on all nodes (what if they're intentionally not all the same?).

Key-value stores (Redis, etcd, I've used DNS, there are others) are an option which addresses both issues, if network connected. (If a file can be updated over the network, is the filesystem really "not connected to the network"?)
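For what it's worth, Postfix already speaks a trivially simple lookup protocol for exactly this kind of networked store: tcp_table(5). Below is a minimal sketch of a server for it; the ALIASES dict is a hypothetical stand-in for a real backing store (Redis, etcd, whatever), and the port number is an assumption.

```python
# Sketch of a Postfix tcp_table(5) lookup server. Protocol per the man
# page: client sends "get <key>\n"; server answers "200 <value>\n"
# (found), "500 <text>\n" (not found), or "400 <text>\n" (error).
# Keys and values are %xx-encoded.
import socketserver
from urllib.parse import quote, unquote

ALIASES = {"postmaster": "root"}  # hypothetical data; swap in your KV store

def reply_for(line: str) -> str:
    parts = line.rstrip("\n").split(" ", 1)
    if len(parts) != 2 or parts[0] != "get":
        return "400 unsupported request\n"
    value = ALIASES.get(unquote(parts[1]))
    if value is None:
        return "500 no such key\n"
    return "200 %s\n" % quote(value)

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # One request line in, one reply line out, repeat.
        for raw in self.rfile:
            self.wfile.write(reply_for(raw.decode()).encode())
            self.wfile.flush()

# To serve (blocks forever); the Postfix side would then reference
# something like tcp:127.0.0.1:2345, subject to whatever context
# restrictions Postfix places on tcp tables:
#   socketserver.TCPServer(("127.0.0.1", 2345), Handler).serve_forever()
```

Whether Postfix will let you plug a tcp table into the map context you actually care about is, of course, the whole argument.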

I would still like to understand the guiding philosophy for network-connected maps beyond "bad because security", and what the real attack surface / security concerns are, for this application and this particular implementation, not for the planet. Maybe they hold for particular choices of implementation tools; for instance, I know that Postfix relies on some "primitives" to perform "risky" operations, and that's easily cast as defensive coding based on defensive design (against the implementation language).

I may choose to disable certain security checks and recompile, but I'd like to know what application security assumptions I'm violating, with more specificity than "trying to connect to localhost": what attack, from the application's vantage, accrues to localhost which does not accrue to a Unix domain socket, for example? I'm willing to get theoretical enough to ask: are all of these assumptions still valid in an immutable and containerized world (what role does partitioning of roles and privileges play)?

I'm waiting patiently for that discussion.

(If there's another way, even hackerish, to call a networked service to validate aliases at SMTP handshake which doesn't require the equally hackerish measure of disabling security checks and recompiling, then I'd love to hear about it.)
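One no-recompile avenue that does exist is smtpd policy delegation (SMTPD_POLICY_README, check_policy_service), which consults an external service over a TCP or Unix socket during the SMTP transaction. A hedged sketch follows; the VALID set and port are assumptions, and whether a policy check fires early enough to count as "at handshake" for your purposes is for you to judge.

```python
# Sketch of a Postfix smtpd policy server. Postfix sends name=value
# lines terminated by a blank line; the server replies with
# "action=<action>" followed by an empty line.
import socketserver

VALID = {"fred@example.com"}  # hypothetical alias store; swap in a real lookup

def decide(attrs: dict) -> str:
    if attrs.get("request") != "smtpd_access_policy":
        return "action=DUNNO\n\n"
    if attrs.get("recipient", "").lower() in VALID:
        return "action=DUNNO\n\n"  # let other restrictions decide
    return "action=REJECT unknown recipient\n\n"

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        attrs = {}
        for raw in self.rfile:
            line = raw.decode().rstrip("\n")
            if line == "":
                # Blank line ends one request; answer and reset.
                self.wfile.write(decide(attrs).encode())
                self.wfile.flush()
                attrs = {}
            elif "=" in line:
                name, _, value = line.partition("=")
                attrs[name] = value

# To serve (blocks forever); the corresponding main.cf fragment would be
# something like (port number assumed):
#   smtpd_recipient_restrictions = ..., check_policy_service inet:127.0.0.1:10040, ...
#   socketserver.TCPServer(("127.0.0.1", 10040), Handler).serve_forever()
```

It's not alias_maps, but it does let a networked service say yes or no to a recipient before the message body is accepted.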

--

Fred Morris
