Rich Freeman <ri...@gentoo.org> writes:

> On Thu, Jan 15, 2015 at 3:32 PM, lee <l...@yagibdah.de> wrote:
>> Rich Freeman <ri...@gentoo.org> writes:
>>
>>> 2. Run fail2ban in each container and have it monitor its own logs,
>>> and then add host iptables rules to block connections.
>>
>> Containers must not be able to change the firewalling rules of the host.
>> If they can do such things, what's the point of having containers?
>
> A "container" on linux is really a set of kernel namespaces.  There
> are six different namespaces in linux and a process can share any or
> none of them with the host.
>
> In this case the network namespace determines whether a process can
> see the host interfaces.  There may also be capabilities that control
> what the process can do with those interfaces (I'd have to read up on
> that).  A container may or may not have a separate network namespace.
> If it does most likely you're going to have to set up a bridged
> interface, DHCP/NAT, etc for the container.

That's what I did.  Until I learn more, I have to assume that the
default settings are reasonably secure ...
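
For the record, the container tooling set up the bridging for me, but
as far as I can tell it boils down to roughly the following (interface
names and the container PID are made up here):

,----
| # bridge on the host that the containers attach to
| ip link add br0 type bridge
| ip link set br0 up
|
| # a veth pair: one end stays on the host and joins the bridge,
| # the other end is moved into the container's network namespace
| ip link add veth-host type veth peer name veth-ct
| ip link set veth-host master br0
| ip link set veth-host up
| ip link set veth-ct netns $CONTAINER_PID
`----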

>>> 3. Run fail2ban in each container and have each container in its own
>>> network namespace.  Fail2ban can then add container iptables rules to
>>> block connections.
>>
>> That would waste resources.
>
> Depends on how you run it, but yes, you might have multiple instances
> of fail2ban running this way consuming additional RAM.  If you were
> really clever with your container setup they could share the same
> binary and shared libraries, which means they'd share the same RAM.
> However, it seems like nobody bothers running containers this way
> (obviously way more work coordinating them).

And then they wouldn't really be separated from each other anymore.
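
If I understand that idea correctly, the sharing would amount to
something like bind-mounting the host's /usr read-only into every
container (the paths here are made up):

,----
| # give a container the host's /usr read-only, so every fail2ban
| # instance maps the same binary and libraries into memory
| mount --bind /usr /var/lib/containers/web1/usr
| mount -o remount,ro,bind /var/lib/containers/web1/usr
`----

And that is exactly the kind of sharing I'd rather avoid.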

> I doubt it would take more CPU - 1 process scanning 5 logs probably
> doesn't use more CPU than 5 processes scanning 1 log each.

Isn't there some sort of scheduling and/or other overhead involved when
you run more processes?  I mean the overhead of "just being there": a
process scheduler that has to keep track of 500 processes might itself
need more CPU than one that only has to deal with 150.

> You would get a security benefit from just running fail2ban on the
> host, since a failure on one container would apply a block to all the
> others.

Plus when running fail2ban on the host, you can block connections from
a particular IP for everyone.
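
If I read fail2ban's default iptables-multiport action correctly, a ban
then ends up as an ordinary rule in the host's chains, roughly like
this (chain name, port and address are just examples):

,----
| # set up once per jail: a dedicated chain hooked into INPUT
| iptables -N f2b-sshd
| iptables -A f2b-sshd -j RETURN
| iptables -I INPUT -p tcp -m multiport --dports ssh -j f2b-sshd
|
| # inserted for each banned address
| iptables -I f2b-sshd 1 -s 192.0.2.17 -j REJECT --reject-with icmp-port-unreachable
`----

For traffic that is merely forwarded to a container rather than
delivered to the host itself, the rule would presumably have to target
FORWARD instead of INPUT.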


However, fail2ban didn't seem to do anything.  It wrote into its
logfile:


,----
| INFO    Changed logging target to /var/log/fail2ban.log for Fail2ban v0.9.1
| ERROR   Unable to import fail2ban database module as sqlite is not available.
`----


That was all for a couple of days, until I stopped it.  If some sqlite
library or something is required, I would have expected it to be pulled
in through dependencies.
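
Apparently fail2ban 0.9 wants Python's sqlite3 module, and Gentoo only
builds that when dev-lang/python has the sqlite USE flag set.  If that
is really the cause here, I guess (untested) the fix would be something
like:

,----
| # enable the sqlite USE flag for the Python interpreter and rebuild it
| # (assuming /etc/portage/package.use is a directory)
| echo "dev-lang/python sqlite" >> /etc/portage/package.use/fail2ban
| emerge --ask --oneshot --newuse dev-lang/python
`----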


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.
