Hi there,

* David Schwartz ([EMAIL PROTECTED]) wrote:
> 
>       Let's go back to how we got into this. The position I was
>       refuting was that this is a fundamental problem that can't be
>       solved at the application level.  But this is utterly false --
>       there are any number of ways, at the application level, that
>       resistance to this type of denial of service attack could be
>       provided.

I've been making this point about Apache for some time - it needs to
scale better than one request-per-<X> for X=process, thread, etc. It
seems that the only way to get any "asynchronous" processing is to use
user-level threading to manufacture the asynchronous behaviour
explicitly, as Apache itself does not appear to have this capability.
Eg. GNU Pth, perhaps.
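
To illustrate what I mean by "asynchronous" - this is just a rough
sketch (nothing to do with Apache's actual code) of a single process
multiplexing many client sockets with select(), so that one slow or
idle client can't pin down a whole process or thread. The port number
and buffer size are arbitrary:

/* Rough sketch: one process multiplexing many client sockets with
 * select(), so a slow or idle client cannot pin down a whole
 * process/thread.  Port number and buffer size are arbitrary. */
#include <string.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int listener, fd, fdmax;
    struct sockaddr_in addr;
    fd_set master, readable;

    listener = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);            /* arbitrary test port */
    if (listener < 0 || bind(listener, (struct sockaddr *)&addr,
                             sizeof(addr)) < 0 || listen(listener, 16) < 0)
        return 1;

    FD_ZERO(&master);
    FD_SET(listener, &master);
    fdmax = listener;

    for (;;) {
        readable = master;
        /* One blocking point for *all* connections, instead of one
         * blocked process/thread per connection. */
        if (select(fdmax + 1, &readable, NULL, NULL, NULL) < 0)
            return 1;
        for (fd = 0; fd <= fdmax; fd++) {
            if (!FD_ISSET(fd, &readable))
                continue;
            if (fd == listener) {
                int newfd = accept(listener, NULL, NULL);
                if (newfd >= 0) {
                    FD_SET(newfd, &master);
                    if (newfd > fdmax)
                        fdmax = newfd;
                }
            } else {
                char buf[512];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {               /* EOF or error: drop the client */
                    close(fd);
                    FD_CLR(fd, &master);
                } else {
                    write(fd, buf, n);      /* trivial echo instead of real work */
                }
            }
        }
    }
}

Obviously a real server would layer request parsing and non-blocking
writes on top of that, but the point is that the blocking happens in
one place for all connections rather than one process per connection.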

However, w.r.t. Slapper and what-not, there is *no* theoretically
acceptable approach to tackling DDoS attacks, even if you could rewrite
the web-server from scratch (or put something more scalable in front of
it, like squid). HTTPS in this respect is qualitatively no more or less
immune than HTTP, though it is unfortunately slightly worse from a
quantitative point of view. Ie. a design oversight that has persisted
in SSL/TLS from the outset is that the server is the first side that
has to do any computationally-expensive crypto operations - this
enhances a DDoS client's ability to force the server into heavy work
without having to do much itself (the client can send essentially
arbitrary handshake data and still trigger an expensive private-key
operation on the server). However, even if that were reversed, it would
just bump up the limit at which your server starts to fall over; it
wouldn't eliminate it completely. The higher your server's DoS
resistance is, the more exploited machines the DDoS will require, but
sooner or later it'll probably exploit enough machines. In theory you
have to *assume* it will find enough machines, and so neither
application logic nor the protocol can be expected to provide
robustness against DDoS. The frustration in this case is that the
server spends most of the DDoS attack sleeping rather than working too
hard! :-)
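
Having said that, the "sleeping" is exactly what an application-level
I/O timeout is supposed to bound. Something along these lines -
ssl_accept_with_timeout is a made-up helper for illustration, not
anything in Apache, mod_ssl or OpenSSL, and it assumes the socket has
already been made non-blocking and attached to the SSL object - would
at least stop an idle peer holding a worker through the handshake
indefinitely:

/* Sketch of an application-level handshake timeout (made-up helper,
 * not an Apache/mod_ssl/OpenSSL API): assumes 'fd' is already
 * non-blocking and attached to 'ssl', and gives up if the peer hasn't
 * completed the handshake within 'seconds'. */
#include <time.h>
#include <sys/select.h>
#include <openssl/ssl.h>

int ssl_accept_with_timeout(SSL *ssl, int fd, int seconds)
{
    time_t deadline = time(NULL) + seconds, now;
    fd_set fds;
    struct timeval tv;
    int ret, err;

    for (;;) {
        ret = SSL_accept(ssl);
        if (ret == 1)
            return 1;                       /* handshake completed */
        err = SSL_get_error(ssl, ret);
        if (err != SSL_ERROR_WANT_READ && err != SSL_ERROR_WANT_WRITE)
            return 0;                       /* real error: give up */
        now = time(NULL);
        if (now >= deadline)
            return 0;                       /* peer is stalling: give up */
        FD_ZERO(&fds);
        FD_SET(fd, &fds);
        tv.tv_sec = deadline - now;
        tv.tv_usec = 0;
        /* Wait (at most until the deadline) for whatever I/O the
         * handshake is blocked on. */
        if (select(fd + 1,
                   err == SSL_ERROR_WANT_READ  ? &fds : NULL,
                   err == SSL_ERROR_WANT_WRITE ? &fds : NULL,
                   NULL, &tv) <= 0)
            return 0;                       /* timed out or select failed */
    }
}

It doesn't make the DDoS go away, of course - it just stops each zombie
connection costing you a process for longer than N seconds.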

There must be people out there providing analytic routing-logic tools
of some form? Anyone know of anything recommendable? Ie. something to
identify DoS source addresses on-the-fly and start blocking/unblocking
them according to some statistical rules. That should at least adapt
itself to DDoS attacks enough to put the "breaking" point back on your
network capacity rather than on your web-server's parallelism (and in
Apache, that's the target you need to protect most). Perhaps there's
some "contrib" thing with Apache to hook this logic using its logs??

But before this gets way off-topic for the list ... are we agreed then
that all this discussion *is* about network I/O timeouts in Apache and
*not* about any SSL/TLS vulnerabilities in OpenSSL?? If not, someone
please say so.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.openssl.org/
