Christopher Schultz wrote:

Jesse,

On 1/13/15 6:29 PM, Jesse Barnum wrote:
I need the ability to take the POST data from a request,
examine it, and either respond to it or close the connection
without returning any result, not even a 200 OK status.

The reason for this is because I’m getting overwhelmed with
thousands of invalid requests per second, which are racking up
bandwidth fees. The requests can’t be traced to an IP address, so I
can’t just block them in a firewall or Apache - I need to actually
use logic in my Tomcat app to figure out which requests to respond
to.

Is there a way to force Tomcat to just drop the connection and
close the socket without sending a response?

You can't close the stream from your code; Tomcat will ignore it,
flush the response, and return a 200 response anyway.

I'm curious: what's wrong with an empty 200 response? It's only a
couple of bytes, but I suppose if you are getting millions per hour,
you could still incur bandwidth costs...

You might be able to do this with a Valve, but then you have the
problem that your web application needs to supply the logic for
deciding whether or not to accept the request.
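
For illustration, a minimal sketch of such a Valve might look like
this (the class name and rejection criterion are invented; the hard
part, the real validity check, would still have to be pulled out of
the webapp somehow):

import java.io.IOException;

import javax.servlet.ServletException;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

// Hypothetical Valve that screens requests before they ever reach
// the web application. Configure it on the Host or Engine in
// server.xml.
public class RequestScreeningValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response)
            throws IOException, ServletException {
        if (looksInvalid(request)) {
            // Tomcat will still write a status line and headers,
            // but the body stays empty.
            response.sendError(404);
            return; // do not pass the request down the pipeline
        }
        getNext().invoke(request, response);
    }

    // Placeholder check; the real criteria are exactly the
    // application logic that is awkward to duplicate here.
    private boolean looksInvalid(Request request) {
        String ua = request.getHeader("User-Agent");
        return ua == null || ua.isEmpty();
    }
}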

When you say "can't be traced to an IP address" do you mean that you
are seeing invalid requests coming from all over the place, or that
the requests don't include a source IP address (which seems fishy)?

A few options that might achieve your goal without using the technique
you describe:

1. Use client authentication; unauthorized clients can't even handshake
   Downsides: SSL overhead

2. Use a VPN (which essentially uses client authentication)
   Downsides: VPNs really, really suck

3. (As Mark E suggests) Use mod_security with httpd
   I know this will seriously separate your business logic from your
web application, but perhaps there is a simple set of criteria that
might eliminate a significant portion of the requests, thus solving
the problem "well enough"


I have an additional suggestion, harking back to a time when I was trying to convince the Apache httpd devs to implement something like this in .. Apache httpd. From experience trying to convince people, I know that this is controversial, so hold on tightly and follow the gist. You can always decide for yourself whether it is appropriate for your case.

The idea is this : when you get such a request and decide that it is invalid, return a 404 "Not Found", but delay it by some random number of seconds
(do a random sleep in your webapp, or in a filter after the webapp).
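
As a rough sketch of the filter variant (the request-attribute name
and the delay bounds are invented here, and it assumes the webapp
flags a bad request without writing anything to the response):

import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Hypothetical tarpit filter: the webapp marks a bad request with a
// request attribute and leaves the response uncommitted; the filter
// then sleeps a random 5-30 seconds and sends the delayed 404.
public class TarpitFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain)
            throws IOException, ServletException {
        chain.doFilter(req, res);
        if (Boolean.TRUE.equals(req.getAttribute("request.invalid"))
                && !res.isCommitted()) {
            try {
                // Note: this holds one container thread per bad
                // request for the whole delay (see the drawback
                // discussed further down).
                Thread.sleep(
                        ThreadLocalRandom.current().nextLong(5_000, 30_000));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            ((HttpServletResponse) res).sendError(404);
        }
    }

    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}
}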

The rationale is as follows : most such requests - if not all - come from automated nefarious agents, which try to break into your server (and others) by finding some weakness. They work from a list of IPs (or hostnames) and use potentially many infected hosts to issue the requests. When one of these agents does find a "hit" (a URL which actually looks weak), it phones it back to Mamma, which can then mount a more serious attack. The strategy works because each missed request (to a well-protected server) usually takes only a few milliseconds to send and get a response, so these agents can issue thousands of them in a reasonable timeframe.

But..
If each target took a long but unpredictable time to respond to those bad requests (with a legitimate but slow response), then these agents would have a major problem : given the same attacking resources, they could achieve only a small fraction of the probes they intend to do. And this should not bother legitimate accesses by legitimate applications very much, because those applications send overwhelmingly "good" requests (and so get a normal answer time).

The only serious drawback (at least in my view) is that you have to provision a sufficient number of response threads on your server to do this "sleeping" for bad requests, while continuing to serve legitimate ones. But it might be possible to design a clever scheme by which such sleeping webapp instances consume a minimal amount of resources while doing so (maybe by internally dispatching the bad requests to some special webapp which handles this at low cost).
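
On that last point, Servlet 3.0 async processing is one way to get
close to it: park the flagged request, return the container thread to
the pool, and let a single scheduler thread send the delayed 404. A
sketch (the URL pattern and delay bounds are, again, invented):

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical "sink" servlet: bad requests are dispatched here and
// parked asynchronously, so no container thread sleeps on them.
@WebServlet(urlPatterns = "/sink", asyncSupported = true)
public class TarpitServlet extends HttpServlet {

    // One timer thread services all parked requests.
    private static final ScheduledExecutorService TIMER =
            Executors.newSingleThreadScheduledExecutor();

    @Override
    protected void service(HttpServletRequest req, HttpServletResponse resp) {
        final AsyncContext ctx = req.startAsync();
        ctx.setTimeout(0); // completion is handled by the timer below
        long delaySeconds = ThreadLocalRandom.current().nextLong(5, 31);
        TIMER.schedule(new Runnable() {
            @Override
            public void run() {
                try {
                    ((HttpServletResponse) ctx.getResponse()).sendError(404);
                } catch (IOException ignored) {
                    // client may already be gone; nothing to do
                } finally {
                    ctx.complete();
                }
            }
        }, delaySeconds, TimeUnit.SECONDS);
    }
}

Requests flagged elsewhere could be forwarded here with a
RequestDispatcher; the point is only that one scheduler thread can
hold many parked connections while the worker threads stay free.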

The long-term hope, of course, is that if a sufficient number of webservers on the Internet adopted similar practices, this type of robot scanning would become so impractical for its originators that it would be abandoned. The short-term hope is that if attackers notice that your website is such a "sink", they may just take it out of their list of targets, so as not to slow down their whole nefarious scheme.




