On 28.11.2016 22:04, Jaaz Portal wrote:
hi Andre,
you are wrong. This vulnerability is not only causing memory leaks, it
also makes the apache workers hang

Maybe for the last time here :

- what do you call "apache workers" ?

, making it easy to exhaust the pool.

- what do you call "the pool" ?

That is what I have in my log files. But it is also true that such
exhaustion can be caused by other forms of DoS attacks described in this thread.

Regarding your suggestion about our application: it does not DoS the bind
server, nor does it scan for various vulnerabilities in apache, which is
what I also have in the logs

For your information : I run about 25 Internet-facing Apache webservers (some with a back-end tomcat, some not). On every single one of those webservers, there are *hundreds* of such "scans" every day, as shown by the Apache access logs. That is just a fact of life on the Internet. They are annoying, but most of them are harmless (from an "attack" point of view), because they are scanning for things that we do not run (phpmyadmin, xmlrpc, vti_bin, etc., etc., the list is almost endless), and thus are responded to by Apache as "404 Not found".
What is annoying with those scans, is
a) that they fill the logfile, and thus make it more difficult to find really significant things
b) that each of them requires some bandwidth and system resources, if only to return a "404 Not found" (or a "401 Unauthorised"), and that we pay for that bandwidth and those resources.

If I could find a way to charge 0.1 cent per access to my servers, from the people who wrote or run the programs who are doing this, I could retire in luxury.

But they are not a real problem, because they are caught as "invalid" by Apache, and rejected quickly, so they cannot do anything really nasty (except if they were sending several thousand such requests per second to one of my servers for a long time).

The ones that are worrying, are the ones
a) which do /not/ end up as a "404 Not found", because they have found an application which responds, and they are not coming from our legitimate customers
b) /the ones which we do not see/, because they either do not send a valid HTTP request, or they have found a way to trigger one of our applications in such a way that the application misbehaves and, perhaps, even if they do not crash our servers, they may provide the attacker with some entry point to do other things which we do not know about and do not control

What I am trying to say here, is /do not jump to premature conclusions/.
Such "scans" as you mention, happen to everyone, all the time, from ever-changing IP addresses located all over the world. Some of those "scans" may come from the infected PC of your grandmother, and she does not even know about it.

There is no guarantee, and no indication or proof so far, that /these/ scans are even related to "the other thing" which happens on your webserver, which looked much more focused.

So do not just bundle them together as being the same thing, until you have some real data that shows for example that these different things all come from the same IP addresses.
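If you want to check that, here is a rough sketch, assuming the default
combined LogFormat at the usual Debian path, and a hypothetical suspect
URL (/some/suspect/url) standing in for whatever you find in your own logs.

List the IPs behind the generic scans (everything that ended in a 404) :
# awk '$9 == 404 {print $1}' /var/log/apache2/access.log | sort -u > scan_ips.txt

List the IPs that requested the URL you suspect of hanging tomcat :
# grep '/some/suspect/url' /var/log/apache2/access.log | awk '{print $1}' | sort -u > suspect_ips.txt

Show the IPs present in both lists :
# comm -12 scan_ips.txt suspect_ips.txt

If the last command prints nothing, that is already a hint that the scans
and "the other thing" are unrelated.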

And one more thing, finally, until you come back with some real data : I am not saying that your application "scans your server". What I am saying is that, maybe, by chance or by design, the attackers have found a URL which goes to your application, and which causes your application to keep tomcat and/or Apache busy for a long time. And maybe /that/ is the problem you should be looking for, and not some hypothetical bug in Apache httpd or tomcat.



kind regards,
artur

2016-11-28 21:33 GMT+01:00 André Warnier (tomcat) <a...@ice-sa.com>:

On 28.11.2016 20:34, Jaaz Portal wrote:

hi mark,
yes, i understand now what slowloris attack is.
maybe it was this, maybe this one, based on *mod_proxy denial of service* :
CVE-2014-0117 <http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0117>


You keep on saying this, but the description of that vulnerability of
*Apache httpd*, and the symptoms which you described, *do not match*.
You described the problem as occurring in Apache tomcat, which in your case
is sitting as a back-end, *behind* Apache httpd. And restarting tomcat
cured the problem.

The CVE above applies to Apache httpd, and describes how an attacker could
attack Apache httpd and cause *its children* processes to crash (the
children processes of Apache httpd), by leading them to consume a lot of
memory and crash with an out-of-memory error.
Granted, the problem occurred in the mod_proxy module of Apache httpd; but
it was httpd which crashed, not tomcat.
And tomcat processes are not "Apache httpd children processes" in any
understanding of the term.

As far as I remember, you never mentioned Apache httpd crashing. You
mentioned "the pool" getting full or satured or something like that,
without ever describing properly what you meant by "the pool".

As far as I am concerned, according to all the relatively unspecific
information which you have previously provided :
1) the attack - if that is what it is/was - is definitely NOT related to
the CVE which you have repeatedly mentioned
2) it is apparently not a "classical" DoS or "slowloris DoS" directed at
your front-end Apache. Instead, it seems that the requests are properly
received by Apache, properly decoded by Apache, and then whatever Apache
proxy module you are using (mod_proxy_http, mod_proxy_ajp or mod_jk) is
properly forwarding these requests to a back-end tomcat; and it is at the
level of that back-end tomcat that the requests never seem to end, and in
the end paralyse your tomcat server (and later on maybe your Apache httpd
server too, because it is also waiting for tomcat to respond).

So your very way of describing the problem, in terms of "first we used
this proxy module, and then they exploited the vulnerability so and so;
then we changed the proxy module, and they exploited that too; etc.."
seems to not have anything to do with the problem per se, and (I believe)
confuses everyone, including yourself.

It is not that "they" exploited different vulnerabilities of various httpd
proxy modules, one after the other. Each of these proxy modules was doing
its job properly, and forwarding valid HTTP requests properly to tomcat.
When you changed from one proxy module to another, you did not really
change anything in that respect, because any proxy module would do the same.

But in all cases, what did not change, was the tomcat behind the
front-end, and the application running on that tomcat.  So the presumed
attackers did not have to change anything, they just kept on sending the
same requests, because they were really targeting your back-end tomcat or
the tomcat application in it, no matter /how/ you were forwarding requests
from Apache httpd to tomcat.

So either it is tomcat itself, which has a problem with some request URLs
which do not bother Apache httpd (possible, but statistically unlikely), or
it is the application which runs in tomcat that has such a problem
(statistically, much more likely).

we do not know yet

we have set up more logging and are waiting for them to attack once again


Yes, that is the right thing to do.  Before deciding what the problem may
be, and what you can do about it, the first thing you need is *data* (see
the rough log-analysis sketch after this list).  You need to know
- which request URL(s?) cause that problem
- which IPs these requests come from (always the same ? multiple IPs that
change all the time ? how many ? can these IPs be valid/expected clients or
not ? do these IPs look like some "coordinated group" ?)
- how many such requests there may be during some period of time (10, 100,
1000, more ?)
- if these URLs result in passing the request to tomcat
- what tomcat application (if any) they are directed to
- if so, when that application receives such a request, what is it
supposed to do ? does it do it properly ? how long does it need to respond
to such a request ?
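As a starting point, and keeping in mind that requests which never finish
will not show up in the access log, a rough sketch of how to pull some of
that data out of a standard combined-format access log (the Debian path
and the awk field positions are assumptions; adjust them to your own
LogFormat) :

Top 20 requested URLs :
# awk '{print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20

Top 20 client IPs :
# awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20

Requests per hour from one suspect IP (192.0.2.10 is only a placeholder) :
# grep '^192.0.2.10 ' /var/log/apache2/access.log | cut -d: -f2 | sort | uniq -c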

You also need to ask yourself a question : is the application which you
run inside tomcat something that you designed yourself (and which hackers
are unlikely to know well-enough to find such a URL which paralyses your
server) ? or is it some well-known third-party java application which you
are running (and for which would-be attackers would be much more likely to
know exactly such a bug) ?


anyway, thank you for all the information, it was very useful and educational
reading for all of us

best wishes,
artur

2016-11-28 19:46 GMT+01:00 Mark Eggers <its_toas...@yahoo.com.invalid>:

Jaaz,

On 11/27/2016 2:46 PM, André Warnier (tomcat) wrote:

On 27.11.2016 19:03, Jaaz Portal wrote:

2016-11-27 18:30 GMT+01:00 André Warnier (tomcat) <a...@ice-sa.com>:

On 27.11.2016 14:26, Jaaz Portal wrote:

hi,
everything i know so far is just this single log line that appeared
in
apache error.log

[Fri Nov 25 13:08:00.647835 2016] [mpm_event:error] [pid 13385:tid 139793489638592] AH00484: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting

there was nothing else, just this strange line


This is not a "strange" line. It is telling you something.
One problem is that you seem convinced in advance, without serious
proof,
that there is a "bug" or a vulnerability in httpd or tomcat.
Read the explanation of the httpd parameter, here :
http://httpd.apache.org/docs/2.4/mod/mpm_common.html#maxrequestworkers
and try to understand it.
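To see what that setting currently is on your own installation (a sketch
assuming the Debian apache2 layout; adjust the paths otherwise) :

# apachectl -V | grep -i mpm
# grep -Rn "MaxRequestWorkers\|MaxClients" /etc/apache2/

The first command shows which MPM you are running (event, in your case),
the second where, and to what value, the worker limit is set.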


I understand it very well. We are serving no more than 500 clients per
day, so there was no other explanation than some kind of attack.


About the "bug" or "vulnerability" : a webserver is supposed to serve
HTTP

requests from clients.  That is what you expect of it. It has no
choice but
to accept any client connection and request, up to the maximum it can
handle considering its configuration and the capacity of the system on
which it runs. That is not a bug, it is a feature.


Some weeks ago we came under attack from some Polish group. First it was
bind that was DoS'ed; we were running stable Debian, so I updated bind to
the latest version. It did not help. They DoS'ed it again, so we switched
to another DNS provider. That helped.

Then they exploited some well-known vulnerability in mod_proxy. We updated
apache to the latest version, but again they exploited it, so we switched
to mod_jk. And then guess what: they exploited that too, so I decided to
write to this list looking for help before trying jetty.



The normal Apache httpd access log, will log a request only when it is
finished.  If the request never finishes, it will not get logged.
That may be why you do not see these requests in the log.
But have a look at this Apache httpd module :
http://httpd.apache.org/docs/2.4/mod/mod_log_forensic.html
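On a Debian-style apache2 installation, enabling it is roughly this (a
sketch; the log path is an example, and you still need to add the
ForensicLog directive to your own vhost or conf yourself) :

# a2enmod log_forensic
(then add a line like: ForensicLog ${APACHE_LOG_DIR}/forensic.log)
# apachectl configtest && service apache2 reload

Each request is then logged once when it arrives (line starting with "+")
and once when it completes (line starting with "-"), so requests that
never complete stand out.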


ok, thanks, will take care

Note : that is also why I was telling you to enable the mod_jk log, and to
examine it.
Because mod_jk will also log information before the request produces a
response.


and server hanged.

Again, /what/ is "hanged" ? Apache httpd, or tomcat ?


Apache was accepting connections but not processing them. After restarting
the tomcat server it worked again.


Also in the access logs there are no clues that it was under any heavy load.

Around an hour after discovering that our server had hung, we restarted the
tomcat server and it worked again.


Yes, because that will close all connections between Apache httpd and
tomcat, and abort all requests that are in the process of being processed
by tomcat. So mod_jk will get an error from tomcat, and will report an
error to httpd, and httpd will communicate that error to the clients, and
close their connection.
It still does not tell you what the problem was.
The only thing that it suggests, is that the "bad" requests actually make
it all the way to tomcat.


correct

I will enable the logs that you have pointed out and we will see what I
catch.
However, I think we have only one chance: if the solution we have found
(connection_timeout + mod_bw) works, they will stop exploiting it.

Thank you very much for all the help and explanations.
I will report new facts to the list once they attack us again.

best regards,
artur


Ok, but also read this e.g. :
https://www.corero.com/blog/695-going-after-the-people-behind-ddos-attacks.html

Attempting to bring down a site by DoS attacks is a crime, in most places.

You can report it, while at the same time trying to defend yourself
against it.

It is also relatively easy, and quite inexpensive in terms of system
resources, to run a small shell script which takes a list every few
seconds of the connections to the port of your webserver, and which IPs
they are coming *from*.
E.g.
First try the netstat command, to see what it lists, like :
# netstat -n --tcp | more

Then you will want to filter this a bit, to only consider established
connections to your webserver, for example :
# netstat -n --tcp | grep ":80" | grep "ESTABLISHED"

Then you will want to send this to a logfile, regularly, like this :

# date >> some_file.log
# netstat -n --tcp | grep ":80" | grep "ESTABLISHED" >> some_file.log
(repeat every 3 seconds)
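For example, as one crude loop from a shell (a sketch; the logfile name,
the port and the 3-second interval are only examples to adapt to your
setup; the trailing space after ":80" avoids also matching :8009 or :8080) :

# while true; do date >> some_file.log; netstat -n --tcp | grep ":80 " | grep "ESTABLISHED" >> some_file.log; sleep 3; done

Leave that running (in a "screen" session, or via nohup) until the problem
happens again.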

This will not generate GB of logfiles, and it will tell you, when the
problem happens, how many connections there are exactly to your
webserver, and where they are coming from.
Then later you can further analyse this information..



I think that setting connection-timeout and limiting the number of clients
by mod_bw will have the effect that the next time somebody tries this
exploit, it will block his access to the site for a minute or two. A pretty
good holistic solution, I would say.

Still, it seems that there is a serious vulnerability somewhere in apache,
mod_jk or tomcat.
I would like to help find it, but I need some hints about which debug
options to enable to catch the bad guys when they try next time.

best regards,
artur

2016-11-27 13:58 GMT+01:00 André Warnier (tomcat) <a...@ice-sa.com>:

On 27.11.2016 13:23, Jaaz Portal wrote:


hi Andre,

thank you very much, this was very educative, but in my case it is a
little bit different.
The server is not flooded; there are maybe a dozen very sophisticated
connections that somehow hang apache worker threads


Can you be a bit more specific ?
When you say "apache workers threads", do you mean threads in Apache
httpd, or threads in Apache Tomcat ? (both are Apache webservers, so it is
difficult to tell what you are talking about, unless you are very precise).

Let me give you some additional explanations, and maybe you can figure out
exactly where it "hangs".

From the Apache httpd front-end point of view, mod_jk (the connector to
Apache Tomcat) is basically one among other "response generators". Apache
httpd does not "know" that behind mod_jk, there is one or more Tomcat
servers.  Apache httpd receives the original client request, and depending
on the URL of the request, it will pass it to mod_jk or to another response
generator, to generate the response to the request.
That mod_jk in the background is using a Tomcat server to actually generate
the response, is none of the business of Apache httpd, and it does not
care. All it cares about, is to actually receive the response from mod_jk,
and pass it back to the client.

If httpd passes a request to mod_jk, it is because you have
specified in
the Apache configuration, the type of URL that it should pass to
mod_jk.
That happens via your "JkMount (URL pattern)" directives in Apache
httpd.

Of course Apache httpd will not pass a request to mod_jk, before it has
received at least the URL of the request, and analysed it to determine *if*
it should pass it to mod_jk (*).

If the mod_jk logging is enabled, you can see in it, exactly *which*
requests are passed to mod_jk and to Tomcat.
Do you know *which* requests, from which clients, cause this "thread
hanging" symptom ?
Once you know this, maybe you can design a strategy to block specifically
these requests.
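If it turns out that they come from a handful of fixed IPs, the crudest but
simplest block is at the firewall (a sketch; 203.0.113.45 is just a
placeholder address, and port 80 is assumed) :

# iptables -A INPUT -p tcp --dport 80 -s 203.0.113.45 -j DROP

If the IPs change all the time, that will not help much, and you are back
to blocking on the URL pattern or the request rate at the httpd level.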

and the effect is permanent. Quickly the pool is exhausted



which pool exactly ?

and the only solution that works is to restart tomcat.


I think it is a bug similar to this one from mod_proxy:
https://tools.cisco.com/security/center/viewAlert.x?alertId=34971


Maybe, maybe not. As long as we do not know what the requests are, that
block things, we do not know this.


I think also that your solution with setting connectionTimeout will solve
the problem, at least partially. THANK YOU.


Same thing, we do not know this yet.  I was only giving this explanation,
to help you think about where the problem may be.


I would like to help you further investigate this issue, as our server
comes under such attack once or twice a week.


Other than giving you hints, there is not much I or anyone else can do to
help. You are the one with control of your servers and logfiles, so you
have to investigate and provide more precise information.


(*) actually, to be precise, Apache httpd passes *all* requests to mod_jk,
to ask it "if it wants that request". mod_jk then accepts or declines,
depending on the JkMount instructions. If mod_jk declines, then Apache
httpd will ask the next response generator if it wants this request,
etc...






best regards,

artur

2016-11-27 12:46 GMT+01:00 André Warnier (tomcat) <a...@ice-sa.com>:

Hi.


Have a look at the indicated parameters in the two pages below.
You may be the target of such a variant of DDoS attack : many clients open
a TCP connection to your server (front-end), but then never send a HTTP
request on that connection.  In the meantime, the server accepts the TCP
connection, and passes it on to a "child" process or thread for
processing.  The child then waits for the HTTP request line to arrive on
the connection (during a certain time), but it never arrives.  After a
while, this triggers a timeout (see below), but the standard value of that
timeout may be such that in the meantime, a lot of other connections have
been established by other such nefarious clients, so a lot of resources of
the webserver are tied up, waiting for something that will never come.
Since there is never any real request sent on the connection, you would
(probably) not see this in the logs either.

The above is the basic mechanism of such an attack.  There may be
variations, such as the client not "not sending" a request line, but
sending it extremely slowly, thus achieving perhaps similar kinds of
effects.

As someone pointed out, it is quite difficult to do something about this
at the level of the webserver itself, because by the time you would do
something about it, the resources have already been consumed and your
server is probably already overloaded.
There are specialised front-end devices and software available, to detect
and protect against this kind of attack.

You may want to have a look at the following parameters, but make sure to
read the caveats (side-effects, interlocking timeouts etc.), otherwise you
may do more harm than good.

Another thing : the settings below are for Apache Tomcat, which in your
case is the back-end. It would of course be much better to detect and
eliminate this at the front-end, or even before.  I had a look at the
Apache httpd documentation, and could not find a corresponding parameter.
But I am sure that it must exist. You may want to post this same question
on the Apache httpd user's list for a better response.

Tomcat configuration settings :

AJP Connector : (http://tomcat.apache.org/tomcat-8.5-doc/config/ajp.html#Standard_Implementations)

connectionTimeout

The number of milliseconds this Connector will wait, after accepting a
connection, for the request URI line to be presented. The default value for
AJP protocol connectors is -1 (i.e. infinite).

(You could for example try to set this to 3000 (milliseconds) or even
lower. That should be more than enough for any legitimate client to send
the HTTP request line.  Note however that by doing this at the Tomcat
level, you will probably move the problem to the Apache httpd/mod_jk
level.  But at least it might confirm that this is the problem that you are
seeing.  The mod_jk logfile at the httpd level may give you some hints
there.)


HTTP Connector : (http://tomcat.apache.org/tomcat-8.5-doc/config/http.html#Standard_Implementation)

connectionTimeout

The number of milliseconds this Connector will wait, after accepting a
connection, for the request URI line to be presented. Use a value of -1 to
indicate no (i.e. infinite) timeout. The default value is 60000 (i.e. 60
seconds) but note that the standard server.xml that ships with Tomcat sets
this to 20000 (i.e. 20 seconds). Unless disableUploadTimeout is set to
false, this timeout will also be used when reading the request body (if
any).
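Two quick ways to check this on your own installation (a sketch; the
Debian tomcat8 package path and the port are assumptions) :

See what the connectors are currently set to :
# grep -n "connectionTimeout" /etc/tomcat8/server.xml

Then, after changing it, open a connection to the HTTP connector and send
nothing; Tomcat should drop the idle connection after roughly the
configured timeout :
# time nc localhost 8080
(type nothing; nc should exit when Tomcat closes the connection)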



On 26.11.2016 09:57, Jaaz Portal wrote:

hi,

sorry, it's mod_jk, not jk2 - my typo. All at the latest versions. We
tried with mod_proxy too.

There is no flood of the server. Nobody is flooding us; they use some
specific connections after which the pool of apache workers is exhausted
and blocked, and we need to restart the tomcat server.
It is some kind of exploit, but we do not know how to log it to obtain
details.

I had put a limit on connections per client in the hope that this would
help, but once again, it is not a flood.
They open several connections that are not dropped by apache when they
disconnect. This way the whole pool is quickly exhausted and the server is
broken.

I would like to help you figure out the details of this attack, but this
is a production server, so it is impossible to enable too many debugging
options.

best,
artur

2016-11-25 23:44 GMT+01:00 Niranjan Babu Bommu <niranjan.bo...@gmail.com>:



you can find who is flooding the site in the apache access.log and block
them in the firewall.

ex to find the IP:

cat /var/log/apache2/access.log | cut -d' ' -f1 | sort | uniq -c | sort -gr



On Fri, Nov 25, 2016 at 8:42 AM, Jaaz Portal <jaazpor...@gmail.com> wrote:

hi,

We have been struggling for some weeks with some Polish hackers that are
bringing our server down. After updating apache to the latest version
(2.4.23) and tomcat (8.0.38) available for debian systems, we still cannot
secure our server.

Today it stopped responding again and we needed to restart the tomcat
process to get it back alive.

There are not too many clues in the logs. The apache error.log gives just
this line:

[Fri Nov 25 13:08:00.647835 2016] [mpm_event:error] [pid 13385:tid 139793489638592] AH00484: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting

It seems that somehow tomcat, mod-jk2 or even apache is vulnerable to some
new exploit, as we certainly do not have such traffic that would block our
server otherwise.

For now we have increased MaxRequestWorkers, limited the number of
connections from one client to 5 with mod_bw, and limited the number of
simultaneous connections from one IP with iptables, but we do not know if
this will help.


best regards,
artur




--
*Thanks*
*Niranjan*


This sounds like a variant of the slowloris attack.

This type of attack doesn't take a large number of clients or consume a
large amount of bandwidth.

Basically, the maximum number of connections are made to the server, and
just enough data is sent to each connection in order to not trigger the
timeout. André has explained this in more detail earlier in the thread.
Search for "slowloris attack" for more information.

There are several ways of mitigating this type of attack.

As André has mentioned, placing a dedicated device in front of your
systems is often the best way. Lots of benefits (platform neutral, no
stress on your current servers), and some issues (cost, placement /
access may be an issue with hosted solutions).

However, there are Apache HTTPD modules that can help mitigate these
types of attacks. Some of them are:

mod_reqtimeout (should be included by default in your Apache HTTPD 2.4)
mod_qos (quality of service module)
mod_security (application firewall with lots of security rules)

Do a quick search on "slowloris attack apache httpd 2.4" to get some
ideas.
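For mod_reqtimeout specifically, on a Debian apache2 it is normally already
enabled; you can check, and see its stock settings, like this (a sketch;
the values shown are the package defaults, to be tuned for your own
traffic) :

# apachectl -M | grep -i reqtimeout
# cat /etc/apache2/mods-available/reqtimeout.conf

The relevant directive looks like
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
which gives a slow client at most 20-40 seconds to finish sending the
request headers.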

All of them will probably place additional load on your Apache HTTPD
server, so make sure that the platform is robust enough to manage the
additional load.

There is also a beta version of the mod_security module written as a
servlet filter. It should be possible to build this and configure the
filter in Tomcat's default web.xml ($CATALINA_BASE/conf/web.xml). I've
not tried this. Also, the code base hasn't seen any activity for 3 years.

Do a quick search on "modsecurity servlet filter" to find out more about
the servlet filter version of mod_security.

In short, there appear to be some ways to mitigate the attack.

. . . just my two cents
/mde/




