Let me ask you a question : if you have nothing in the logs, and there are no connections to your (Apache) server on port 80, then *what exactly* makes you think that you are under some kind of attack ? How do you know that it is not simply your application that is freezing up under normal usage, as opposed to an "attack" ?

This is a serious question, and please think before you answer. Provide some real information that might help us to help you. We really need some data, otherwise we may as well consult a crystal ball.
(It has been a long time since we have needed it, and I cannot remember where we left it.)

Saying that "before, they DoS-ed bind" is not real information that we can do anything with. It may not be the same "people", and it may not have anything to do with the issue that you are having now. Similarly, someone "scanning your server for Apache vulnerabilities" is not real information either. That happens to any webserver on the WWW, constantly, and webservers do not crash because of it. And they generally do not scan for "Apache vulnerabilities"; they scan for application vulnerabilities or for Apache misconfigurations.

If I had to make a guess, I would say that there are probably, today, more than 100,000 webservers active on the WWW with an Apache httpd front-end and an Apache tomcat back-end. If 0.1% of those were crashing because of some attack, that would be 100 tomcat servers crashed today, and this official tomcat users list would be full of messages about it. And that is not happening : your messages about this have been the only ones for the last week or so. So there must be something special about your webserver, or your web application, or your configuration, to make this happen. But so far, you have not given us much to really get going on finding out what is actually happening.


P.S.
Having a number of TCP connections established (open) between your Apache front-end webserver and your back-end tomcat server is normal and expected. Apache httpd + mod_jk communicate with tomcat via TCP. The port on which tomcat is listening for those connections is 8009 (as you can see in the AJP Connector of your tomcat configuration). When Apache starts, mod_jk will automatically create a number of connections to tomcat, and keep them open for better performance.

This is what I see currently on one of my servers (where httpd and tomcat are currently alive and well, and not loaded at all) :

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:43611         127.0.0.1:8009          ESTABLISHED 20059/apache2
tcp        0      0 127.0.0.1:41927         127.0.0.1:8009          ESTABLISHED 10379/apache2
tcp        0      0 127.0.0.1:42416         127.0.0.1:8009          ESTABLISHED 15167/apache2
tcp        0      0 127.0.0.1:40649         127.0.0.1:8009          ESTABLISHED 5279/apache2
tcp        0      0 127.0.0.1:40236         127.0.0.1:8009          ESTABLISHED 533/apache2
tcp        0      0 127.0.0.1:43695         127.0.0.1:8009          ESTABLISHED 15169/apache2
tcp6       0      0 :::8009                 :::*                    LISTEN      2940/java
tcp6       0      0 127.0.0.1:8009          127.0.0.1:40649         ESTABLISHED 2940/java
tcp6       0      0 127.0.0.1:8009          127.0.0.1:42416         ESTABLISHED 2940/java
tcp6       0      0 127.0.0.1:8009          127.0.0.1:41927         ESTABLISHED 2940/java
tcp6       0      0 127.0.0.1:8009          127.0.0.1:43695         ESTABLISHED 2940/java
tcp6       0      0 127.0.0.1:8009          127.0.0.1:40236         ESTABLISHED 2940/java
tcp6       0      0 127.0.0.1:8009          127.0.0.1:43611         ESTABLISHED 2940/java

(The java process # 2940 is tomcat. Each Apache process that you see is an Apache "child" process running mod_jk. This server is not configured for a heavy load, and is actually doing almost nothing right now.)

What do you see when you enter this command now ?
# netstat --tcp -pan | grep ":8009"


On 30.11.2016 20:39, Jaaz Portal wrote:
hi mark,
thanks, i have fixed the configuration as you pointed out;
maybe this will mitigate the attack

before, there was no connection_timeout in the configuration
and these things were occurring too


best,
artur

2016-11-30 20:29 GMT+01:00 Mark Eggers <its_toas...@yahoo.com.invalid>:

Artur,

On 11/30/2016 10:41 AM, Jaaz Portal wrote:
no, it looks like a dos, it is a dos

i told you, they DoS-ed our bind server before, until we changed it to
another vendor, and later they were scanning my host for apache
vulnerabilities

the configuration is standard; the only thing i changed (after your guidance)
is connection_timeout
but this does not work for this exploit

workers.properties
worker.list=ajp13_worker

#
#------ ajp13_worker WORKER DEFINITION ------------------------------
#---------------------------------------------------------------------
#

#
# Defining a worker named ajp13_worker and of type ajp13
# Note that the name and the type do not have to match.
#
worker.ajp13_worker.port=8009
worker.ajp13_worker.host=localhost
worker.ajp13_worker.socket_timeout=60000
worker.ajp13_worker.type=ajp13
#
# Specifies the load balance factor when used with
# a load balancing worker.
# Note:
#  ----> lbfactor must be > 0
#  ----> Low lbfactor means less work done by the worker.
worker.ajp13_worker.lbfactor=1

#
# Specify the size of the open connection cache.
#worker.ajp13_worker.cachesize

#
#------ DEFAULT LOAD BALANCER WORKER DEFINITION ----------------------
#---------------------------------------------------------------------
#

#
# The loadbalancer (type lb) workers perform weighted round-robin
# load balancing with sticky sessions.
# Note:
#  ----> If a worker dies, the load balancer will check its state
#        once in a while. Until then all work is redirected to peer
#        workers.
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=ajp13_worker

------------
server.xml


  <Connector port="8009" protocol="AJP/1.3" connectionTimeout="60000"
redirectPort="8443" maxConnections="256" keepAliveTimeout="30000"/>

best,
artur

 From the following fine documentation (which André has posted before):

http://tomcat.apache.org/connectors-doc/reference/workers.html

connection_pool_timeout (lots of stuff) . . . last paragraph:

You should keep this time interval in sync with the keepAliveTimeout
attribute (if it is set explicitly) or connectionTimeout attribute of
your AJP connector in Tomcat's server.xml. Note however, that the value
for mod_jk is given in seconds, the one in server.xml has to use
milliseconds.

The last line of the above snippet of the documentation is very important.

Now let's look at your values.

 From workers.properties:
worker.ajp13_worker.socket_timeout=60000

 From server.xml
connectionTimeout="60000"

So your socket_timeout value from workers.properties is 60,000 seconds
(16 hours, 40 minutes), while your connectionTimeout value is 60,000
milliseconds (1 minute).

And your keepAliveTimeout (30,000 = 30 seconds) is not in sync with
either value.

So . . .

1. remove keepAliveTimeout from your AJP connector
2. change worker.ajp13_worker.socket_timeout to 60

This will at least get you in line with the documentation. You can then
proceed to diagnose whether you have a DOS (or DDOS) attack, an
application issue, or if this solved the problem.
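
For illustration, this is roughly what those two changes would look like
(only the timeout-related lines are shown; the other attribute values are
copied from the configuration you posted, so adjust as needed):

In workers.properties:
worker.ajp13_worker.socket_timeout=60

In server.xml (keepAliveTimeout removed):
  <Connector port="8009" protocol="AJP/1.3" connectionTimeout="60000"
             redirectPort="8443" maxConnections="256"/>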

. . . just my two cents (if I've done the math right)
/mde/


2016-11-30 19:21 GMT+01:00 Mark Eggers <its_toas...@yahoo.com.invalid>:

Artur,
On 11/30/2016 8:36 AM, Jaaz Portal wrote:
hi,
they have tried again, with success, despite setting connection_timeout
and limiting the number of clients with mod_bw;
tomcat has frozen again.

netstat did not show any connections on port 80, but plenty of
connections from apache to localhost:8009,
so it was not the attack that you described (no slowloris)

i'm looking into the mod_jk debug files and the forensic log for some hints.
If you want, i can share them (they are 4MB compressed)

best wishes
artur

This is beginning to look like an application or a configuration issue
and not a DOS (or DDOS) attack.

One of the issues that may cause this is a mismatched timeout value between
connection_pool_timeout in workers.properties (mod_jk) and
connectionTimeout in server.xml (Tomcat) for the AJP connector.

Also, at least for the mod_jk version that I'm running, there is no
limit for reply_timeout (mod_jk) by default.

Can you post your workers.properties file and the AJP connector portion
of your server.xml?

In the conf directory of the mod_jk source code, there is a very nice
workers.properties file that has sensible defaults. If you've not done
so, I suggest that you start with the values specified in that file, and
make sure that the timeout values match (see my comment above).

Also, when you used mod_proxy, did you use mod_proxy_ajp or
mod_proxy_http? If you used mod_proxy_ajp, then again there could be a
timeout mismatch (or no timeout specified at all).

. . . just my two cents
/mde/


2016-11-29 11:01 GMT+01:00 André Warnier (tomcat) <a...@ice-sa.com>:

On 28.11.2016 22:04, Jaaz Portal wrote:

hi Andre,
you are wrong. This vulnerability is not only causing memory leaks,
it also makes apache workers hang


Maybe for the last time here :

- what do you call "apache workers" ?

, making it easy to exhaust the pool.

- what do you call "the pool" ?

that is what i have in my log files. But it is also true that such
exhaustion can be caused by the other forms of dos attacks described in
this thread.

regarding your suggestion about our application: it does not dos the bind
server, nor does it scan for various vulnerabilities in apache, which is
what i also have in the logs


For your information : I run about 25 Internet-facing Apache webservers
(some with a back-end tomcat, some not).
On every single one of those webservers, there are *hundreds* of such
"scans" every day, as shown by the Apache access logs.  That is just a
fact of life on the Internet.
They are annoying, but most of them are harmless (from an "attack" point
of view), because they are scanning for things that we do not run
(phpmyadmin, xmlrpc, vti_bin, etc., etc., the list is almost endless), and
thus are responded to by Apache as "404 Not found".
What is annoying with those scans, is
a) that they fill the logfile, and thus make it more difficult to find
really significant things
b) that each of those requires some bandwidth and system resources, if
only to return a "404 Not found" (or a "401 Unauthorised"), and that we
pay for that bandwidth and those resources.

If I could find a way to charge 0.1 cent per access to my servers, from
the people who wrote or run the programs which are doing this, I could
retire in luxury.

But they are not a real problem, because they are caught as "invalid" by
Apache, and rejected quickly, so they cannot do anything really nasty
(except if they were sending several thousand such requests per second to
one of my servers for a long time).

The ones that are worrying, are the ones
- a) which do /not/ end up as a "404 Not found", because they have found
an application which responds, and they are not coming from our legitimate
customers
- b) /the ones which we do not see/, because they either do not send a
valid HTTP request, or they have found a way to trigger one of our
applications in such a way that the application misbehaves and, perhaps,
even if they do not crash our servers, they may provide the attacker with
some entry point to do other things which we do not know and do not
control.

What I am trying to say here, is /do not jump to premature conclusions/.
Such "scans" as you mention happen to everyone, all the time, from
ever-changing IP addresses located all over the world. Some of those
"scans" may come from the infected PC of your grandmother, and she does
not even know about it.

There is no guarantee, and no indication or proof so far, that /these/
scans are even related to "the other thing" which happens on your
webserver, which looked much more focused.

So do not just bundle them together as being the same thing, until you
have some real data that shows, for example, that these different things
all come from the same IP addresses.

And one more thing, also finally until you come back with some real data :
I am not saying that your application "scans your server".  What I am
saying is that, maybe, by chance or by design, the attackers have found a
URL which goes to your application, and which causes your application to
keep tomcat and/or Apache busy for a long time.
And that maybe /that/ is the problem you should be looking for, and not
some hypothetical bug in Apache httpd or tomcat.



kindly regards,
artur

2016-11-28 21:33 GMT+01:00 André Warnier (tomcat) <a...@ice-sa.com>:

On 28.11.2016 20:34, Jaaz Portal wrote:

hi mark,
yes, i understand now what a slowloris attack is.
maybe it was this, maybe *this one, based on the mod_proxy denial of service*:
CVE-2014-0117 <http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0117>


You keep on saying this, but the description of that vulnerability of
*Apache httpd*, and the symptoms which you described, *do not match*.
You described the problem as occurring in Apache tomcat, which in your
case is sitting as a back-end, *behind* Apache httpd. And restarting
tomcat cured the problem.

The CVE above applies to Apache httpd, and describes how an attacker could
attack Apache httpd and cause *its children* processes to crash (the
children processes of Apache httpd), by leading them to consume a lot of
memory and crash with an out-of-memory error.
Granted, the problem occurred in the mod_proxy module of Apache httpd; but
it was httpd which crashed, not tomcat.
And tomcat processes are not "Apache httpd children processes" in any
understanding of the term.

As far as I remember, you never mentioned Apache httpd crashing. You
mentioned "the pool" getting full or saturated or something like that,
without ever describing properly what you meant by "the pool".

As far as I am concerned, according to all the relatively unspecific
information which you have previously provided :
1) the attack - if that is what it is/was - is definitely NOT related to
the CVE which you have repeatedly mentioned
2) it is apparently not a "classical" DoS or "slowloris DoS" directed at
your front-end Apache. Instead, it seems that the requests are properly
received by Apache, properly decoded by Apache, and then whatever Apache
proxy module you are using (mod_proxy_http, mod_proxy_ajp or mod_jk) is
properly forwarding these requests to a back-end tomcat; and it is at the
level of that back-end tomcat that the requests never seem to end, and in
the end paralyse your tomcat server (and later on maybe your Apache httpd
server too, because it is also waiting for tomcat to respond).

So your very way of describing the problem, in terms of "first we used
this proxy module, and then they exploited the vulnerability so and so;
then we changed the proxy module, and they exploited that too; etc."
seems to not have anything to do with the problem per se, and (I believe)
confuses everyone, including yourself.

It is not that "they" exploited different vulnerabilities of various httpd
proxy modules, one after the other. Each of these proxy modules was doing
its job properly, and forwarding valid HTTP requests properly to tomcat.
When you changed from one proxy module to another, you did not really
change anything in that respect, because any proxy module would do the
same.

But in all cases, what did not change, was the tomcat behind the
front-end, and the application running on that tomcat.  So the presumed
attackers did not have to change anything, they just kept on sending the
same requests, because they were really targeting your back-end tomcat or
the tomcat application in it, no matter /how/ you were forwarding requests
from Apache httpd to tomcat.

So either it is tomcat itself, which has a problem with some request URLs
which do not bother Apache httpd (possible, but statistically unlikely),
or it is the application which runs in tomcat that has such a problem
(statistically, much more likely).

we do not know yet

we have set up more logging and are waiting for them to attack once again


Yes, that is the right thing to do.  Before deciding what the problem may
be, and what you can do about it, the first thing you need is *data*.  You
need to know
- which request URL(s?) cause that problem
- which IPs these requests come from (always the same ? multiple IPs that
change all the time ? how many ? can these IPs be valid/expected clients
or not ? do these IPs look like some "coordinated group" ?)
- how many such requests there may be during some period of time (10, 100,
1000, more ?)
- if these URLs result in passing the request to tomcat
- what tomcat application (if any) they are directed to
- if so, when that application receives such a request, what is it
supposed to do ? does it do it properly ? how long does it need, to
respond to such a request ?

You also need to ask yourself a question : is the application which you
run inside tomcat something that you designed yourself (and which hackers
are unlikely to know well enough to find such a URL which paralyses your
server) ? or is it some well-known third-party java application which you
are running (and for which would-be attackers would be much more likely to
know exactly such a bug) ?


anyway, thank you for all the information, it was very useful and
educational reading for all of us

best wishes,
artur

2016-11-28 19:46 GMT+01:00 Mark Eggers <its_toas...@yahoo.com.invalid>:

Jaaz,


On 11/27/2016 2:46 PM, André Warnier (tomcat) wrote:

On 27.11.2016 19:03, Jaaz Portal wrote:

2016-11-27 18:30 GMT+01:00 André Warnier (tomcat) <a...@ice-sa.com>:

On 27.11.2016 14:26, Jaaz Portal wrote:


hi,

everything i know so far is just this single log line that appeared
in the apache error.log

[Fri Nov 25 13:08:00.647835 2016] [mpm_event:error] [pid 13385:tid 139793489638592] AH00484: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting

there was nothing else, just this strange line


This is not a "strange" line. It is telling you something.
One problem is that you seem convinced in advance, without serious proof,
that there is a "bug" or a vulnerability in httpd or tomcat.
Read the explanation of the httpd parameter, here :
http://httpd.apache.org/docs/2.4/mod/mpm_common.html#maxrequestworkers
and try to understand it.


I understand it very well. We are serving no more than 500 clients per
day, so there was no other option than some kind of attack.


About the "bug" or "vulnerability" : a webserver is supposed to serve HTTP
requests from clients.  That is what you expect of it. It has no choice
but to accept any client connection and request, up to the maximum it can
handle considering its configuration and the capacity of the system on
which it runs. That is not a bug, it is a feature.


Some weeks ago we came under attack from some Polish group. First it was
bind that was DoS'ed; we were running stable Debian, so i updated bind to
the latest version. It did not help. They DoS'ed it again, so we switched
to another dns provider. That helped.

Then they exploited some well-known vulnerability in mod_proxy. We updated
apache to the latest version, but again they exploited it, so we switched
to mod_jk. And then guess what: they exploited it too, so i decided to
write to this list looking for help before trying jetty.



The normal Apache httpd access log will log a request only when it is
finished.  If the request never finishes, it will not get logged.
That may be why you do not see these requests in the log.
But have a look at this Apache httpd module :
http://httpd.apache.org/docs/2.4/mod/mod_log_forensic.html
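
(As a rough sketch only : the module and file paths below are assumptions
and depend on your Debian packaging, so adjust them to your layout :

  # httpd.conf / apache2.conf : forensic log of every request as it starts
  LoadModule log_forensic_module modules/mod_log_forensic.so
  ForensicLog /var/log/apache2/forensic.log

  # mod_jk logging, at debug level while you investigate
  JkLogFile  /var/log/apache2/mod_jk.log
  JkLogLevel debug
)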


ok, thanks, will take care

Note : that is also why I was telling you to enable the mod_jk log, and to
examine it.
Because mod_jk will also log information before the request produces a
response.


and server hanged.

Again, /what/ is "hanged" ? Apache httpd, or tomcat ?


Apache was accepting connections but not processing them. After a restart
of the tomcat server it worked again.


Also, in the access logs there are no clues that it was under any heavy
load.

around an hour after discovering that our server had hung, we restarted
the tomcat server and it worked again


Yes, because that will close all connections between Apache httpd and
tomcat, and abort all requests that are in the process of being processed
by tomcat. So mod_jk will get an error from tomcat, and will report an
error to httpd, and httpd will communicate that error to the clients, and
close their connection.
It still does not tell you what the problem was.
The only thing that it suggests, is that the "bad" requests actually make
it all the way to tomcat.


correct

i will enable the logs that you have pointed out and we will see what i
catch.
however, i think we have only one chance: if the solution we have found
(connection_timeout + mod_bw) works, they will stop exploiting it

thank you very much for all the help and explanations
i will report new facts to the list once they attack us again

best regards,
artur


Ok, but also read this e.g. :
https://www.corero.com/blog/695-going-after-the-people-behind-ddos-attacks.html

Attempting to bring down a site by DoS attacks is a crime, in most places.
You can report it, while at the same time trying to defend yourself
against it.

It is also relatively easy, and quite inexpensive in terms of system
resources, to run a small shell script which takes a list, every few
seconds, of the connections to the port of your webserver, and which IPs
they are coming *from*.
E.g.
First try the netstat command, to see what it lists, like :
# netstat -n --tcp | more

Then you will want to filter this a bit, to only consider established
connections to your webserver, for example :
# netstat -n --tcp | grep ":80" | grep "ESTABLISHED"

Then you will want to send this to a logfile, regularly, like this :

# date >> some_file.log
# netstat -n --tcp | grep ":80" | grep "ESTABLISHED" >> some_file.log
(repeat every 3 seconds)

This will not generate GB of logfiles, and it will tell you, when the
problem happens, how many connections there are exactly to your webserver,
and where they are coming from.
Then later you can further analyse this information.
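
(A minimal sketch of such a loop, if you want to run it unattended; the
log path and the 3-second interval are just examples :

  #!/bin/sh
  # snapshot the established connections to port 80 every 3 seconds
  LOG=/var/log/port80-connections.log
  while true; do
      date >> "$LOG"
      netstat -n --tcp | grep ":80" | grep "ESTABLISHED" >> "$LOG"
      sleep 3
  done
)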



i think that setting connection_timeout and limiting the number of clients
with mod_bw will have the effect that, the next time somebody tries this
exploit, it will block his access to the site for a minute or two; a
pretty good holistic solution i would say

still, it seems that there is a serious vulnerability somewhere in apache,
mod_jk or tomcat
i would like to help find it, but i need some hints on which debug options
to enable to catch the bad guys when they try next time

best regards,
artur

2016-11-27 13:58 GMT+01:00 André Warnier (tomcat) <a...@ice-sa.com>:

On 27.11.2016 13:23, Jaaz Portal wrote:


hi Andre,

thank you very much, this was very educative, but in my case it is a
little bit different.
The server is not flooded; there are maybe a dozen very sophisticated
connections that somehow hang apache worker threads


Can you be a bit more specific ?

When you say "apache workers threads", do you mean threads in Apache
httpd, or threads in Apache Tomcat ? (both are Apache webservers, so it is
difficult to tell what you are talking about, unless you are very
precise).

Let me give you some additional explanations, and maybe you can figure out
exactly where it "hangs".

From the Apache httpd front-end point of view, mod_jk (the connector to
Apache Tomcat) is basically one among other "response generators".  Apache
httpd does not "know" that behind mod_jk, there is one or more Tomcat
servers.  Apache httpd receives the original client request, and depending
on the URL of the request, it will pass it to mod_jk or to another
response generator, to generate the response to the request.
That mod_jk in the background is using a Tomcat server to actually
generate the response, is none of the business of Apache httpd, and it
does not care. All it cares about, is to actually receive the response
from mod_jk, and pass it back to the client.

If httpd passes a request to mod_jk, it is because you have specified in
the Apache configuration, the type of URL that it should pass to mod_jk.
That happens via your "JkMount (URL pattern)" directives in Apache httpd.
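
(For example, a purely illustrative mapping; the "/myapp/*" pattern is a
placeholder, and the worker name assumes a workers.properties like the one
discussed in this thread :

  JkMount /myapp/* ajp13_worker
)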

Of course Apache httpd will not pass a request to mod_jk, before it has
received at least the URL of the request, and analysed it to determine
*if* it should pass it to mod_jk (*).

If the mod_jk logging is enabled, you can see in it, exactly *which*
requests are passed to mod_jk and to Tomcat.
Do you know *which* requests, from which clients, cause this "thread
hanging" symptom ?
Once you would know this, maybe you can design a strategy to block
specifically these requests.

and the effect is permanent. Quickly the pool is exhausted

which pool exactly ?

and the only solution that works is to restart tomcat.

I think it is a bug similar to this one from mod_proxy:
https://tools.cisco.com/security/center/viewAlert.x?alertId=34971

Maybe, maybe not. As long as we do not know what the requests are that
block things, we do not know this.

I think also that your solution with setting connectionTimeout will solve
the problem, at least partially. THANK YOU.

Same thing, we do not know this yet.  I was only giving this explanation,
to help you think about where the problem may be.


I would like to help you further investigate this issue, as our server
comes under such an attack once or twice a week.

Other than giving you hints, there is not much I or anyone else can do to
help. You are the one with control of your servers and logfiles, so you
have to investigate and provide more precise information.


(*) actually, to be precise, Apache httpd passes *all* requests to mod_jk,
to ask it "if it wants that request". mod_jk then accepts or declines,
depending on the JkMount instructions. If mod_jk declines, then Apache
httpd will ask the next response generator if it wants this request,
etc...






best regards,

artur


2016-11-27 12:46 GMT+01:00 André Warnier (tomcat) <a...@ice-sa.com>:

Hi.

Have a look at the indicated parameters in the two pages below.

You may be the target of such a variant of DDoS attack : many clients open
a TCP connection to your server (front-end), but then never send a HTTP
request on that connection.  In the meantime, the server accepts the TCP
connection, and passes it on to a "child" process or thread for
processing.  The child then waits for the HTTP request line to arrive on
the connection (during a certain time), but it never arrives.  After a
while, this triggers a timeout (see below), but the standard value of that
timeout may be such that in the meantime, a lot of other connections have
been established by other such nefarious clients, so a lot of resources of
the webserver are tied up, waiting for something that will never come.
Since there is never any real request sent on the connection, you would
(probably) not see this in the logs either.

The above is the basic mechanism of such an attack.  There may be
variations, such as the client not "not sending" a request line, but
sending it extremely slowly, thus achieving perhaps similar kinds of
effects.

As someone pointed out, it is quite difficult to do something about this
at the level of the webserver itself, because by the time you would do
something about it, the resources have already been consumed and your
server is probably already overloaded.
There are specialised front-end devices and software available, to detect
and protect against this kind of attack.

You may want to have a look at the following parameters, but make sure to
read the caveats (side-effects, interlocking timeouts etc.), otherwise you
may do more harm than good.

Another thing : the settings below are for Apache Tomcat, which in your
case is the back-end. It would of course be much better to detect and
eliminate this at the front-end, or even before.  I had a look at the
Apache httpd documentation, and could not find a corresponding parameter.
But I am sure that it must exist. You may want to post this same question
on the Apache httpd users' list for a better response.

Tomcat configuration settings :

AJP Connector : (http://tomcat.apache.org/tomcat-8.5-doc/config/ajp.html#Standard_Implementations)

connectionTimeout

The number of milliseconds this Connector will wait, after accepting a
connection, for the request URI line to be presented. The default value
for AJP protocol connectors is -1 (i.e. infinite).

(You could for example try to set this to 3000 (milliseconds) or even
lower. That should be more than enough for any legitimate client to send
the HTTP request line.  Note however that by doing this at the Tomcat
level, you will probably move the problem to the Apache httpd/mod_jk
level.  But at least it might confirm that this is the problem that you
are seeing.  The mod_jk logfile at the httpd level may give you some hints
there.)
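
(Purely as an illustration of where that attribute goes; the other
attribute values here are placeholders, not a recommendation :

  <Connector port="8009" protocol="AJP/1.3"
             connectionTimeout="3000"
             redirectPort="8443" />
)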


HTTP Connector : (http://tomcat.apache.org/tomcat-8.5-doc/config/http.html#Standard_Implementation)

connectionTimeout

The number of milliseconds this Connector will wait, after accepting a
connection, for the request URI line to be presented. Use a value of -1 to
indicate no (i.e. infinite) timeout. The default value is 60000 (i.e. 60
seconds) but note that the standard server.xml that ships with Tomcat sets
this to 20000 (i.e. 20 seconds). Unless disableUploadTimeout is set to
false, this timeout will also be used when reading the request body (if
any).



On 26.11.2016 09:57, Jaaz Portal wrote:

hi,

sorry, it's mod_jk, not jk2; my typo. All at the latest versions.  We
tried with mod_proxy too.

There is no flood of the server. Nobody is flooding us; they use some
specific connections after which the pool of apache workers is exhausted
and blocked, and we need to restart the tomcat server.
It is some kind of exploit, but i do not know how to log it to obtain
details.

i had put a limit on connections per client with the hope that this would
help, but once again, it is not a flood.
They open several connections that are not dropped by apache when they
disconnect. This way the whole pool is quickly exhausted and the server
broken.

i would like to help you figure out the details of this attack, but this
is a production server, so it is not possible to enable too many debugging
options

best,
artur

2016-11-25 23:44 GMT+01:00 Niranjan Babu Bommu <niranjan.bo...@gmail.com>:



you can find who is flooding the site in the apache access.log and block
them in the firewall.

for example, to find the IPs:

cat /var/log/apache2/access.log | cut -d' ' -f1 | sort | uniq -c | sort -gr
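
(And, as one way of blocking an offending IP at the firewall; the address
below is of course just a placeholder :

  iptables -A INPUT -s 192.0.2.10 -p tcp --dport 80 -j DROP
)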




On Fri, Nov 25, 2016 at 8:42 AM, Jaaz Portal <jaazpor...@gmail.com> wrote:

hi,

we have been struggling for some weeks with some Polish hackers that are
bringing our server down. After updating apache to the latest version
(2.4.23) and tomcat (8.0.38) available for debian systems, we still cannot
secure our server.

Today it has stopped responding again and we needed to restart the tomcat
process to get it back alive.

There are not too many clues in the logs. The apache error.log gives just
this line:

[Fri Nov 25 13:08:00.647835 2016] [mpm_event:error] [pid 13385:tid 139793489638592] AH00484: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting

it seems that somehow tomcat, mod-jk2 or even apache is vulnerable to some
new exploit, as we certainly do not have such traffic that would block our
server otherwise

for now we have increased MaxRequestWorkers and we have limited the number
of connections from one client to 5 with mod_bw, and limited the number of
simultaneous connections from one ip with iptables, but we do not know if
this will help

best regards,
artur




--

*Thanks*
*Niranjan*


This sounds like a variant of the slowloris attack.

This type of attack doesn't take a large number of clients or
consume a
large amount of bandwidth.

Basically, the maximum number of connections are made to the
server,
and
just enough data is sent to each connection in order to not
trigger
the
timeout. André has explained this in more detail earlier in the
thread.
Search for "slowloris attack" for more information.

There are several ways of mitigating this type of attack.

As André has mentioned, placing a dedicated device in front of
your
systems is often the best way. Lots of benefits (platform neutral,
no
stress on your current servers), and some issues (cost, placement
/
access may be an issue with hosted solutions).

However, there are Apache HTTPD modules that can help mitigate
these
types of attacks. Some of them are:

mod_reqtimeout (should be included by default in your Apache HTTPD 2.4;
see the example after this list)
mod_qos (quality of service module)
mod_security (application firewall with lots of security rules)
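
As an illustration of the first of those, a minimal mod_reqtimeout
snippet; the numbers follow the example in the module's documentation and
are not tuned for your site:

  <IfModule reqtimeout_module>
      # allow 20-40 seconds to receive the request headers, and require at
      # least 500 bytes/second once data starts arriving; similar limits
      # apply to the request body
      RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
  </IfModule>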

Do a quick search on "slowloris attack apache httpd 2.4" to get
some
ideas.

All of them will probably place additional load on your Apache
HTTPD
server, so make sure that the platform is robust enough to manage
the
additional load.

There is also a beta version of the mod_security module written as a
servlet filter. It should be possible to build this and configure the
filter in Tomcat's default web.xml ($CATALINA_BASE/conf/web.xml). I've
not tried this. Also, the code base hasn't seen any activity for 3 years.

Do a quick search on "modsecurity servlet filter" to find out more
about
the servlet filter version of mod_security.

In short, there appear to be some ways to mitigate the attack.

. . . just my two cents
/mde/











