Re: nginScript + nginx 1.11.4, js_run unknown directive ?

2016-09-14 Thread George
Hi Igor, thanks for the clarification. Looking forward to updated
examples/wiki for nginScript :)

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269548,269559#msg-269559

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers

2016-09-14 Thread c0nw0nk
I'll test it further, but it definitely did not work with the following
config using nginx_basic.exe (it was blocking the CloudFlare server IPs
from connecting):

http {
    #Inside http

    real_ip_header CF-Connecting-IP;

    limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        # server domain etc here

        location ~ \.mp4$ {
            limit_conn addr 10;  #Limit open connections from same ip
            limit_req zone=one;  #Limit max number of requests from same ip

            mp4;
            limit_rate_after 1m; #Start limiting after the first 1m
            limit_rate 1m;       #Limit download rate
            root '//172.168.0.1/StorageServ1/server/networkflare/public_www';
            expires max;
            valid_referers none blocked networkflare.com *.networkflare.com;
            if ($invalid_referer) {
                return 403;
            }
        }
    } #End server block
} #End http block
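For reference, the realip module only trusts addresses listed via set_real_ip_from; without it, real_ip_header on its own has no effect, so $binary_remote_addr keeps holding CloudFlare's proxy address. A minimal sketch of the missing piece (the two CIDRs are examples taken from CloudFlare's published ranges; the full, current list should be used):

```nginx
# Inside http {}. Illustrative sketch only: replace these example CIDRs
# with the complete current list from https://www.cloudflare.com/ips/.
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
real_ip_header CF-Connecting-IP;
```

With these in place, rate limiting keyed on $binary_remote_addr applies to the real visitor address rather than to CloudFlare's proxies.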

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269502,269562#msg-269562



Re: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers

2016-09-14 Thread itpp2012
A simple test from here:
http://hg.nginx.org/nginx-tests/rev/4e6d21192037
passes and works as it should, even with the basic version. Also have a
look at:
http://serverfault.com/questions/409155/x-real-ip-header-empty-with-nginx-behind-a-load-balancer

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269502,269568#msg-269568



Re: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers

2016-09-14 Thread Daniël Mostertman
On 2016-09-14 02:02, c0nw0nk wrote:
> I take it the module is a part of the Nginx.exe build and not
> Nginx_basic.exe 
The fact that nginx ships the module, and that it is available at
build time, does not mean it was compiled in. At a minimum, the
--with-http_realip_module parameter must be passed to configure.
I'm not sure how these builds are configured; I can't test them.

Doesn't nginx_basic.exe support the -V parameter? Does it display the
configure options? Check whether the module is there.
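One way to check is to inspect the configure arguments, which `-V` prints. A sketch (the sample output string below is an assumption, inlined so the check itself is reproducible; against the real binary you would pipe the `-V` output instead):

```shell
# Sketch: checking a build for the realip module. "nginx -V" prints the
# configure arguments to stderr. Against the real binary you would run:
#   nginx_basic.exe -V 2>&1 | findstr http_realip_module   (Windows)
#   nginx -V 2>&1 | grep http_realip_module                (Unix)
# The sample string below is a placeholder standing in for that output.
sample_output='configure arguments: --with-http_ssl_module --with-http_realip_module'
case "$sample_output" in
  *--with-http_realip_module*) echo "realip: built in" ;;
  *)                           echo "realip: missing"  ;;
esac
```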



Re: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers

2016-09-14 Thread Reinis Rozitis

> I'll test it further, but it definitely did not work with the following
> using nginx_basic.exe (it was blocking the CloudFlare server IPs from
> connecting)


I'm not familiar with the Windows version of nginx, but it's clear you
have all the required modules. If nginx is blocking something, at least
we know the current configuration somewhat works.

To debug, it is better to start with a minimal configuration.

First of all, which error is returned to the CloudFlare server?
Is it a 503, which would come from the limit_* modules, or a 403, which
would come from an invalid referer?


rr 




Re: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers

2016-09-14 Thread c0nw0nk
Yeah, the reason it does not work behind CloudFlare is that limit_conn
and limit_req are blocking the CloudFlare server IPs for making too many
requests. That is why I am receiving the DoS output "503 Service
Unavailable".

And I don't fancy building a whitelist of IPs, since it would require a
lot of manual updating. The CloudFlare server IPs would need excluding
from the $binary_remote_addr output.

Currently I am using my first method and it works great.

c0nw0nk Wrote:
---
> limit_req_zone $http_cf_connecting_ip zone=one:10m rate=30r/m;
> limit_conn_zone $http_cf_connecting_ip zone=addr:10m;
> 
> location ~ \.mp4$ {
> limit_conn addr 10; #Limit open connections from same ip
> limit_req zone=one; #Limit max number of requests from same ip
> 
> mp4;
> limit_rate_after 1m; #Limit download rate
> limit_rate 1m; #Limit download rate
> root '//172.168.0.1/StorageServ1/server/networkflare/public_www';
> expires max;
> valid_referers none blocked networkflare.com *.networkflare.com;
> if ($invalid_referer) {
> return   403;
> }
> }

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269502,269572#msg-269572



Re: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers

2016-09-14 Thread B.R.
On Wed, Sep 14, 2016 at 2:23 PM, c0nw0nk 
wrote:

> Yeah, the reason it does not work behind CloudFlare is that limit_conn
> and limit_req are blocking the CloudFlare server IPs for making too
> many requests. That is why I am receiving the DoS output "503 Service
> Unavailable".
>

Misconfiguration.


> And I don't fancy building a whitelist of IPs, since it would require
> a lot of manual updating. The CloudFlare server IPs would need
> excluding from the $binary_remote_addr output.
>

Void argument.
If you had done your homework, you would have realized the list provided
in the example is taken from CloudFlare's published IP addresses, which
are also conveniently delivered in text format to ease automatic
retrieval. You'll have to choose whether you want to fully automate the
verification/update of those IP addresses, or introduce a manual
check/action into the process.
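The automated route can be sketched as a small script (the file names are illustrative, and two sample ranges are inlined so the transformation itself is reproducible; in production you would fetch the live list):

```shell
# Sketch: turn CloudFlare's published IPv4 ranges into set_real_ip_from
# directives. Against the live list you would fetch it first, e.g.:
#   curl -s https://www.cloudflare.com/ips-v4 -o cf-ips.txt
# Here two sample ranges are inlined instead; output file name is
# illustrative.
cat > cf-ips.txt <<'EOF'
173.245.48.0/20
103.21.244.0/22
EOF

awk '{ print "set_real_ip_from " $0 ";" }' cf-ips.txt > cloudflare-realip.conf
cat cloudflare-realip.conf
```

The generated file can then be pulled into the http block with an include directive, and the script re-run from cron to keep the list current.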

Currently i am using my first method and it works great.
>

You have already stated that several times. There is no point in asking
for help if you won't listen to the answers.
Happy with your resource-greedy, unoptimized way? Fine. End of
transmission.
Others seeking best practices for combining limit_req, limit_rate and
the realip module will find all the information already available.

Best of luck in your proceedings,
---
*B. R.*

Re: nginx not returning updated headers from origin server on conditional GET

2016-09-14 Thread Maxim Dounin
Hello!

On Wed, Sep 14, 2016 at 02:19:25AM -0400, jchannon wrote:

> NGINX authors might want to read this thread. Essentially Mark is saying
> that this is a bug
> https://twitter.com/darrel_miller/status/775684549858697216

The fact that headers are not merged is one of the main reasons
why proxy_cache_revalidate is not switched on by default.

As for the headers specifically mentioned in this thread:

- nginx does update the Date header (actually, this is the only header
  updated);

- nginx does not support the Age header at all, see
  https://trac.nginx.org/nginx/ticket/146.
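For readers following the thread, a minimal sketch of a config enabling the directive under discussion (cache path, zone name and upstream address are illustrative), inside http {}:

```nginx
# Illustrative only: path, zone name and upstream are placeholders.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=demo_cache:10m;

server {
    listen 80;

    location / {
        proxy_cache demo_cache;
        # Revalidate expired entries with conditional GETs
        # (If-Modified-Since / If-None-Match) instead of full fetches;
        # note the header-merging caveat discussed above.
        proxy_cache_revalidate on;
        proxy_pass http://127.0.0.1:8080;
    }
}
```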

-- 
Maxim Dounin
http://nginx.org/



no live upstreams and NO previous error

2016-09-14 Thread drook
Hi.

I've set up a multiple upstream configuration with nginx as a load balancer.
And yup, I'm getting 'no live upstreams' in the error log, in maybe 1-3%
of requests. And yes, I know how this works: nginx marks a backend in an
upstream as dead when it receives an error from it, and these errors are
configured with proxy_next_upstream; when all of the servers in an
upstream group are under such holddowns, you get this error. So if
you're getting these errors, basically all you need to do is fix their
root cause, like timeouts and various 50x responses, and 'no live
upstreams' will be gone.

But in my case I'm getting these like all of a sudden. I would be happy to
see some timeouts, or 50x from backends and so on. Nope, I'm getting these:

2016/09/14 20:27:58 [error] 46898#100487: *49484 no live upstreams while
connecting to upstream, client: xx.xx.xx.xx, server: foo.bar, request: "POST
/mixed/json HTTP/1.1", upstream: "http://backends/mixed/json", host:
"foo.bar"

And in the access log these:

xx.xx.xx.xx - - [14/Sep/2016:20:27:58 +0300] foo.bar "POST /mixed/json
HTTP/1.1" 502 198 "-" "-" 0.015 backends 502 -

And the funniest thing is that I'm getting a burst of these requests
while the previous ones are 200. It really looks like the upstream group
suddenly switches to a dead state, and since I don't believe in
miracles, I think there must be a cause for that; nginx just doesn't log
it for some reason.

So, my question is: if this isn't caused by HTTP errors (since I don't
see the errors on the backends), can it be caused by a sudden lack of L3
connectivity? Like dropped TCP connections, intermediary packet filters
and so on?
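The 'dead' marking described above is driven by per-server parameters; a sketch of the knobs involved (server addresses and failure conditions are placeholders, not a recommendation):

```nginx
upstream backends {
    # max_fails / fail_timeout control when a peer is marked dead:
    # after max_fails failed attempts within fail_timeout, the server
    # is skipped for fail_timeout seconds. When every peer is in that
    # state, requests fail with "no live upstreams".
    server 10.0.0.1:8080 max_fails=3 fail_timeout=10s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=10s;
}

server {
    location / {
        proxy_pass http://backends;
        # Which upstream outcomes count as failures:
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```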

Thanks.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269577,269577#msg-269577



3rd party module for generating avatars on-the-fly

2016-09-14 Thread dizballanze
Hi folks,

I am happy to announce my first module for nginx:
ngx_http_avatars_gen_module.
It uses libcairo to generate avatars based on user initials.
Check it out on GitHub:
https://github.com/dizballanze/ngx_http_avatars_gen_module

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269578,269578#msg-269578



Run time php variable change

2016-09-14 Thread Tseveendorj Ochirlantuu
Hello,

I'll try to explain what I want to do. I have a website where the PHP
max_execution_time needs to differ per action.

default max_execution_time = 30 seconds

but I need to increase the execution time to 60 seconds for some
location or action, e.g.

http://example.com/request

Is it possible to do that from nginx with php-fpm?
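One common approach (a sketch; the socket path and the front-controller script are assumptions about the setup) is to pass a per-location PHP_VALUE to PHP-FPM via fastcgi_param, which overrides php.ini settings for requests matched by that location:

```nginx
# Default PHP handling: php.ini's max_execution_time (30s) applies.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php-fpm.sock;  # assumed socket path
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

# Longer limit for one action only.
location = /request {
    include fastcgi_params;
    fastcgi_pass unix:/run/php-fpm.sock;
    # Assumed front controller; adjust to how the app routes /request.
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    # Overrides php.ini for this location only; PHP-FPM honors
    # PHP_VALUE ini overrides by default.
    fastcgi_param PHP_VALUE "max_execution_time=60";
}
```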

Regards

Re-balancing Upstreams in TCP Loadbalancer

2016-09-14 Thread Balaji Viswanathan
Hello Nginx Users,

I am running nginx as a TCP load balancer. I am trying to find a way to
redistribute client TCP connections to upstream servers, specifically,
rebalance the load on the upstream servers (on some event) when clients are
using persistent TCP connections.

The scenario is as follows

Application protocol: clients and servers use a stateful application
protocol on top of TCP which is resilient to TCP disconnections, i.e.,
the client and server do application-level acks, so if some 'unit' of
work is not completely transferred, it will be retransferred by the
client.

Persistent TCP connections: the client opens TCP connections which are
persistent, with few bytes being transferred intermittently. Getting the
latest data quickly is important, hence I would like to avoid frequent
(re)connections (both due to connection setup overhead and varying
resource usage). Typical connections last for days.

Maintenance/downtime: when one of the upstream servers is shut down for
maintenance, all its client connections break; clients reconnect and
switch to one of the remaining active upstream servers. When the
upstream is brought back up after maintenance, the load isn't
redistributed, i.e., existing connections (since they are persistent)
remain with the other servers. Only new connections can go to the
restored server. This is more pronounced in a 2-upstream-server setup,
where all connections switch between servers, kind of like a thundering
herd problem.

I would like the ability to terminate some or all client connections
explicitly and have them reconnect. I understand that with nginx
maintaining two connections for every client there might not be a
'clean' time to close a connection, but since there is an application
ack on top, an unclean termination is acceptable. I currently have to
restart nginx to rebalance the upstreams, which is effectively the same
thing.

Restarting all upstream servers and synchronizing their startup is
non-trivial, and so is signalling all clients (1000s) to close and
reconnect. In nginx, I can achieve this partially by disabling keepalive
on the nginx listen port (so_keepalive=off) and then using least_conn as
the load-balancing method on my upstream. However, this is not desirable
in steady state (see persistent TCP connections above), and even though
connections get evenly distributed, the load might not be, as idle and
busy clients will end up with different upstreams.
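The partial workaround described above can be sketched as a stream configuration (addresses and the listen port are placeholders):

```nginx
stream {
    upstream app_backends {
        # Route each new connection to the peer with the fewest active
        # connections, so a freshly restored server gradually picks up
        # load as clients (re)connect.
        least_conn;
        server 10.0.0.1:9000;
        server 10.0.0.2:9000;
    }

    server {
        # so_keepalive=off disables TCP keepalive probes on client
        # sockets; it does not by itself close long-lived connections.
        listen 9000 so_keepalive=off;
        proxy_pass app_backends;
    }
}
```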

nginx Plus features like on-the-fly configuration (upstream_conf) allow
one to change the upstream configuration, but that doesn't affect
existing connections, even if a server is marked as down. Draining of
sessions is only applicable to HTTP requests, not to TCP connections.

Did anyone else face such a problem? How did you resolve it? Any pointers
will be much appreciated.

thanks,
balaji

-- 
--
Balaji Viswanathan
Bangalore
India

Re: no live upstreams and NO previous error

2016-09-14 Thread drookie
(Yup, it's still the author of the original post; my other browser just
remembers another set of credentials.)

If I increase the verbosity of the error_log, I see additional messages
like

upstream server temporarily disabled while reading response header from

but this message doesn't explain why the upstream server was disabled. I
understand that an error occurred, but what exactly? I'm used to seeing
timeouts instead, or some other explicit problem. This looks totally
mysterious to me. Could someone shed some light on it?
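One way to capture more context (a sketch; file paths and the log format name are illustrative) is to raise the error_log level and record the upstream status per request:

```nginx
# At "info" level nginx logs the underlying reason (e.g. connect()
# failed, upstream timed out) alongside the "temporarily disabled"
# message.
error_log /var/log/nginx/error.log info;

http {
    # Per-request view of which peer answered, with what status,
    # and how long it took.
    log_format upstream_dbg '$remote_addr "$request" $status '
                            'upstream=$upstream_addr '
                            'ustatus=$upstream_status '
                            'utime=$upstream_response_time';
    access_log /var/log/nginx/access.log upstream_dbg;
}
```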

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269577,269583#msg-269583
