Hello again,

By coincidence, since my previous email someone has kindly submitted a
fixed-window rate limiting example to the njs-examples GitHub repo.

https://github.com/nginx/njs-examples/pull/31/files/ba33771cefefdc019ba76bd1f176e25e18adbc67

https://github.com/nginx/njs-examples/tree/master/conf/http/rate-limit

The example is for rate limiting in the http context; however, I believe
you could adapt it for stream (and your use case) with minor modifications
(use js_access rather than 'if', as mentioned previously, and set the key
to a fixed value).
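
In case it helps, here is roughly how I'd picture the stream side being
wired up. This is only an untested sketch - the module name (ratelimit.js),
the shared dict name (rl) and the unix socket path are all placeholders,
and it needs a recent njs build with js_shared_dict_zone support:

    load_module modules/ngx_stream_js_module.so;

    stream {
        js_import ratelimit from ratelimit.js;

        # counter storage shared between all workers; old entries time out
        js_shared_dict_zone zone=rl:1m type=number timeout=2m;

        server {
            listen 443;
            js_access ratelimit.allow;    # reject excess connections here
            proxy_pass unix:/var/run/nginx-https.sock;
            proxy_protocol on;            # preserve the client address
        }
    }

    http {
        server {
            # the existing TLS server, now reached only via the stream proxy
            listen unix:/var/run/nginx-https.sock ssl proxy_protocol;
            # ssl_certificate / ssl_certificate_key etc. as before
        }
    }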

Just forwarding it on in case you need it.
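
The njs module behind that js_access handler could then be a minimal
fixed-window counter along these lines (again only a sketch; the limit,
window length and key prefix are arbitrary values I've picked, and it
assumes the "rl" dict above was declared with type=number):

    // ratelimit.js
    const LIMIT  = 100;        // max new connections per window
    const WINDOW = 60 * 1000;  // window length in ms

    function allow(s) {
        // one fixed key per time window, shared by all clients
        const key = 'conn:' + Math.floor(Date.now() / WINDOW);

        // incr() creates the key at 0 if absent and returns the new count
        const count = ngx.shared.rl.incr(key, 1, 0);

        if (count > LIMIT) {
            s.deny();          // excess connection is rejected up front
            return;
        }
        s.allow();
    }

    export default { allow };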


On Sat, 25 Nov 2023 16:03:37 +0800
Zero King <l...@aosc.io> wrote:

> Hi Jordan,
> 
> Thanks for your suggestion. I will give it a try and also try to push 
> our K8s team to implement a firewall if possible.
> 
> On 20/11/23 10:33, J Carter wrote:
> > Hello,
> >
> > A self-contained solution would be to double proxy: first through an nginx
> > stream server, then locally back to an nginx http server (with proxy_pass
> > via a unix socket, or to localhost on a different port).
> >
> > You can implement your own custom rate limiting logic in the stream server 
> > with NJS (js_access) and use the new js_shared_dict_zone (which is shared 
> > between workers) for persistently storing rate calculations.
> >
> > You'd have additional overhead from the stream TCP proxy and the njs, but it
> > shouldn't be too great (at least compared to the overhead of TLS handshakes).
> >
> > Regards,
> > Jordan Carter.
> >
> > ________________________________________
> > From: nginx <nginx-boun...@nginx.org> on behalf of Zero King <l...@aosc.io>
> > Sent: Saturday, November 18, 2023 6:44 AM
> > To: nginx@nginx.org
> > Subject: Limiting number of client TLS connections
> >
> > Hi all,
> >
> > I want Nginx to limit the rate of new TLS connections and the total (or
> > per-worker) number of all client-facing connections, so that under a
> > sudden surge of requests, existing connections get a sufficient share of
> > CPU to be served properly, while excess connections are rejected and
> > retried against other servers in the cluster.
> >
> > I am running Nginx on a managed Kubernetes cluster, so tuning kernel
> > parameters or configuring layer 4 firewall is not an option.
> >
> > To serve existing connections well, worker_connections cannot be used,
> > because it also affects connections with proxied servers.
> >
> > Is there a way to implement these measures in Nginx configuration?