In my nginx.conf, the tweaks to cache, limit, and speed things up include
the following:

# backend cache
proxy_temp_path directive
proxy_cache_[*] directives
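A minimal sketch of that backend-cache part (paths, zone name and times are placeholders, not my actual values):

```
# http context: define the cache zone and temp path
proxy_cache_path /var/cache/nginx/backend levels=1:2
                 keys_zone=backend:10m max_size=1g inactive=60m;
proxy_temp_path  /var/cache/nginx/tmp;

# server/location context, where you proxy_pass to the backend
proxy_cache       backend;
proxy_cache_valid 200 301 10m;
proxy_cache_use_stale error timeout updating;
```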

# file cache
open_file_cache[*] directives
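Spelled out, the file-cache block could look like this (numbers are just a common starting point, tune for your traffic):

```
open_file_cache          max=10000 inactive=30s;
open_file_cache_valid    60s;
open_file_cache_min_uses 2;
open_file_cache_errors   on;
```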

# connection limits
limit_conn_zone [*] directives
limit_conn conn_per_[*] directives
limit_req_zone directives
limit_req directive
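For the limits, a sketch along these lines (zone names and rates are illustrative):

```
# http context: zones keyed on the client address
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;

# server (or http) context: apply them
limit_conn conn_per_ip 20;
limit_req  zone=req_per_ip burst=20 nodelay;
```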

# keepalive
keepalive_[*]
log_subrequest off;
ignore_invalid_headers on;
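The keepalive part, minimally (values illustrative, not tuned):

```
keepalive_timeout  15s;
keepalive_requests 1000;
```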

# compression
gzip on;
gzip_[*] directives;
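The compression block, spelled out (level and types are a common baseline, adjust to your content):

```
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_vary on;
gzip_types text/plain text/css application/json
           application/javascript image/svg+xml;
```
</gzip_insert_placeholder>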

# speed everything up
aio [I believe supported on Linux only]
directio
output_buffers
[listen directive with the reuseport attribute in place]
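Roughly, that speed-up block could look like this (assuming an nginx build with thread-pool support, on Linux; sizes are illustrative):

```
# http/server context
aio            threads;   # offload disk reads to a thread pool (Linux)
directio       4m;        # bypass the page cache for files >= 4 MB
output_buffers 2 64k;

# the "reuse" attribute on the listening port
listen 80 reuseport;
```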


Other stuff to underline:
1. I suggest fine-tuning your <http> section rather than locking down
your individual web apps by restricting single <server> sections; that
decision can in some ways be destructive if you plan to stay live while
your stuff is under attack;
2. scripts from Facebook (yeah, those ones that log your link as you
type it, or the URL previews) and some other friends are able to map
your network by learning your "transit" or local-hop internal IPs. If
you don't like that: deny 192.168.0.0/16 or whatever;
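On point 2, a sketch of the deny approach (the prefixes are examples; use whatever ranges actually show up in your logs):

```
# server or location context
deny  192.168.0.0/16;
deny  10.0.0.0/8;
deny  172.16.0.0/12;
allow all;
```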


Hope this somehow helps,


Dan

------
bsdload.com - Repo: https://code.5mode.com

Please reply to the mailing-list, leveraging technical stuff.


Dan <d...@nnnne-o-o-o.com> wrote:

> 
> Incidentally, I've also been running into this kind of problem
> recently with my Splash engine (now stopped)
> code.5mode.com (https://5mode.net/l/ddos1)
> 
> However, my log for code. reports "just" 12 server errors in 1 week..
> 
> Obviously the targets of these gentlemen are the few web apps heavily
> dependent on db layers.
> 
> I work on nginx as a frontend as well, tweaked (but some tweaks work
> on Linux only) and templatized. Happy to share with you eventually.
> 
> Stuart: Did you maybe mean filtering referers by regex? Well, that
> can't be the cure..
> 
> 
> Dan
> 
> ------
> bsdload.com - Repo: https://code.5mode.com
> 
> Please reply to the mailing-list, leveraging technical stuff.
> 
> 
> 
> Stuart Henderson <stu.li...@spacehopper.org>:
> 
> > On 2025-03-15, Kirill A  Korinsky <kir...@korins.ky> wrote:
> >> On Fri, 14 Mar 2025 23:33:45 +0100,
> >> Nick Holland <n...@holland-consulting.net> wrote:
> >>>
> >>> As you may have noticed, cvsweb.openbsd.org has been having
> >>> issues.  This time, it is due to effectively a Distributed Denial
> >>> of Service, though I don't actually believe it is /deliberately/
> >>> malicious.  Speculation is someone is trying to feed a so-called
> >>> AI application from cvsweb.  While I admire the idea of training
> >>> an AI from the work of some of the best programmers in the world,
> >>> cvsweb is a perl script that writes a lot of temp files.  The
> >>> current system is many times the first cvsweb HW I set up many
> >>> years ago, and won't even notice humans using it, when hundreds
> >>> of simultaneous automated queries are happening, things get bad
> >>> quickly.
> >>>
> >>> FOR NOW, I've stopped the ability of cvsweb to show diffs of file
> >>> revisions.  This is where both much of the abuse was happening,
> >>> and also much of the load on the system came from.
> >>> YES, that's horribly annoying, but you can still download any
> >>> individual version of a file and you can still see the annotated
> >>> output.  I'll be thinking about a longer-term solution (which may
> >>> also be "wait until they get bored and move on").
> >>>
> >>
> >> Sounds like Nginx as frontend with enabled cache should help.
> >
> > Unlikely that a cache will help, there are a *lot* of revisions to
> > show diffs of...
> >
> > However nginx would allow blocking user agents by regex (and also
> > would avoid another problem that these sites run into from time..)
> 




