I’m wondering if you are overthinking this. You said that the memory was reused
when the workload increased again. Linux memory management is unintuitive. What
would happen if you used a different metric, say the number of active
connections, as your autoscaling metric? It sounds like this would behave “better”.
I’d suggest that you use wrk2, httperf, ab or similar to run a synthetic test.
Can your site handle one request every five seconds? One request every second?
Five every second? ... is your backend configured to log service times? Is your
nginx configured to log service times? What do you see? By
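A stepped constant-rate test along those lines might look like this (the URL, thread/connection counts, and rates are placeholders, not from the original mail; the wrk2 binary is often installed under the name wrk):

```shell
# Hypothetical wrk2 runs stepping the offered load. wrk2 holds a constant
# request rate (-R) and reports coordinated-omission-corrected latency
# percentiles (--latency).
wrk2 -t2 -c10 -d60s -R1  --latency https://your-site.example/   # 1 req/s
wrk2 -t2 -c10 -d60s -R5  --latency https://your-site.example/   # 5 req/s
wrk2 -t2 -c10 -d60s -R50 --latency https://your-site.example/   # 50 req/s
```

On the nginx side, adding $request_time and $upstream_response_time to a custom log_format gives you service times to compare against the client-side numbers.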
Andreas,
Do you know of any large, high traffic sites that are using HSTS today?
Peter
> On Jun 5, 2019, at 12:56 PM, A. Schulze wrote:
>
>
>
> Am 05.06.19 um 14:54 schrieb Sathish Kumar:
>> Hi Team,
>>
>> We would like to fix the HTTPS pinning vulnerability on our Nginx and Mobile
>> appl
Mik,
I’m not going to get into the openbsd question, but I can tell you some of the
different things that I have done to solve this kind of problem in the past.
Your environmental constraints will impact which is feasible:
1. Use tcpdump to capture packets
2. Use netcat as an intercepting proxy
Where is your upstream? Where is your PHP executing? Do you have a CDN?
There are three parts to this:
1 fix the bad OS defaults:
If you are using RHEL 6 this would mean:
Enabling tuned
Disabling THP
Increasing vm.min_free_kbytes
Reducing swappiness
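The OS tunings in part 1 can be sketched as follows (run as root; the sysctl values are illustrative starting points, not universal recommendations):

```shell
# Enable a tuned profile appropriate for a server workload
tuned-adm profile throughput-performance

# Disable transparent huge pages. RHEL 6 uses the redhat_ prefixed path;
# newer kernels use /sys/kernel/mm/transparent_hugepage/enabled instead.
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled

# Keep a larger reserve of free memory and discourage swapping
sysctl -w vm.min_free_kbytes=131072   # illustrative value
sysctl -w vm.swappiness=1
```

To make the sysctl settings survive a reboot, persist them in /etc/sysctl.conf.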
2 generic web server specific configuration
Increa
I don’t think it’s a dumb question at all. It’s a very astute question.
My experience of protecting a high traffic retail website from a foreign
state-sponsored DDoS was that doing IP blocking on a hardware load balancer in
front of the nginx tier was the difference between the site being avail
Here’s my opinion:
You can do this however you want. It’s your website. Most of my work has been
for other people. When I was working on my own startup it made me honest.
Nothing was dynamic. The rationale was “do everything ahead of time so users
never wait for anything and the site has 100%
Perhaps I’m naive or just lucky, but I have used nginx on many contracts and
permanent jobs for over ten years and have never attempted to reload
configurations. I have always stopped then restarted nginx instances one at a
time. Am I not recognizing a constraint that affects other people?
Curi
Jon,
You need to find out what is “true”. From the perspective of nginx,
this post request took 3.02 secs - but where was the time actually spent?
Do you have root access on both your nginx host and your upstream host
that is behind your elastic load balancer? If so, you can run a filtered
tcpdu
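A filtered capture along those lines might look like this (the upstream address and port are hypothetical; substitute your own):

```shell
# Capture only traffic between nginx and the upstream so the capture
# file stays small enough to inspect comfortably.
tcpdump -i any -s 0 -w upstream.pcap host 10.0.1.20 and tcp port 8080

# Later, summarize per-second traffic timing from the capture:
# tshark -r upstream.pcap -q -z io,stat,1
```

Comparing packet timestamps on both hosts tells you whether the 3.02 seconds was spent in the network, in nginx, or in the upstream itself.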
Satish,
The browser (client-side) cache isn’t related to the nginx reverse proxy cache.
You can tell Chrome not to cache HTML by adding the following to your location
definition:
add_header Cache-Control 'no-store';
You can use Developer Tools in Chrome to check that it is working.
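In context, a minimal location block might look like this (the upstream name is an assumption for illustration):

```nginx
location / {
    proxy_pass http://backend;            # assumed upstream name
    # The browser is told not to store the response; any nginx-side
    # proxy cache you configure is unaffected by this header.
    add_header Cache-Control 'no-store';
}
```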
Peter
+1 to the openresty suggestion
I’ve found that whenever I want to do something gnarly or perverse with nginx,
openresty helps me do it in a way that’s maintainable and with any ugliness
minimized.
It’s like nginx with super-powers!
Sent from my iPhone
> On Feb 11, 2019, at 1:34 PM, Robert Pap
You are specifying a key zone that can hold about 80 million keys,
and a three-level cache. Do you really have that many cached files?
Unless you are serving petabytes of content, I’d suggest reverting your
settings to default values and running some test cases to validate
correct caching behavior.
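For scale: the nginx docs state that one megabyte of keys_zone holds roughly 8,000 keys, so ~80 million keys implies a zone of around 10 GB. A more typical configuration looks like this (values illustrative):

```nginx
# keys_zone=app_cache:10m holds roughly 80,000 keys;
# levels=1:2 is the usual two-level directory layout.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=10g inactive=60m use_temp_path=off;
```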
You should be able to answer this by tailing the logs of your nginx and origin
server at the same time.
It would be helpful if you shared an (anonymized) section of both logs. When I
say fast or slow
I might mean something very different to what you hear.
> On 11 Feb 2019, at 10:06 AM, joao.pere
Open this and you will see that a request to https://digitalkube.com/ returns a
301 pointing back to itself.
Check your CDN configuration
https://redbot.org/?uri=https%3A%2F%2Fdigitalkube.com%2F
Sent from my iPhone
> On Jan 28, 2019, at 11:47 AM, Gary wrote:
>
> Log files? Nginx.conf file? Y
Petrosetta,
Question: is your nginx server running on the same host as your OWIN / IIS
server?
With OWIN / IIS listening only on port 80 and nginx only on port 443?
And both listening on the physical NIC (not localhost) and no firewall?
It looks as though you want to do SSL termination an
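If that is the layout, a minimal termination config might look like this (the certificate paths, server name, and IIS address are all assumptions):

```nginx
server {
    listen 443 ssl;
    server_name example.com;                          # assumed
    ssl_certificate     /etc/nginx/certs/site.crt;    # assumed path
    ssl_certificate_key /etc/nginx/certs/site.key;    # assumed path

    location / {
        # Forward decrypted traffic to the OWIN / IIS listener on port 80
        proxy_pass http://192.168.1.10:80;            # assumed address
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```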
If you use the openresty nginx distribution then you can write a few lines of
Lua to implement your custom logic.
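As a sketch of what “a few lines of Lua” looks like in an OpenResty config (the location, header name, and logic are made up for illustration):

```nginx
location /api {
    access_by_lua_block {
        -- hypothetical custom logic: reject requests missing a header
        if not ngx.req.get_headers()["X-App-Token"] then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    proxy_pass http://backend;   # assumed upstream
}
```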
Sent from my iPhone
> On Jan 13, 2019, at 9:13 AM, shahzaib mushtaq wrote:
>
> Hi,
>
> We've a location like /school for which we want to set browser cache lifetime
> as 'current
Is your nginx/Apache site visible on the internet without any authentication?
If so, I recommend that you access your site directly (not through Cloudflare)
with redbot.org, which is the best HTTP debugger ever, for both the nginx and
Apache versions of the site, and see how they compare.
Why is
1. What does GET / return?
2. You said that nginx was configured as a reverse proxy. Is / proxied to a
back-end?
3. Does GET / return the same content to different users?
4. Is the user-agent identical for these suspicious requests?
Sent from my iPhone
> On Jan 10, 2019, at 11:19 PM, gnusys wr
How do you know that this is an attack and not “normal traffic”?
How are these requests different from regular requests?
What do the weblogs say about the “attack requests”?
> On 10 Jan 2019, at 10:30 PM, gnusys wrote:
>
> My Current settings are higher except the worker_process
>
> worker_pro
Your web server logs should have the key to solving this.
Do you know what url was being requested? Do the URLs look valid?
Are there requests all for the same resource?
Are the requests coming from a single IP range?
Are the requests all coming with the same user-agent?
Does the time this starte
The important question here is not the connections in FIN_WAIT. It’s “why do
you have so many sockets in ESTABLISHED state?”
The first thing to do is to run
netstat -ant | grep tcp and see where these connections go.
Do you have a configuration that is causing an endless loop of requests?
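A quick way to summarize how many sockets are in each state (shown here against a canned sample so the pipeline is concrete; in production pipe netstat -ant straight in):

```shell
# Count TCP connections by state. The sample below stands in for real
# `netstat -ant` output; the state is the sixth whitespace field.
netstat_sample='tcp 0 0 10.0.0.5:443 192.0.2.10:51234 ESTABLISHED
tcp 0 0 10.0.0.5:443 192.0.2.11:51235 ESTABLISHED
tcp 0 0 10.0.0.5:8080 127.0.0.1:40000 FIN_WAIT2'
printf '%s\n' "$netstat_sample" | awk '{print $6}' | sort | uniq -c | sort -rn

# In production:
# netstat -ant | awk '/^tcp/ {print $6}' | sort | uniq -c | sort -rn
```

The remote-address column (fifth field) answers the “where are these connections to?” question the same way.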
nts are difficult to be generated in advance such as
> the the contents of some search results. There are lots of search words for a
> website for which there are too many result items. How should we handle this
> issue?
>
>
>
> At 2018-11-02 21:16:18, "Peter Booth via ngin
So this is a very interesting question. I started writing dynamic websites in
1998. Most developers don’t want to generate static sites. I think their
reasons are more emotional than technical. About seven years ago I had two jobs
- the day job was a high traffic retail fashion website. The side
e have the script? My problem is intermittent and I don’t
> know if it’s a good idea to actively listen to production logging.
>
>
>
>
> On Sat, Oct 6, 2018 at 3:21 PM Peter Booth via nginx <nginx@nginx.org> wrote:
> You need to understand what requests are b
You need to understand what requests are being received, what responses are
being sent and the actual keys being used to write to your cache.
This means intelligent request logging, possibly use of redbot.org, and
examination of your cache. I used to use a script that someone had posted here
y
One more approach is to not change the contents of resources without also
changing their name. One example would be the cache_key feature in Rails, where
resources have a path based on some ID and their updated_at value. Whenever you
modify a resource it automatically expires.
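The nginx side of that pattern is then trivial: fingerprinted names never change, so they can be cached forever (the regex and extensions are illustrative):

```nginx
# Assets whose filename embeds a content hash can be cached indefinitely;
# changing the content changes the URL, so no expiry is ever needed.
location ~* "\.[0-9a-f]{8,}\.(css|js|png|jpg)$" {
    expires max;
    add_header Cache-Control "public, immutable";
}
```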
Sent from my iPho
Quintin,
Are most of your requests for dynamic or static content?
Are the requests clustered such that there are a lot of requests for a few
(between 5 and 200, say) URLs?
If three different people make same request do they get personalized or
identical content returned?
How long are the cached r
So it’s very easy to get caught up in the trap of having unrealistic mental
models of how web servers work. If your host is a
recent (< 5 years) single-socket host then you can probably support 300,000
requests per second for your robots.txt file. That’s because the f
I’ve tried chef, puppet and ansible at three different shops. I wanted to like
chef and puppet because they are Ruby based (which I like) but they seemed
clunky, ugly, and heavyweight. Ansible seemed to solve the easy problems. When
I had a startup I just used Capistrano for deployments, with erb