I'm going to try to create an integration test for it so that I can show
the setup doing the unexpected 500ms locking for stale+1 requests. I can then
set the log level to debug and, if I can't figure it out, I will post back here
with the integration tests.
The reason I posted on the devel list was
Hi, I ran into the same issue recently.
Did you try proxy_cache_lock_timeout?
https://forum.nginx.org/read.php?2,276344,276349#msg-276349
But the article below said that simply reducing the busy-loop wait time may
not resolve this problem, which comes down to the nginx event notification
mechanism.
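For readers following along, a minimal sketch of where those lock directives
sit; the upstream name, cache zone and values below are placeholders, not the
settings from this thread.

    # Fragment of a server block; names and values are placeholders.
    location / {
        proxy_pass               http://backend;   # hypothetical upstream
        proxy_cache              app_cache;        # hypothetical keys_zone
        proxy_cache_lock         on;   # only one request populates a missing cache element
        proxy_cache_lock_timeout 5s;   # how long other requests wait on that lock
        proxy_cache_lock_age     5s;   # after this, one more request may go upstream
    }

The point being made above is that a request waiting on the lock is driven by
an internal timer rather than being woken the moment the lock holder finishes,
so shrinking these values alone may not remove the ~500ms wait being discussed.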
Hello!
On Fri, Mar 24, 2023 at 09:24:25AM +0100, Roy Teeuwen wrote:
> You are absolutely right, I totally forgot about the cache_lock.
> I have listed our settings below.
>
> The reason we are using the cache_lock is to save the backend
> application to not get 100's of requests when a stale i
Hey Maxim,
You are absolutely right, I totally forgot about the cache_lock. I have listed
our settings below.
The reason we are using the cache_lock is to keep the backend application from
getting hundreds of requests when a stale item is invalid. Even though we have
use_stale updating, we notice that
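The sketch below is only an illustration of the combination being described
(serving stale content while a single request refreshes it, with the cache
lock collapsing concurrent misses); the path, zone name and times are
placeholders, not the actual settings.

    # Sketch only; names, paths and times are placeholders.
    # http {} context:
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m inactive=1h;

    # server {} context:
    location / {
        proxy_pass            http://backend;          # hypothetical upstream
        proxy_cache           app_cache;
        proxy_cache_valid     200 10m;                 # placeholder freshness window
        proxy_cache_use_stale updating error timeout;  # serve stale while it is being refreshed
        proxy_cache_lock      on;                      # one upstream fetch per missing element
    }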
Hello!
On Thu, Mar 23, 2023 at 09:26:48AM +0100, Roy Teeuwen wrote:
> We are using NGINX as a proxy / caching layer for a backend
> application. Our backend has a relatively slow response time,
> ranging between the 100 to 300ms. We want the NGINX proxy to be
> as speedy as possible, to do thi
Hey,
We are using NGINX as a proxy / caching layer for a backend application. Our
backend has a relatively slow response time, ranging between 100 and 300ms.
We want the NGINX proxy to be as speedy as possible; to do this, we have
implemented the following logic:
- Cache all responses for