"Content-Length" cannot be used in the
response.
For this to work, you'll need appropriate support in your HLS
encoder - that is, it needs to return the last segment via HTTP
while the segment is being produced. If nginx is used to proxy
such requests, everything is
d,
there is no difference between "Content-Length" and
"Transfer-Encoding: chunked" if the full length of a response is
known in advance.
--
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
be displayed incorrectly when using custom color settings in
browsers.
Thanks to Nova DasSarma.
*) Change: the logging level of the "no suitable key share" and "no
suitable signature algorithm" SSL errors has been lowered from
"crit" to "info".
HLS
streaming to work, you'll have to use proxying for the last
segment (the one which is being written to).
and/or use the variables there as
well.
rg/nginx/ticket/1529 for
details.
il it gets a proper new 200 content?
Yes, as per "proxy_cache_use_stale updating;", nginx will serve a
stale cached response until it is able to update the cache.
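A minimal sketch of such a configuration (cache path, zone name, and backend are assumptions, not taken from the original message):

```nginx
# Hypothetical cache zone; adjust path and size to your setup.
proxy_cache_path /var/cache/nginx keys_zone=mycache:10m;

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_cache mycache;
        # Serve a stale entry while a single request refreshes the cache.
        proxy_cache_use_stale updating;
    }
}
```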
Changes with nginx 1.15.5                                        02 Oct 2018
*) Bugfix: a segmentation fault might occur in a worker process when
using OpenSSL 1.1.0h or newer; the bug had appeared in 1.15.4.
*) Bugfix: of minor potential bugs.
--
Maxim Dounin
http://nginx.org
it is not there.
pport neither in BoringSSL sources, nor at
boringssl-review.googlesource.com.
As far as I understand, "Support for ESNI can be found in
BoringSSL" in this article means that some ESNI patches for
BoringSSL might exist somewhere, not yet committed, and probably
not yet public.
t after an error, a working
option would be to switch proxy_next_upstream off, and instead
retry requests on 502/504 errors using the error_page directive.
See http://nginx.org/r/error_page for examples on how to use
error_page.
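A sketch of the suggested approach; the upstream names and addresses here are invented for illustration:

```nginx
upstream primary   { server 192.0.2.10; }
upstream secondary { server 192.0.2.20; }

server {
    location / {
        proxy_pass http://primary;
        # Do not let nginx retry the request within proxy_pass itself.
        proxy_next_upstream off;
        # Instead, retry 502/504 errors against the second upstream.
        error_page 502 504 = @retry;
    }

    location @retry {
        proxy_pass http://secondary;
    }
}
```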
wrong log file.
dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smtp
smtps telnet tftp
Features: AsynchDNS IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL
libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy
Upgrading curl to 7.61.1 doesn't fix things.
A better approach
might be to use some more sophisticated logic to return such
redirects. The simplest solution would be to actually proxy
requests to the upstream servers, and let these servers return
actual redirects to themselves.
e nginx server you are
looking at. In particular, using tcpdump on the server to check
that requests are actually coming might be a good idea.
g in your logs, you can
use "escape=none". Note though that without escaping you
completely depend on the data the variable contains - in
particular, in your case a malicious client will be able to supply
arbitrary data, including multiple log entries or broken records.
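For example, assuming a variable $req_body_json that you already trust (the variable and log path are hypothetical):

```nginx
# escape=none disables escaping of variable values in this format.
log_format custom escape=none '$remote_addr [$time_local] "$request" $req_body_json';
access_log /var/log/nginx/custom.log custom;
```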
(.*){(.*)") { set $args $1%7B$2; }
if ($args ~ "(.*){(.*)") { set $args $1%7B$2; }
if ($args ~ "(.*)}(.*)") { set $args $1%7D$2; }
if ($args ~ "(.*)}(.*)") { set $args $1%7D$2; }
will replace up to two occurrences of "{" and "}" in the request
arguments.
ic is inherited into all nested locations, and
will be configured in "location = /sec/status" as well.
Note well that "location ~ ^/sec" in your configuration will also
match requests to "/security", "/second-version", and so on. Most
likely this is not what you want, so the above example
configuration uses "/sec/" prefix instead.
ive", and "uwsgi_socket_keepalive" directives.
*) Bugfix: if nginx was built with OpenSSL 1.1.0 and used with OpenSSL
1.1.1, the TLS 1.3 protocol was always enabled.
*) Bugfix: working with gRPC backends might result in excessive memory
consumption.
ngx_http_mp4_module might result in worker process memory disclosure
(CVE-2018-16845).
*) Bugfix: working with gRPC backends might result in excessive memory
consumption.
he "listen" directive is
used in a configuration file.
The issues affect nginx 1.9.5 - 1.15.5.
The issues are fixed in nginx 1.15.6, 1.14.1.
Thanks to Gal Goldshtein from F5 Networks for initial report of the CPU
usage issue.
mp4_module.
The issue affects nginx 1.1.3+, 1.0.7+.
The issue is fixed in 1.15.6, 1.14.1.
Patch for the issue can be found here:
http://nginx.org/download/patch.2018.mp4.txt
re-resolve
names by using variables:
set $backend app.dc1.example.com;
proxy_pass http://$backend;
You won't be able to use an upstream block with multiple names
though. See here for details:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
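Note that using a variable in proxy_pass also requires a resolver, so nginx can re-resolve the name at run time; a sketch, with the resolver address and timings assumed:

```nginx
location / {
    resolver 127.0.0.1 valid=30s;      # assumed local DNS resolver
    set $backend app.dc1.example.com;  # re-resolved while nginx runs
    proxy_pass http://$backend;
}
```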
Note that in both cases yo
t on the server.
>
> Any help would be much appreciated.
Make sure you have properly configured ssl_protocols in the
default server for the listen socket in question. If unsure,
configure ssl_protocols at the http{} level.
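That is, something along these lines (the protocol list is an example, not a recommendation):

```nginx
http {
    # Applies to all server{} blocks unless overridden, including
    # the default server for each listen socket.
    ssl_protocols TLSv1.2 TLSv1.3;
    ...
}
```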
Note well that testing using "openssl s_client"
n
be found in RFC 2616, "8.1.4 Practical Considerations",
https://tools.ietf.org/html/rfc2616#section-8.1.4). In this case,
a proper fix would be to improve the client.
er it is an nginx problem in the lingering
> close automatic logic as you mentioned if I provide an example to reproduce
> it?
If 'lingering_close always;' does not help, in contrast to what
you wrote in your second message, this is certai
of trying to maintain
a fake request and return associated pushed resources.
us complex key stores, including
hardware tokens, to access keys, though it may not be trivial to
configure.
you do not have IPv6
configured on the host).
In your particular case, writing something like
proxy_pass http://127.0.0.1:8080/files/;
with "127.0.0.1" IPv4 address explicitly used instead of
"localhost" should be enough.
,
and then escaped again when returning a permanent redirect. But
it only escapes characters which need to be escaped.
If you want nginx to return a redirect exactly in the way you
wrote it, please consider using the "return" directive instead,
for example:
location = /brands/l-oreal {
    return 301 https://somedomain.tld/L%27Or%C3%A9al-Paris/index.html;
}
in nginx configs.
[...]
Upgrade to nginx 1.15.3+, this problem is expected to be addressed by
this commit:
http://hg.nginx.org/nginx/rev/7ad0f4ace359
Alternatively, you can modify (and/or disable via the OPENSSL_CONF
environment variable specifically for nginx) system-wide OpenSSL
configuration
the description of the "bind" parameter of the "listen"
directive (http://nginx.org/r/listen) for additional details.
ent in the
proxy_pass. For example:
set $backend "http://example.com";
proxy_pass $backend;
pen up to 15 connections in
each worker process - that is, up to (15 * worker_processes) in
total, and this may be too many for your backend.
Try switching off keepalive and/or using smaller cache size to see
if it helps.
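For reference, the keepalive connection cache is configured in the upstream block; a smaller value might look like this (the upstream name and address are assumptions):

```nginx
upstream backend {
    server 127.0.0.1:8080;
    # Keep at most 2 idle connections per worker process.
    keepalive 2;
}
```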
line and/or try
"keepalive 1;" instead.
Hello!
On Wed, Nov 21, 2018 at 08:53:40AM +, Rob Fulton wrote:
> On 19/11/2018 13:12, Maxim Dounin wrote:
> >
> > If you want to use variables in the proxy_pass and at the same
> > time want to preserve effect of nginx internal URI changes such as
> > due to rewrit
ut what goes wrong,
consider posting code of your module (preferably a minimal yet
working test module) and a full debugging log which demonstrates
the problem.
dule as an example (and looking into the
development guide).
something?
The Host header is set to what you wrote in the "proxy_pass" by
default. That is, it will be "backend" with the above
configuration.
you shouldn't expect that names as written
within server directives in upstream blocks mean anything or
will be used for anything but resolving these names to IP addresses.
, due to different approach to configure ciphers, "ssl_ciphers
aNULL;" will no longer work as a way to indicate no SSL support
with TLSv1.3 enabled (https://trac.nginx.org/nginx/ticket/195).
Hello!
On Fri, Nov 23, 2018 at 04:33:33PM +0100, Jack Henschel wrote:
> On 11/23/18 3:11 PM, Maxim Dounin wrote:
> > Hello!
> >
> > On Fri, Nov 23, 2018 at 09:23:01AM +0100, Jack Henschel wrote:
> >
> >> Hi Maxim,
> >>
> >> thanks for the
stros is not to mess with the defaults.
user-agent software with semantic understanding of the application
MAY substitute for user confirmation. The automatic retry SHOULD NOT
be repeated if the second sequence of requests fails.
uchkin.
*) Bugfix: memory leak on errors during reconfiguration.
*) Bugfix: in the $upstream_response_time, $upstream_connect_time, and
$upstream_header_time variables.
*) Bugfix: a segmentation fault might occur in a worker process if the
ngx_http_mp4_module was used on
Hello!
On Wed, Nov 28, 2018 at 03:07:25AM -0500, Olaf van der Spek wrote:
> Olaf van der Spek Wrote:
> ---
> > Maxim Dounin Wrote:
> > ---
> > > Hello!
Hello!
On Wed, Nov 28, 2018 at 02:29:26PM -0500, Olaf van der Spek wrote:
> Maxim Dounin Wrote:
> ---
> > There is no such thing as "defaults from the stock nginx.conf".
> > The nginx.conf file can be used
that there is code which scans through the disk to
find out which items are in the cache (and how much space they
take). The cache loader process does this, see
http://nginx.org/r/proxy_cache_path for a high level description
of how it works.
y happens when returning an actual response from the
cache.
inx.org/r/$request_body_file
- the client_body_in_file_only directive,
http://nginx.org/r/client_body_in_file_only
Also, the mirror module:
http://nginx.org/en/docs/http/ngx_http_mirror_module.html
and the embedded perl module:
http://nginx.org/en/docs/http/ngx_http_perl_module.html
might also be of use.
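A sketch combining the mirror module with an auxiliary logging backend; the location names and backend addresses are invented for illustration:

```nginx
location /api {
    mirror /mirror;            # send a copy of each request here
    proxy_pass http://backend;
}

location = /mirror {
    internal;
    # Replay the original request against a hypothetical logging backend.
    proxy_pass http://logging-backend$request_uri;
    proxy_set_header X-Original-URI $request_uri;
}
```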
sl_protocols" in the "http"
context, so it will be used for all servers.
> everything except 204 (No Content)?
http://mailman.nginx.org/pipermail/nginx/2012-September/035338.html
esponses with codes certainly not to be compressed, simply
because they can.
not be cached when
using the "keepalive" directive.
*) Bugfix: a segmentation fault might occur in a worker process if the
ngx_http_mp4_module was used on 32-bit platforms.
for the port 80,
which is plain HTTP, and does not use SSL. Note
> listen 80 default_server;
is the only listening socket in this server block.
You need to configure ssl_protocols in the server{} block which is
the default for the HTTPS listen socket.
ma: public
> Cache-Control: max-age=1696, public
> ETag: "4fca3f56d337d2057ce73a6dd5712d80"
> X-Powered-By: W3 Total Cache/0.9.7
> Content-Encoding: gzip
> Vary: Accept-Encoding
"Vary" may indicate that you are hitting the bug with multiple
variants. Cou
Hello!
On Thu, Dec 13, 2018 at 05:44:08PM +0200, Palvelin Postmaster via nginx wrote:
>
>
> > On 13 Dec 2018, at 16:31, Maxim Dounin wrote:
> >
> > Hello!
> >
> > On Thu, Dec 13, 2018 at 11:17:03AM +0200, Palvelin Postmaster via nginx
> > wrote:
_body_size is only enforced when nginx chooses
some location configuration to work with. And in your first
configuration the request is answered during processing server
rewrites, before nginx has a chance to select a location.
This is not really important though, since nginx does not try to
read a request body in such a case. Rather, it will discard the
body - much like it will do when returning an error anyway.
Hello!
On Fri, Dec 14, 2018 at 09:24:04AM -0800, Dave Pedu wrote:
> Hello,
>
>
> On 2018-12-14 07:34, Maxim Dounin wrote:
> > Hello!
> >
> > On Thu, Dec 13, 2018 at 09:16:12PM -0800, Dave Pedu wrote:
> >
> >> Hello,
> >>
> >>
res special
configuration, see here:
http://nginx.org/en/docs/http/websocket.html
Note well that proxying through nginx implies several additional
buffers being used anyway (two socket buffers and a proxy buffer
within nginx), and this may reduce accuracy.
sten), but this is unlikely to be enough in
such a setup.
Note well that measuring connection speed on the server side might
not be a good idea, as this will inevitably lead to inaccurate
results.
ge effects.
The r->single_range flag is there for a reason. Multipart range
requests are only supported if the whole response is in the single
buffer.
e approach. That is, consider
something like this:
location / {
    if ($badagent) {
        return 403;
    }
    ...
}

location /rss/ {
    ...
}
with OpenSSL.
*) Bugfix: in nginx/Windows.
*) Bugfix: in the ngx_http_autoindex_module on 32-bit platforms.
this should not work? And why has the other command
>to be done again after some days?
The "ssl_stapling_file" directive needs an OCSP response obtained
from your certificate authority, not a certificate. As you are
trying to put a certificate instead, parsing expectedly fai
Hello!
On Fri, Jan 04, 2019 at 05:57:56AM +0100, ѽ҉ḳ℠ wrote:
>On 04.01.2019 05:35, Maxim Dounin wrote:
>
> The "ssl_stapling_file" directive needs an OCSP response obtained
> from your certificate authority, not a certificate. As you are
> trying to put a cer
owsers, and set caching time for
nginx manually with proxy_cache_valid (see
http://nginx.org/r/proxy_cache_valid).
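For example (the cache zone name and times here are illustrative):

```nginx
proxy_cache mycache;                # assumed cache zone
# Don't let upstream caching headers control nginx's cache...
proxy_ignore_headers Cache-Control Expires;
# ...and set the caching time manually instead.
proxy_cache_valid 200 301 302 10m;
```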
Hello!
On Wed, Jan 09, 2019 at 10:52:11AM +0100, Ottavio Campana wrote:
> I am proceeding developing my module.
>
> Is there a way to get the raw HTTP request from a ngx_http_request_t ?
No. E.g., there is no such thing as "raw HTTP request" when using
HTTP/2.
eaders are only
used by nginx for caching. If caching is not used, these headers
are ignored anyway. See http://nginx.org/r/proxy_ignore_headers
for details.
_request.c.
For HTTP/2, it is reconstructed. See ngx_http_v2_construct_request_line()
in src/http/v2/ngx_http_v2.c.
e re-iterate:
> > Note well that it makes little to no sense to only ignore Expires
> > and Cache-Control on cached requests, since these headers are only
> > used by nginx for caching. If caching is not used, these headers
> > are ignored anyway. See http://nginx.org/r/
imilar problems. But I don't really think
this is the case, as restarting nginx usually fixes such problems.
package as available from nginx.org are built
with OpenSSL 1.1.1.
ndicate a clean TCP-level
connection close by the other side.
ogging of these variables using the log_format
directive. It is also a good idea to configure logging of generic
request processing time, $request_time.
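A sketch of such a log format (the format name and log path are assumptions):

```nginx
log_format timing '$remote_addr "$request" '
                  'request_time=$request_time '
                  'upstream_connect=$upstream_connect_time '
                  'upstream_header=$upstream_header_time '
                  'upstream_response=$upstream_response_time';
access_log /var/log/nginx/timing.log timing;
```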
r, the $upstream_response_time variable was
introduced in nginx 0.3.8, released in 2005, and available even in
really ancient nginx versions.
g variable is set to "gzip", because if I do:
>
> more_set_headers "Foo: /$upstream_http_content_encoding/";
>
> ...then I can see the "Foo: /gzip/" value on the client; but that does not
> help me do what I want.
>
> Can anyone su
settings and protocols used, or may require various
non-standard quirks. You may want to be more specific on what you
are trying to do.
es. It is not in the modern
world, and if the client relies on this, it is not going to work
at all. If it is the case, the only option I can recommend would
be to use stream proxy instead, see here:
http://nginx.org/en/docs/stream/ngx_stream_core_module.html
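A minimal stream proxy sketch; the port and backend address are examples only:

```nginx
stream {
    server {
        listen 12345;
        # Forward raw TCP to the backend, without touching the payload.
        proxy_pass 192.0.2.1:12345;
    }
}
```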
lude subrequest
execution time. If you want to see subrequest details in log,
including upstream times, consider the "log_subrequest"
configuration directive (http://nginx.org/r/log_subrequest).
fail_timeout expires. The host won't be considered fully
operational till this connection is closed without an error.
[1] http://nginx.org/en/docs/stream/ngx_stream_upstream_module.html#max_fails
o close them, or when a connection was evicted from the
cache by other connections.
most importantly in
case of non-idempotent requests which cannot be retried.
The main goal of the keepalive_requests directive is to make sure
connections will be closed periodically and connection-specific
memory allocations will be freed.
y_download_rate" directives
in the stream module worked incorrectly when proxying UDP datagrams.
ence between what browser shows
and what "curl -I" shows. While "curl -I" shows response headers,
browsers show the response body (or a "friendly" error page from
the browser itself in some cases). To get something comparable
with what browsers show you have to use "curl" without "-I".
Also note that browsers cache responses by default, and testing
configuration changes with browsers might be tricky.
allocator will not be able to release no-longer-needed
memory (previously used by the original configuration) to the
system.
Hello!
On Thu, Feb 28, 2019 at 03:54:24PM -0500, wkbrad wrote:
> Maxim Dounin Wrote:
> ---
> > so allocator will not be able to release no-longer-needed
> > memory (previously used by the original configuration) to the
>
e to use proxy_pass without a
URI component, that is:
location / {
    proxy_pass http://backend;
}
Note no trailing "/" after "backend".
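For contrast, a sketch of the two alternative forms (the "/app/" prefix is invented; use one or the other, not both):

```nginx
# No URI component: the client's URI is passed to the backend unchanged.
location /app/ {
    proxy_pass http://backend;
}

# With a URI component ("/"): the "/app/" prefix is replaced by "/"
# before the request is passed to the backend.
location /app/ {
    proxy_pass http://backend/;
}
```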
corresponding
> "Cache-Control" and "Expires" with the 30d.
Well, your assumption is not correct. The "proxy_ignore_headers"
directive controls if nginx itself will respect Cache-Control or
not when caching a response. And the "expires" dire
r, large virtual hosting providers are known to use nginx
with a small number of server{} blocks serving many different
domains. Alternatively, you may want to build nginx with fewer
modules compiled in, as each module usually allocates at least
basic configuration structures in each server{} / lo
resources, even if certificates are available for free.
Note well that reducing the number of server{} blocks is not the
only approach to reduce memory footprint I've outlined. If you
are using tens of thousands of server{} blocks, the amount of
compiled in modules may make a significant
uned so freed memory will be returned to
the system, see above. And for example on FreeBSD, which uses
jemalloc as a system allocator, unused memory is properly returned
to the system out of the box (though can be seen in virtual
address space occupied by the process, since the allocator uses
t; step one: check cache, if the resource is expired
> or not cached, nginx calls itself to get the resource.
> Step two: call upstream and modify the expires
> header to 30d. Return response to the cache.
> Cache is now happy with an expires 30d header :-)
Well, using double proxying wi
p?2,283216,283448#msg-283448
Hello!
On Thu, Mar 21, 2019 at 04:45:26PM +0300, Maxim Dounin wrote:
> On Wed, Mar 20, 2019 at 06:41:01PM -0400, wkbrad wrote:
>
> [...]
>
> > The first test I ran was in FreeBSD just because I was curious. Lol. But I
> > actually saw the exact same problem on it. I
can
tune or improve your system allocator to handle this better, but
see above about whether it is worth the effort.
If you still think this is a problem for you, and you want to save
25M of memory, you were already suggested several ways to improve
things - including re-writing your configuration,
my
malloc.conf. Without at least "junk:free" I indeed see similar
results to yours - most likely because the kernel fails to free
pages which are referenced from multiple processes when madvise()
is called.
[...]
ng you may want to do is to
upgrade - nginx 1.10.3 is rather old and no longer supported.
Investigating anything with nginx 1.10.3 hardly makes sense.
[...]
Hello!
On Tue, Mar 26, 2019 at 05:47:51AM -0400, sivak wrote:
> Is it possible to add milliseconds in error.log
No.
> and also to include
> timestamps in the output after executing below commands
>
> $NGINX_EXECUTABLE_FILE -I
> $NGINX_EXECUTABLE_FILE -P
There is no such co
isual Studio 2015 or
newer; the bug had appeared in 1.15.9.