> but https://15.15.15.15/ throws the error "Not found: The requested URL / was
> not found on this server.", and I cannot find the cause. This is the
> configuration:
> and urls.py django:
>
> urlpatterns = [
> path('', RedirectView.as_view(url='/inicio/', permanent=True)),
> path('inicio/',
> I've also tried adding "/" and it throws the same error. I have also added to
> the
> .conf file:
>
> location = / {
>     include proxy_params;
>     proxy_pass http://unix:/run/gunicorn.sock;
> }
>
> before the fragment location / {..} with the same error.
> This error is very strange.
> I added includes for the location config files, which may make them more
> readable, but still no clue how to reach the UNIX-socket-proxied webserver in
> the LAN.
It's a bit unclear what the problem is or what you want to achieve.
Can nginx not connect/proxy_pass to the socket file (what's the error)?
> I have a Synology NAS what runs a nginx as default web server to run all
> their apps. I would like to extend it to meet the following.
>
> The purpose is that if the user account webapp1 is compromised, it will only
> affect webapp1's web server, and repeat this for all
> accounts/website
> First domain redirects port 80 to ssl 443.
> Second domain is just port 80.
> Third domain is just port 80.
>
> Second domain isn’t showing up, pointing to first domain. Third domain is
> working. Why would this happen?
If you are testing just with a browser make sure you've cleaned the cache
> how do I do it exactly, regardless of whether it is cumbersome?
Well you configure each individual nginx to listen (
https://nginx.org/en/docs/http/ngx_http_core_module.html#listen ) on a unix
socket:
Config on nginx1:
..
events { }
http {
    server {
        listen unix:/some/path/user1.sock;
        ..
    }
}
> Indeed, with further tests I think that the stapling is working...
> sometimes.
>
>
> I'm not using the staple file, though. Is this behavior expected without such
> configuration? Also, I've enabled ssl_early_data.
Each nginx worker has its own cache.
Depending on your worker_processes you may see stapling only some of the time,
until every worker has fetched its own OCSP response.
> This allows permission management via user accounts, but it can get bulky as
> soon as you set up user accounts for permission management of each backend
> application, as they pose a higher risk, as indicated in the previous email
Well you asked how to proxy unix sockets...
> that is also: does it all go in the same nginx.conf but in different http{}
> blocks, or do I need one nginx.conf for each, the user unix sockets and also
> the parent proxy server?
A typical nginx configuration has only one http {} block.
You can look at some examples:
https://nginx.org/en/docs/http/request_p
> This is my NGINX directory lists and i dont see the "Modules" directory. Is
> that
> normal?
Yes, that's normal. By default nginx compiles everything into the executable,
so unless you build dynamic modules (--add-dynamic-module) there won't be any
.so files.
> Can anyone share a configuration f
> That is a very Gmail-specific solution but, thank god, not everyone is using
> Gmail.
It's not a Gmail-specific option.
Most MTAs (if not all), like for example Postfix (recipient_delimiter) or exim
(local_part_suffix), support the 'user+tag@..' feature.
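For reference, a minimal Postfix sketch of the recipient_delimiter setting
(the file path and value are the usual defaults, not from the original thread):

```
# /etc/postfix/main.cf
# treat user+tag@domain as user@domain for delivery,
# so the +tag suffix survives into the delivered message
recipient_delimiter = +
```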
> My main goal is to improve read
> For example, a request to https://test.example.org/bla/fasel would deliver
> the content for https://foo.example.org/bla/fasel. So basically it delivers
> content for the wrong subdomain. Those occasions are very, very rare and
> totally random in regards to the subdomain from which the content g
> Is there any directive available in Nginx to set a timeout between two
> successive receive or two successive send network input/output operations
> during the HTTP request response phase?
For send:
http://nginx.org/en/docs/http/ngx_http_core_module.html#send_timeout
for read:
http://nginx.org/
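As a sketch, the relevant directives side by side ('backend' is a placeholder
upstream, the values are examples):

```nginx
server {
    # max time between two successive writes to the client
    send_timeout 30s;
    # max time between two successive reads of the request body
    client_body_timeout 30s;

    location / {
        proxy_pass http://backend;
        # max time between two successive reads from the proxied server
        proxy_read_timeout 60s;
        # max time between two successive writes to the proxied server
        proxy_send_timeout 60s;
    }
}
```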
> Wondering if there is a way to use the URL endpoint to check.
One way to do it would be with:
http://nginx.org/en/docs/http/ngx_http_auth_request_module.html
You said you have nginx+ which has the ability to reconfigure the backends
on the fly: http://nginx.org/en/docs/http/ngx_http_upst
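A minimal auth_request sketch (the /auth endpoint and the auth-service
upstream are placeholders, not from the thread):

```nginx
location /protected/ {
    # subrequest decides access: a 2xx response allows, 401/403 denies
    auth_request /auth;
    proxy_pass http://backend;
}

location = /auth {
    internal;
    proxy_pass http://auth-service/check;
    # forward only headers, not the request body
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```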
> 2. command line
> Slowly I am beginning to understand, reading "Starting, Stopping, and
> Restarting NGINX" and "Controlling NGINX Processes at Runtime". If you run
> multiple instances you use the -s signal with a reference to the desired PID,
> otherwise it will pick the default, which is /var/run/nginx
> In this case the ENV var "VHOST1_TOKEN" is correctly defined in the match
> case, but it is also *defined*, though null, for any/all other vhosts.
>
> How do I construct this conditional ENV var setting to ONLY set/define the
> vars on host match?
You can add if_not_empty at the end of the
> In my logs I can see that HITs are very fast, but STALEs take as long as a
> MISS, while I believe they should take as long as HITs.
>
> Is there something I can do to improve this? Are the stale responses a true
> "stale-while-revalidate" response? Or are they waiting for the response from
> the
> X-MShield-Cache-Status: STALE
> 0.004329:0.00:0.004364:0.00:0.212526:0.212644
According to the timings you hit the 200 ms tcp_nopush delay.
Try setting: tcp_nopush off;
For more explanation you can read up
https://forum.nginx.org/read.php?2,280434,280462#msg-280462
rr
> after applying tcp_nopush off, the test that we have in place is working as
> expected. The problem is that this improvement is not happening in production.
Our production environment is mainly a CDN -> NGinx -> Origin. We want to use
Nginx in order to control the eviction time of the content (
> We use Nginx as a reverse proxy to our application servers, can I intercept
> this header and just remove it?
Sure, to the proxy_pass block add:
proxy_hide_header upgrade-insecure-request;
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header
rr
>
> I've added the following rewrite line in the Config website:
>
> rewrite ^tagged\/(.*)$ /?p=blog&blog_tag_name=$1 break;
>
> It is supposed to be called by the following URL: /tagged/Server and be
> redirected internally to the following: /blog?blog_name=&blog_tag_name=Server
If the rewrite directive is inside a server block, note that the request URI
always starts with a slash, so the pattern should be ^/tagged/(.*)$ rather
than ^tagged\/(.*)$.
> My question still stands though, is there a way to solve that particular
> issue? It is
> causing us problems when the ram that Nginx is using doubles.
Theoretically, if that's a problem, you could, instead of a reload, send USR2
and QUIT to the nginx process (http://nginx.org/en/docs/control.html
> "caching reverse proxy" is what nginx is built for.
>
> "rewriting the body content" is not.
Well, you can rewrite the body with the sub module (
http://nginx.org/en/docs/http/ngx_http_sub_module.html )
The only caveat is that the module doesn't support compression (gzip), so you
need to explicitly request an uncompressed response from the backend.
> Is there a way to set the path of the cookies to /, regardless which GUI is
> used?
Yes, you can make nginx change the cookie path:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path
If the path from the backend app is unknown you can probably use a regex to
match everything.
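A minimal sketch of the regex form ('backend' is a placeholder upstream):

```nginx
location / {
    proxy_pass http://backend;
    # rewrite any cookie path the backend sets to /
    # (a path starting with ~* is a case-insensitive regex)
    proxy_cookie_path ~*^/.+ /;
}
```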
> We have a website under heavy development. So we divided the site into 3
> branches: stage, demo and main. What our developers want from me is: "every
> request from the office IP address to the main domain must redirect to stage."
If there is a single IP you can use the if directive
(http://nginx.o
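A minimal sketch of that approach (the domain names and the office IP are
placeholders from documentation ranges, not from the thread):

```nginx
server {
    listen 80;
    server_name main.example.com;   # placeholder for the main domain

    # redirect requests coming from the single office IP to stage
    if ($remote_addr = 203.0.113.10) {
        return 302 https://stage.example.com$request_uri;
    }
}
```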
> 1 192.168.1.249 - - [05/May/2019:14:43:28 -0500] "GET /bugzilla/ HTTP/1.0"
> 200 4250 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
> (KHT
> 2 Execution Time 8579
>
> the dash after 200 4250 is the 'host', I believe; it is seeing or defaulting
> to "-" and not http://bug
> We have made all the changed we could in the kernel to help with this but
> still hitting limits.
What changes have you made?
Usually the port limit is reached because of time wait sockets.
If not done already try with:
net.ipv4.ip_local_port_range = 1028 65535
net.ipv4.tcp_tw_reuse = 1
net.
> Yes all of those changes you have mentioned have been made.
Well imo there is nothing else besides to even more decrease the FIN timeout
(in a LAN that shouldn't be an issue (no slow clients)) so the lingering
sockets are closed faster.
Also instead of adding the network adapter(s) on the w
> I love nginx and use it for other applications, but maybe it's the wrong
> product for this scenario
Does nginx connect to mysql directly (e.g. via some embedded module like
perl/lua), or do you proxy some backend app?
If not then it has no relation to this issue.
> We do not have an issue
Ohh, I missed the whole idea that nginx is used as a tcp balancer for mysql.
But imo it is still simpler (unless you can't do anything with the DB server)
to balance the remote servers rather than split and bind local clients:
upstream backend {
    least_conn;
    server ip1:3306;
    server ip2:3306;
}
> How can I achive this, any ideas? Thank you.
>
>
> map $remote_addr $is_web_internal {
>     202.212.93.190 1;
>     default 0;
> }
>
> if ($is_web_internal) {
>     return 301 https://new.domain.com.tr$uri;
> }
Basically the same - you can just replace the map directive with split_clients.
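A sketch of the split_clients variant (the percentage and the old domain are
placeholders; the target domain is taken from the config above):

```nginx
# in the http {} context; the key is hashed per client address
split_clients "${remote_addr}" $redirect_new {
    20%     1;   # this share of clients gets redirected
    *       0;   # the rest stay
}

server {
    listen 80;
    server_name old.example.com;   # placeholder

    if ($redirect_new) {
        return 301 https://new.domain.com.tr$request_uri;
    }
}
```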
> Thanks but I got an error. I can not find the required module.
>
> root@Net:~# nginx -t
> nginx: [emerg] unknown directive "split_clients" in /etc/nginx/nginx.conf:598
> nginx: configuration file /etc/nginx/nginx.conf test failed
Well, your nginx is compiled using --without-http_split_clients_module.
> Instead of IP address, if we use FQDN with https, do we have to validate the
> SSL certificate on Proxy_Pass?.
By default the certificate validation is turned off (and nginx just uses SSL
for traffic encryption).
If needed you can enable it with: proxy_ssl_verify on; (
http://nginx.org/e
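A minimal verification sketch (the FQDN and the CA bundle path are
placeholders, not from the thread):

```nginx
location / {
    proxy_pass https://backend.example.com;           # placeholder FQDN
    # verify the upstream certificate (off by default)
    proxy_ssl_verify              on;
    proxy_ssl_verify_depth        2;
    proxy_ssl_trusted_certificate /etc/nginx/ca.pem;  # example CA bundle
    # send the hostname via SNI so the right certificate is presented
    proxy_ssl_server_name on;
}
```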
> hi all, I have a Nginx server,which I want to setup a time-based acl, for
> example,
> during 8am to 17pm, Nginx accept all connections, during 17pm to 8am nextday,
> Nginx deny all connections.
> Different acl may be deployed in different sites.
>
> Is this possible? I looked at the ngx_http_a
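One possible approach (a sketch, not from the original reply) is a map on
$time_iso8601, which is evaluated per request:

```nginx
# $time_iso8601 looks like 2019-05-05T14:43:28-05:00;
# hours 08-16 (i.e. 8am until 5pm) map to "allowed"
map $time_iso8601 $deny_offhours {
    default              1;
    "~T(0[89]|1[0-6]):"  0;
}

server {
    listen 80;
    location / {
        if ($deny_offhours) {
            return 403;
        }
    }
}
```

Different sites can use different map variables for different ACLs.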
> Andreas,
>
> Do you know of any large, high traffic sites that are using HSTS today?
>
> Peter
>
For Chrome (Chromium) you can view the preload HSTS list here:
https://chromium.googlesource.com/chromium/src/net/+/master/http/transport_security_state_static.json
google / twitter / paypal to name a few.
> I'm having some issues with getting X-Forwarded-For set consistently for
> upstream proxy requests. The server runs Nginx/OpenResty in front of
> Apache, and has domains hosted behind Cloudflare as well as direct. The ones
> behind Cloudflare show the correct X-Forwarded-For header being > set
> I expect it to fail with a 444, and only have info about the failed subdomain.
The SSL handshake happens before the http status and since the browser doesn't
get a valid certificate it immediately throws an error and ignores the rest.
Unless the users override the error on the browser side (iir
> > Just for testing purposes (if possible) you could either add the IP to
> > both listen directives or remove the ip part from the full-domain
> > server {} block to see if it changes anything.
>
> Hm. That doesn't really make sense to me.
>
> This server has multiple IPs. The hosted server n
> certificate (and also the test 403 response) for nondefined subdomain requests
> and the order of server {} block
Missed the end of the sentence: .. the order of server {} blocks doesn't matter
(in the test case).
rr
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
> "In versions prior to 0.8.21 this parameter is named simply default. "
>
> Was that a typo? Or is there a new or different usage now ?
Not a typo, just nginx being backwards compatible and me using it since 0.5.x
or even earlier (and being lazy).
As far as I remember the directive has been renamed.
> With that config when I try to launch nginx it fails with these errors
>
> Aug 09 11:29:21 myhost nginx[10095]: nginx: [emerg] bind() to [::]:443
> failed (98: Address already in use)
Try to remove the ipv6only=on option; it should work just fine without it.
Imo the [FE80:...:0001]:443 conf
> I have configured nginx to cache static content, but I can't see any files in
> the cache folder; also, when I open the page in DevTools on the network tab it
> shows
Unless you have somehow messed up the configuration in the email, something
like:
server {
listen 443 ssl;
server_na
> I tried adding the following line in there in a couple different places but
> all it does is download the php file.
>
> location /blog {
> rewrite ^/blog/([A-Za-z0-9-]+)/?$ /blog-article.php?slug=$1 break;
> }
Try to switch from 'break' to 'last'.
With 'break', nginx stops rewrite processing and handles the rewritten URI in
the current location, so the .php location is never matched and the file is
served as a plain download.
> Thank you. That was indeed the issue. Now I can see the individual blog
> entries at /blog/slug-of-blog
>
> but /blog and /blog/ urls are both throwing a 404.
>
> Is that an easy fix?
>
>> rewrite ^/blog/([A-Za-z0-9-]+)/?$ /blog-article.php?slug=$1 break;
You have to tweak the regex - currently it requires at least one character
after /blog/, so /blog and /blog/ themselves don't match.
> Is this expected behaviour? Could there be another way to do this?
'if' (the rewrite module) is executed at an early stage when the $sent_*
variables are not yet available; that's why the regex doesn't match.
What you could do is use 'map' instead:
(http://nginx.org/en/docs/http/ngx_http_map_module.html
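As a sketch, a common pattern of this kind (the types and expiry values are
examples, not from the thread): map is evaluated lazily, at the moment
$expires is used, when the response headers are already known.

```nginx
# choose an expires value based on the response Content-Type
map $sent_http_content_type $expires {
    default                 off;
    ~image/                 30d;
    text/css                7d;
    application/javascript  7d;
}

server {
    listen 80;
    expires $expires;
}
```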
> The problem is coming when I try to test both Django sites with ssllabs.com
>
> > Certificate #2: RSA 2048 bits (SHA256withRSA) No SNI
> The error that I see is "Alternative names: wpexample.org
> www.wpexample.org
> MISMATCH"
It is normal for clients which don't support SNI (server name indication)
> When this is all done, and I import the p12 client certificate on my Windows
> PCs (tested 2) Chrome and Firefox show me the "400 Bad Request\n No required
> SSL certificate was sent". The very strange thing is IE11 on one of the two
> PCs, actually prompts me to use my newly-installed cert t
> I will search for this. Not sure how to add this info to my logs, or
> whether it logs failures too?
$ssl_client_verify contains the verification status.
You have to define a custom log_format
(http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format )
For example:
log_format cli
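The truncated example above might have looked something like this sketch (the
format name and the chosen fields are my assumption); note that
$ssl_client_verify logs failures too (SUCCESS, FAILED:reason, or NONE):

```nginx
log_format clientssl '$remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent '
                     'verify=$ssl_client_verify subject="$ssl_client_s_dn"';

access_log /var/log/nginx/ssl_access.log clientssl;
```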
> Here is the situation posted in DO community.
> https://www.digitalocean.com/community/questions/enabling-gzip-compression-guidance-needed
> Thanks for any help.
Well, you are testing in the wrong way. First of all:
curl -H "Accept-Encoding: gzip" -I http://localhost/test.jpg
HTTP/1.1 301 Moved Permanently
> I am trying to reduce the transfer size of my website. Although I have
> apparently enabled gzip compression, it does not show as enabled in GTmetrix
> testing.
For testing purposes just put:
gzip on;
gzip_types text/html text/plain text/xml text/css application/javascript
application/json
> The problem is, whatever URL I put in the browser, it redirects to
> https://trisect.uk ___
> server_name trisect.uk *.trisect.uk;
> return 301 https://$server_name$request_uri; }
I don't think you can use $server_name here because it always expands to the
first name in the server_name directive (trisect.uk); use $host instead.
> proxy_store seems to be a much simpler alternative to “cache" pseudo-static
> resources.
>
> Is there anything non-obvious that speaks agains the use of proxy_store?
Depends on how you look at "much simpler".
proxy_store doesn't have a cache manager so space limitation/cleaning is up to
you.
> Hello,
>
> is there a way to check if a requested resource is in the cache?
>
> For example, “if” has the option “-f”, which could be used to check if a
> static
> file is present.
>
> Is there something similar for a cached resource?
Depending on what you want to achieve you could check $upstream_cache_status
> I struggled with using the alias directive because I (incorrectly) assumed
> that it was relative to root since all other parts of my nginx configs are.
> This is not mentioned in the documentation, it'd be nice to see it there.
Well, it's not directly worded, but you can (should) see it from the
documentation examples.
>
> if ($args ~ "^p=(\d+)") {
> set $page $1;
> set $args "";
> rewrite ^.*$ /p/$page last;
> break;
> }
>
> I knew there'd be a simpler way and I due to the time
> The AJAX requests do not know anything about the NGINX proxy, so they do not
> know anything about the "webui" path. So I need to find a solution to
> manipulate this javascript code:
If the javascript files are proxied the same way (from the origin server) as
the application, you can use the sub module to rewrite the paths.
> Hi!,
> I do not understand what should I modify.
The problem is your backend application (I assume a node app) which listens on
port 8080. While nginx is doing everything right, the app responds and
constructs the urls using an internal ip and/or 'localhost'.
Depending on what the app uses for
> Now while accessing my VM ip http://x.y.z.a, I am getting "403 Forbidden"
> error in the browser. However gitlab still working. How to get both the sites
> working listening on port 80 but with different context of location?
First of all you should check the error log to see why the 403 is returned.
> From the hosts outside i've no connection problem, but from inside they are
> unable to connect to the port. No firewall are enable on Nginx LB( Centos 7
> machine by the way) and Selinux is disabled.
By "from inside" you mean other hosts in LAN or the same centos machine?
If first then it's
> Is there a way to prevent Arbitrary HTTP Host header in Nginx? Penetration
> test has reported accepting arbitrary host headers. Thanks in Advance and I
> look forward to hearing from you.
You can always define a "catch all" server block with:
server {
    listen 80 default_server;
    server_name _;
    return 444;
}
> I have added the below server block in /etc/nginx/nginx.conf
> (https://paste.centos.org/view/raw/d5e90b98)
>
> server {
>     listen 80;
>     server_name _;
>     return 444;
> }
>
> When i try to run the below curl call, I am still receiving 200 OK response.
> #curl --verbose --h
> I have added the below server block https://paste.centos.org/view/0c6f3195
>
> It is still not working. I look forward to hearing from you and your help is
> highly appreciated. Thanks in Advance.
If you don't use default_server for the catch-all server{} block then you
should place it as the first server block.
> So either place it as first or add listen 443 default_server;
By first I mean the "catch all" server { server_name _; .. } block.
rr
> I did follow your steps. My nginx.conf file is
> https://paste.centos.org/view/ae22889e when I run the curl call, I am still
> receiving HTTP 200 OK response instead of HTTP 444 (No Response) as per the
> below output
If you've just called a config reload then most likely your nginx is still
using the old configuration.
> E: The repository 'http://nginx.org/packages/ubuntu tricia Release' does not
> have a Release file.
> N: Updating from such a repository can't be done securely, and is therefore
> disabled by default.
> -
> Are there any other instructions available to get Nginx 1.17 downloaded?
You should p
> Where is the Bionic repo?
>
> If you are referring to the default repository for all things Linux Mint,
> there
> was only Nginx 1.14.
I mean the nginx bionic repo (here you can see the available Ubuntu versions
http://nginx.org/packages/mainline/ubuntu/dists/ )
But it seems you have alread
> The agents in my local network (192.x.x.x), instead, are able to authenticate
> over port 1515 TCP, but not to send logs over 1514 UDP. The agents' log says
> that they are unable to connect over that port.
>
> If I temporarily change port 1514 UDP to 1514 TCP in my HIDS nodes, and
> make the s
> but my agents are still unable to send logs over port 1514 UDP
Well, at least the nginx setup seems to be in working order.
Now, do you see any more detailed messages on the agents (like extended
ip/port info / connection errors)?
Also you could inspect the network traffic to see if the centos box re
> I get that the NGINX listen statement works on an individual port basis, so
> the equivalent of what's below in NGINX would at the very least require 300
> listen statements.
You can listen on a port range (see below).
> FYI I've tried referencing my own declared variables from within the u
> Hi.
> Here the result from tcpdump:
> from inside my network
> 192.168.1.10.60221 > 192.168.1.3.fujitsu-dtcns: UDP, length 107
> 192.168.1.3.fujitsu-dtcns > 192.168.1.10.60221: UDP, length 85
>
> From all agents from outside my network:
> any.public.ip.address.56916 > 151.1.210.45.fujitsu-dtcns:
> The user MUST BE ABLE to download the file from the article pages when
> LOGGED.
> If the user is NOT LOGGED, he cannot download the file, therefore even
> recovering the url, he must receive an error or any other type of block.
It's rather difficult to achieve that with only a webserver, as typically the
webserver knows nothing about the application's login state.
> After using 1.1.1e, see also the commit where an explicit entry has been
> added.
> nginx just reports back what openssl passes, if this was unexpected (none
> critical) nginx needs to be patched, if not this openssl workaround (10880)
> needs to be changed.
Any comment on this from any nginx de
> The Nginx built with OpenSSL 1.1.1d does not generate the error logs. I don't
> know how I can fix this problem.
> Belows are my Nginx build configuration and nginx.conf.
I'm using 1.1.1e but with the reverted EOF patch (so far I haven't seen any
issues, and it seems they are going to revert it anyway
> What I need is a cache of data that is aware that the validity of its data is
> dependent on who - foo or bar - is retrieving it. In practice, this means that
> requests from both foo and bar may be responded to with cache data from
> the other's previous request, but most likely the cache will b
> Is there any way to tie the 'inactive' time to the cache-control header
> expiration time so that pages that are cached in a certain time-window are
> always kept and not deleted until after the header expiration time?
You can just set the inactive time longer than your possible maximum expire
time.
> Can I compile nginx on Ubuntu 16.04 and reuse it on other deployments? Or do
> I need to compile every time ? Please advise.
As long as the hosts have all the shared libraries like openssl/pcre etc. (you
can check with 'ldd /path/to/nginx') there is no need to compile every time and
you can just copy the binary.
> server {
>     location / {
>         root /home/marco/webMatters/vueMatters/ggc/src/components/auth/weights;
>     }
> }
Since it's under /home, most likely nginx has no access to the directory.
Check the user under which nginx is running (probably nobody) and check
whether that user can read the files.
> Subject: Can someone explain me why "curl: (7) Failed to connect to
> 127.0.0.1 port 2000: Connection refused" ?
>
> Hi!,
>
> I do not understand why it says "curl: (7) Failed to connect to 127.0.0.1 port
> 2000: Connection refused" :
> curl -X POST -F 'first_name=pinco' -F 'last_name=pallo' -F
> it's dependent on openssh version and installed one is 1.0.1t
On openssl.
> which seems to support TLS1.2, but "nmap --script ssl-enum-ciphers -p 443
> sitename" says only SSLv3 and TLS1.0 are supported. So is there anything I
> can do to make nginx 0.7.65 recognize TLS1.2 and use it?
>
> Yeah
> But when A is not available, it should send requests to B.
> When A comes back, it should send requests to A.
You can add 'backup' for the B server and it will be used only when all others
(A) are down:
https://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
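A minimal sketch of that setup (the IPs and port are placeholders from a
documentation range):

```nginx
upstream app {
    server 192.0.2.1:8080;          # A - primary
    server 192.0.2.2:8080 backup;   # B - used only while A is down
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```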
rr
> With the links you've sent me I have tried to log in with my usual email and
> password, and it's not correct; I tried to click remind, but it doesn't work
> either.
You can just send email to nginx-requ...@nginx.org with subject 'unsubscribe'
(without quotes).
It should remove you from the list (it
> I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003
> (Core). When I hit https://marketplace.mydomain.com it works perfectly fine
> whereas when I hit http://marketplace.mydomain.com
> (port 80) does not get redirected to https://marketplace.mydomain.com (port
> 443). I
> return 301 return 301 https://$server_name$request_uri;
Obviously a typo; just a single "return 301" is needed.
rr
> I am looking for APIs on Nginx Opensource. To monitor, get status and
> dynamic configuration of nginx.conf files.
>
> Does the opensource version have it? Please confirm.
For the OSS version there is the stub_status module:
http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
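A minimal sketch of enabling it (the location name is a convention, not
mandated):

```nginx
location = /basic_status {
    stub_status;
    # keep the counters private
    allow 127.0.0.1;
    deny  all;
}
```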
There are se
> Can Unit be used as a reverse proxy server like what we do with Nginx?
It can.
> I want to update my Nginx reverse proxy server dynamically (&
> automatically) without any downtime, whenever the underlying services
> scale up & down automatically.
In general nginx reloads configuration gracefully (without dropping
connections).
I'm not very into Java but you might get more details if you add
-Djavax.net.debug=SSL,handshake or -Djavax.net.debug=all
The current error is not very explanatory (at least to me), and from the nginx
side the client just closes the connection.
You could test the nginx side with cipherscan
https://git
> I'm going over some Web Server STIGs (referenced here:
> https://www.stigviewer.com/stig/web_server_security_requirements_guide
> /) to make sure my NGINX web server is configured to comply with those
> security requirements. One of the requirements is that "The web server must
> initiate session
> I have the following server in NGINX and it works fine. But, I am wondering is
> it possible to add text to a response from a remote URL where hosts my
> before_body.txt and after_body.txt? Is there any way to tackle this? Is it
> possible at all?
According to documentation
(http://nginx.org/en
> > Now instead you want the content of the url
> > http://externalserver.com/before_body.txt?
>
> Yes, that's right.
Can you actually open the file on the external server -
http://externalserver.com/src/before_body.txt and does it have the content you
expect (without redirects)?
Note that si
> --
> #Proxy server (Server1)
>
> # threedaystubble.com server
> server {
> listen 80;
> server_name www.threedaystubble.com threedaystubble.com;
> location / {
> proxy_pass http://192.168.3.5:80;
> }
> }
In this co
> I need a HTTP proxy that can handle requests for a single upstream server,
> but also log request headers, raw request body, response headers and raw
> response body for each request. Preferably this should be logged to a
> separate daily logfile (with date-stamped filename), with timestamps, but
It is a bit unclear whether you want only a single rewrite or whether there
are multiple different directory mappings/redirects.
> I tried a couple of ideas, but they didn't work, I thought this location
> directive
> inside a server block was best, but it didn't work.
>
> location = /e {
>return 31
> I was wrong...
>
> >This seems to work:
> >>rewrite ^/e/(.*) /$1 permanent;
>
> It only works for the first level...
> 'threedaystubble.com/Gallery.html' works but other links from that page that
> got deeper into the file structure do not!
What do you mean by "got deeper"? Can you give a sample?
> Please bear with me...
> It seems that I'm getting different results than I described earlier...
>
> In fact it is now working for the most part...
> The errors are limited to certain files in Chrome on the Mac, but not in
> Safari
> or Firefox.
You should clean the cache (or set it to never cache) for those files.
> I am curious at what point the cache exceeds the comfort zone of the design.
In my opinion it depends more on how important your cache is / how quickly you
can replace and repopulate it (how fast or loaded your backends are) / whether
your service can work without it - as in if you ha
> As part of the security audit, I have set server_tokens off; in
> /etc/nginx/nginx.conf. Is there a way to hide Server: nginx, X-Powered-By and
> X-Generator?
>
> To hide the below HTTP headers
>
> Server: nginx
> X-Powered-By: PHP/7.2.34
> X-Generator: Drupal 8 (https://www.drupal.org)
Afaik for the backend-generated headers (X-Powered-By, X-Generator) you can
use proxy_hide_header.
> Recently I noted that proxying Hasura for https support reduces the speed by
> 7-50x! More information including a tcpdump is available in
> https://github.com/hasura/graphql-engine/discussions/6154
Looking at the github discussion - you are comparing http vs https.
Since you are not
> Keep alive works for other REST services, but not working for Hasura.
> (Keep-Alive requests:0 Vs Keep-Alive requests:200 for other
> services). Is Keep-Alive anything to do with the response headers of Hasura
> or its POST request?
It could be that the service/backend doesn't support keep-alive connections.
> Is there a way to enable redirect from port 80 to 443 for both
> /etc/nginx/conf.d/onetest.conf and /etc/nginx/nginx.conf files. Any help
> will be highly appreciated.
You can have only one default_server per listen port.
It will be used if a client makes a request not matching any hostname.
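A minimal catch-all redirect sketch for port 80 (this goes in one place, e.g.
/etc/nginx/conf.d/, rather than in both files):

```nginx
server {
    listen 80 default_server;
    server_name _;
    # send every plain-http request to https, keeping host and URI
    return 301 https://$host$request_uri;
}
```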
> I have a question about nginx internals. How does nginx ensure high
> throughput? I understand that nginx uses many parallel connections by using
> epoll. But what about processors? Is connection handling spread amongst
> multiple processors to handle any processing bottleneck?
If necessary, yes: nginx runs multiple worker processes (worker_processes
auto;), each with its own event loop, so connection handling is spread across
CPU cores.