On Mon, 3 May 2021 at 18:47, Kaushal Shriyan wrote:
>
> Hi,
>
> Is there a way to verify if the below cipher suites set are accurate
> and are free from any vulnerabilities?
I suggest you use tools like the public Qualys ssltest:
https://www.ssllabs.com/ssltest/
or testssl:
https://github.com/dr
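For testssl, a minimal run looks roughly like this (example.com is a
placeholder, and this assumes you already have the testssl.sh script
downloaded and executable); the default run covers protocol support,
cipher suites and the known vulnerabilities:

./testssl.sh example.com:443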
On Sun, 27 Mar 2022 at 15:58, Sergey A. Osokin wrote:
>
> Hi,
>
> On Sun, Mar 27, 2022 at 02:04:10AM -0400, sukeerthiadiga wrote:
> > The Mainline version of Nginx i.e 1.12.6 has the OpenSSL version 1.1.1m and
> > it is vulnerable.
>
> That's a bit far from true. NGINX, as many other products, de
Hello,
the *client* you are using to test this is just as important. Adjust
CipherString in /etc/ssl/openssl.cnf or the client parameters (-cipher
"DEFAULT:@SECLEVEL=0") too.
~# grep SEC /etc/ssl/openssl.cnf
CipherString = DEFAULT:@SECLEVEL=2
~#
~# openssl s_client -connect www.google.com:443 -t
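For example, to let the client offer ciphers that SECLEVEL 2 would
normally reject, something like this should work with OpenSSL 1.1.0 or
newer (www.google.com is just the host used above):

~# openssl s_client -connect www.google.com:443 -cipher "DEFAULT:@SECLEVEL=0"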
On Mon, 14 Nov 2022 at 17:31, James Read wrote:
>
> I have configured SSL on a number of subdomains including
> https://us.wottot.com
>
> On my PC I can view the resulting web page without any problems so this leads
> me to believe the SSL configuration is correct.
Wrong, the intermediate certi
On Mon, 14 Nov 2022 at 21:00, James Read wrote:
>
>
>
> On Mon, Nov 14, 2022 at 5:58 PM Lukas Tribus wrote:
>>
>> On Mon, 14 Nov 2022 at 17:31, James Read wrote:
>> >
>> > I have configured SSL on a number of subdomains including
>> > http
On Mon, 14 Nov 2022 at 21:09, Lukas Tribus wrote:
>
> On Mon, 14 Nov 2022 at 21:00, James Read wrote:
> >
> >
> >
> > On Mon, Nov 14, 2022 at 5:58 PM Lukas Tribus wrote:
> >>
> >> On Mon, 14 Nov 2022 at 17:31, James Read wrote:
> >>
On Mon, 14 Nov 2022 at 21:33, James Read wrote:
>> For nginx you need the base64 encoding, which is:
>>
>> https://ssl-ccp.secureserver.net/repository/sfig2.crt.pem
>>
>
> I tried adding that certificate but sudo nginx -t now returns the following
> error:
>
> nginx: [emerg] SSL_CTX_use_PrivateKe
On Mon, 14 Nov 2022 at 22:56, James Read wrote:
>> So the file needs to contain first your certificate and then the
>> intermediate one.
>
>
> OK. Thanks. I rearranged the file and deleted some certificates. Now sslabs
> is reporting no chain issues for Certificate #1: RSA 2048 bits (SHA256withRS
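In case it helps others hitting the same chain issue, building the file
usually boils down to something like this (file names and paths are
placeholders; sfig2.crt.pem is the intermediate linked earlier):

# server certificate first, then the intermediate
cat us.wottot.com.crt sfig2.crt.pem > us.wottot.com.chained.crt

# nginx.conf
ssl_certificate     /etc/nginx/ssl/us.wottot.com.chained.crt;
ssl_certificate_key /etc/nginx/ssl/us.wottot.com.key;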
On Friday, 3 February 2023, Saint Michael wrote:
> I have a reverse proxy but the newspaper that I am proxying is
> protected by cloudflare, and they block me immediately, even if I use a
> different IP. So somehow they know how to identify my reverse-proxy.
> How is my request different than a r
> Any solution other than switching to
> https://launchpad.net/~nginx/+archive/ubuntu/development (which scares the
> skull out of me, since this is a production server)?
Use nginx provided binaries if compiling from source is not an option:
http://nginx.org/en/linux_packages.html#mainline
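On Ubuntu that boils down to roughly the following (the release codename
is a placeholder; the linked page has the authoritative, up-to-date
steps):

wget -qO - https://nginx.org/keys/nginx_signing.key | sudo apt-key add -
echo "deb http://nginx.org/packages/mainline/ubuntu/ focal nginx" | \
    sudo tee /etc/apt/sources.list.d/nginx.list
sudo apt-get update && sudo apt-get install nginx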
> I was anticipating such a compatibility problem to be fixed in feature stable
> but so far it’s looking like we will have to bite the bullet and move to
> mainline.
> Would I be correct here? It seems for our case at least, feature stable HTTP2
> is not stable for production use at this time.
Hi,
> for a test environment I successfully set up an nginx webserver (1.11.2)
> with HTTP/2.
>
> But for further tests I need to decrypt traffic with wireshark using the
> servers private key.
The way to do this is to use the key log file from your browser, so Wireshark
is aware of the symmetric keys.
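A rough sketch of that, assuming a browser that honors the SSLKEYLOGFILE
environment variable (Firefox and Chrome do; the path is a placeholder):

# start the browser with TLS key logging enabled
export SSLKEYLOGFILE=/tmp/tls-keys.log
firefox &

# in Wireshark: Preferences -> Protocols -> TLS (SSL in older versions)
#   -> (Pre)-Master-Secret log filename = /tmp/tls-keys.log

With (EC)DHE key exchange, which HTTP/2 effectively requires, the server's
private key alone cannot decrypt the traffic anyway, hence the key log
approach.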
> I use nginx 1.11.3 with nginx upload module. The problem is that Nginx upload
> module doesn't support HTTP/2 and thus when you upload you get 500 Internal
> Error.
Use a dedicated subdomain, like upload.mywebsite.com.
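Roughly like this, assuming http2 is enabled via the listen directive
(server names and certificate paths are placeholders):

# main site: HTTP/2 enabled
server {
    listen 443 ssl http2;
    server_name www.mywebsite.com;
    ssl_certificate     /etc/nginx/ssl/mywebsite.com.crt;
    ssl_certificate_key /etc/nginx/ssl/mywebsite.com.key;
}

# uploads: TLS without http2, so the upload module keeps talking HTTP/1.1
server {
    listen 443 ssl;
    server_name upload.mywebsite.com;
    ssl_certificate     /etc/nginx/ssl/mywebsite.com.crt;
    ssl_certificate_key /etc/nginx/ssl/mywebsite.com.key;
}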
> For now i am trying to use a separate server block to disable http2 just
>
Hello,
On 08/16/16 07:37, Lukas Tribus wrote:
>> I use nginx 1.11.3 with nginx upload module. The problem is that Nginx upload
>> module doesn't support HTTP/2 and thus when you upload you get 500 Internal
>> Error.
>
>> Use a dedicated subdomain, like upload.myweb
> This is a false statement, nginx doesn't do any restriction
> regarding HTTP/2 and TLS ciphers configuration.
Good to know; the restriction is likely on the browser side, and Apache was
probably not configured with the exact same cipher suite.
> The list you are mentioning and which is directly linked i
> @Lukas do you mean something like this
Yes, that's what I mean.
Lukas
I have a question: secure_link is correctly blocking those requests, so it's
not generating any traffic.
Why does it bother you then, if it is already blocked?
> Yes, the links are generated correctly, but their plugin does not
> currently contain the regex to understand ampersands in HTML. If they were to
> fix their plugin and use regex to replace the ampersand &amp; with & then
> the link would work correctly.
>
> It bothers me because the fix is as
> I'll do it but I guess the test will no longer be so relevant because I want
> to simulate different users.
Real users/browsers DO use keep-alive. Sending thousands of requests per
second, each in a dedicated TLS session, is not what you would see in real
life from real users.
> Anyway, the question is in
> I agree but I think that separate/different simultaneous users won't use a
> common connection so for this very specific scenario keep-alive won't
> matter. Of course for every individual user keep-alive will matter but this
> aspect for the moment I want to ignore in testing.
It does matter, a
> 4 threads and 4 CPU (both for apache and nginx) with 100% CPU load on test
> So, what's the answer now about the http/https (4600/550) ratio for the
> specific case I presented?
It should perform the same as Apache in this case.
> It seems that search engines are probing https: even for sites that
> don't offer it
Which is fine.
> just because it's available for others, with the end
> result that pages are being attributed to the wrong site.
Sounds like an assumption. Any real life experience and
evidence backing t
> > Any real life experience and evidence backing this?
> yes
Care to elaborate?
> Not sure why you're doubting me here Lukas. Yes, this is a problem. No
> I'm not making it up.
We know that crawlers like Googlebot try HTTPS as well, even if there is no
https link towards the website. That is
> > Does it cause warnings in the webmaster tools? Who cares?
> > Does it affect your ranking? I doubt it.
> > Does it index pages or error pages from the default website and assign to
> > your website? I doubt that even more.
>
> Does it upset my customer? YES.
>
> That's all the justification I
> Why should I? I clearly defined the problem/misconfiguration. I don't
> really see the need to justify why I want to fix it.
To help others, myself included, comprehend a possible problem in similar
configurations and learn more about it. After all, this is a community.
> Well, you told me
> Did anyone have a solution for this? I also have many of these errors logged
> because I am using Google Container Engine that does not support IPv6.
Try 'man gai.conf' to configure getaddrinfo behavior [1].
You could also try forcing an ipv6=off nginx resolver by using a variable:
set $blablas
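Something along these lines (resolver address and upstream name are
placeholders):

resolver 8.8.8.8 ipv6=off;

location / {
    set $backend "upstream.example.com";
    proxy_pass http://$backend;
}

Using a variable in proxy_pass makes nginx resolve the name at request time
through the configured resolver, so the ipv6=off setting actually applies.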
> But, just curios, why IPv6 upstream can't serve the traffic?
Because if you configure IPv6 on your system but don't have
IPv6 connectivity, it will try and fail.
> If I access the IP Address using browser, it's normal.
Because the browser probably recognizes the broken
configuration and work
> Each time i change the key file with a new key, is it necessary to run a
> "systemctl reload nginx" ? or do Something else.
Yes, afaik nginx requires a reload.
Haproxy can replace TLS ticket keys via its admin socket [1], so a
reload/restart is not required; I'm not aware of similar nginx functionality.
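For reference, a minimal sketch of the nginx side (the key path is a
placeholder):

# generate a 48-byte ticket key and point nginx at it:
#   openssl rand 48 > /etc/nginx/tls/ticket.key
ssl_session_ticket_key /etc/nginx/tls/ticket.key;

# after writing a new key into that file:
#   systemctl reload nginx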
Hello list,
in Ticket #196 [1], Maxim Dounin suggested that spaces in URIs could be
disallowed by default.
As far as I can tell, the current code still does not "disallow" those requests
(not by default and not via specific configuration either). Is that correct?
Could this be improved, as per t
> I think the main question here: is it ok to just drop support for
> spaces, or we have to introduce some option to preserve the old
> behaviour.
My opinion: I think we will need the configuration knob, so there is time
to fix the problem, as a client bug is not always immediately fixable.
Eith
> Please watch the clip at https://youtu.be/QpLtBftqM04?t=34m51s until
> about 36m12s where Simone Bordet, a Jetty developer, claims that
> HA Proxy is a better proxy solution than nginx because it talks
> HTTP/2 to the Upstream.
This statement is misleading.
As of now, haproxy does not support H
Hello,
> In nginx there is no native support for bcrypt passwords as
> produced by Apache's htpasswd. On the other hand, nginx can use
> all password schemes supported by crypt(3) on your OS. Many
> operating systems do support bcrypt-encrypted passwords in
> crypt(3), and if Apache's varia
Hello!
> One of the bcrypt scheme main properties is that it allows to
> control number of rounds, and thus control hashing speed. With
> low number of rounds it is reasonably fast. For example, with 2^5
> rounds (default used by htpasswd) it takes about 4 milliseconds
> here on a test box:
Hello!
> This issue often happens when a cipher is missing in your cipher list and
> Chrome tries to use another cipher forbidden in the HTTP/2 spec.
Wrong. In that case, Chrome would return:
ERR_SPDY_INADEQUATE_TRANSPORT_SECURITY
which is different than ERR_SPDY_PROTOCOL_ERROR.
Also note tha
Hello,
> Also, has anyone tried using nginx for DNS load balancing in production?
I would not recommend using nginx to load-balance DNS traffic at all.
nginx is just a dumb UDP proxy and I doubt it performs well enough
in a DNS setup.
dnsdist [1] is written with this purpose in mind and used in
Hello,
> thanks for your comment Roman, do you know how these guys did it?
> https://www.maxcdn.com/one/tutorial/pseudo-streaming-maxcdn/
Why is pseudo streaming still a thing?
With HTML5 video players, everything is handled with RFC compliant
range requests and HTML5 video should be supported
Hello,
starting with nginx 1.11.11 you can use worker_shutdown_timeout
to limit the amount of time workers stall the shutdown.
However, you will still have increased memory usage.
You will always have increased memory usage while soft reloading.
If you cannot accept that, then you have to stop
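For reference, the directive mentioned above goes into the main context,
for example:

# nginx.conf, main context (available since 1.11.11)
worker_shutdown_timeout 30s;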
> After some researching i've decided to go with individual nginx
> nodes for now . If we encounter too much request to our
> upstream, i'm gonna set up the multi layer architecture you
> mentioned probably
While multiple layers of nginx cache may help with bandwidth, they
waste a huge amount of storage
Hello,
> I'm currently testing nginx 1.13.6 x64 on my development machine, which is
There is no 1.13.6.
> I've tested 5a3ab1b5804b, 46ddff109e72, and 924b6ef942bf and they have the
> same problem.
Ah, so you are running directly from the development tree. In that case, I
suggest bisecting it
.mov is not .mp4; they are completely different containers.
> To: nginx@nginx.org
> Subject: Nginx 1.2.7 / h.264 / flowplayer
> From: nginx-fo...@nginx.us
> Date: Fri, 15 Mar 2013 13:30:50 -0400
>
> I'm running Nginx v1.2.7 from the Nginx repo, with mp4
Very simple: features.
haproxy has a huge list of features for reverse proxying that nginx
doesn't have; varnish has the same for caching.
If you can do everything with nginx, go for it. But for more complex
scenarios, and if you really need the highest possible performance,
you probably want to stick to
> Why would you doubt that? Of course, my machines may be bigger than the
> norm...
Because nginx doesn't do TCP splicing. Is my assumption wrong? Are you able to
forward 20 Gbps with nginx? Then yes, you probably have huge hardware, which
isn't necessary with haproxy.
Upgrade to squid >= 3.2, which seems to support HTTP/1.1, and you will
have your persistent connections with squid:
http://www.squid-cache.org/mail-archive/squid-users/201108/0061.html
http://wiki.squid-cache.org/Squid-3.2
This may be a dumb question, but why do you use different host names
in the first place? Is it a real business requirement to have a host
name per domain? Simply using a single host name for all domains would
solve all your issues here.
If this really is a business requirement for you (maybe the s
> there is no nginx.core file found in /home/core/ , thats the path specified
> in nginx.conf
Make sure:
- /home/core/ is writable (chmod a+w)
- ulimit is configured correctly
- fs.suid_dumpable is set appropriately (see the sketch below)
You can read more about it here:
http://www.cyberciti.biz/tips/linux-core-dumps.html
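Roughly, the checklist above translates to the following (values are
examples; see also the working_directory and worker_rlimit_core directives):

# allow core dumps for the nginx processes
ulimit -c unlimited

# workers usually drop privileges, so allow dumps from setuid processes
sysctl -w fs.suid_dumpable=2

# make the target directory writable
chmod a+w /home/core/

# nginx.conf, main context:
#   working_directory  /home/core/;
#   worker_rlimit_core 500m;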
Everything you need to know:
http://nginx.org/en/docs/http/ngx_http_core_module.html#listen
Hi!
> We are running a video stream website and using nginx(1.2.1) for
> streaming.
Get nginx 1.2.8. There are at least 2 bugfixes regarding mp4
streaming between 1.2.1 and the latest stable version.
Hi!
> Cfr. http://www.securityfocus.com/archive/1/526439/30/0/threaded
>
> Is 1.4.x release affected?
I guess. Please see the "recent nginx security issue announce" thread.
Cheers,
Lukas
> Although [::]:80 ipv6only=off; does work as advertized (including for
> localhost sockets), [::1]:80 ipv6only=off; fails to respond to v4
> connections.
Which is expected, since ::1 is an ipv6 address.
Lukas
Hi!
> how to disable nginx internal dns cache?
Use an IP address instead of a hostname in the proxy_pass directive.
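If you need to keep the hostname, the usual workaround is to force runtime
resolution with a resolver and a variable (addresses and names are
placeholders; valid= overrides the DNS TTL):

resolver 127.0.0.1 valid=30s;

location / {
    set $upstream "backend.example.com";
    proxy_pass http://$upstream;
}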
> both of them doesnt work.
Can you elaborate on what "it doesn't work" means?
Lukas
Hi Jim,
> Everything else sees connections to anything in 127.0.0.0/8
Not sure what you mean by "everything else", but I don't think
that's the case.
See this example:
> lukas@ubuntuvm:~$ grep Listen /etc/ssh/sshd_config
> ListenAddress ::1
> ListenAddress 10.0.0.55
> lukas@ubuntuvm:~$ sudo netst
Even when explicitly setting the socket option IPV6_V6ONLY to 0
(man 7 ipv6), and thus ignoring 'cat /proc/sys/net/ipv6/bindv6only',
this doesn't work.
> While looking into this, I found that, when given ::1, nc(1)
> explicitly listens to both ::1 and :::127.0.0.1.
The behavior here is exac
Hi,
nginx does its job correctly:
$ curl -I http://nginx.org/404
HTTP/1.1 404 Not Found
[...]
I guess the irregular response comes from your mod_perl backend? Did you
check that out?
Capturing nginx's debug output of such a request will probably help.
http://nginx.org/en/docs/deb
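If your binary was built with --with-debug, something like this captures it
(paths and the client address are examples):

error_log /var/log/nginx/debug.log debug;

# optionally limit debug logging to a single client:
events {
    debug_connection 192.0.2.10;
}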
Hi,
> Nginx responses show (regardless of how I try to connect):
>
> Scheme: http
> Name: my.example.com
> Port: 80
> [...]
> proxy_set_header Host $http_host;
> proxy_set_header X-Forwarded-By $server_addr:$server_port;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_set_h
Perhaps munin connects over ipv6? Can you allow ::1?
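Assuming munin polls a stub_status location, that would be roughly
(the location name is whatever you already use):

location /nginx_status {
    stub_status on;
    allow 127.0.0.1;
    allow ::1;
    deny  all;
}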
Hi Richard,
> Given that the mp4 module works remarkably well on the whole, where
> should I look to find the cause of these rare errors? (and what do they
> actually mean :))
>
> nginx version: nginx/1.2.9
I would suggest you upgrade nginx. Between nginx/1.2.9 and recent releases
there are 4
Hi!
> Date: Fri, 28 Jun 2013 17:11:01 +0100
> From: rkears...@blueyonder.co.uk
> To: nginx@nginx.org
> Subject: mp4 atom too large
>
> Hi
>
> I use ngx_http_mp4_module quite heavily, and very occasionally I see
> this error for a few files:
>
> mp4 atom too
Hi Richard,
> I already checked there, I'm getting a different error ("mp4 atom too
> large" != "mp4 moov atom is too large")
> My error message seems to have been added in this patch
> http://nginx.org/download/patch.2012.mp4.txt
> In any case, the example given there gives a reasonable example,
Hi!
> Missing NPN Extension in SSL/TLS Handshake
Did you compile openssl on your own?
Can you post the output of "openssl version -a"?
Sounds to me as if OpenSSL was built without TLS extensions.
Thanks,
Lukas
Can you run ldd against the nginx executable?
Lukas
Hi Dejan,
> If I use character reference in html file to represent a character and
> web server sends the file on browser request, how the browser will
> decode the character reference?
> My Nginx web server is configured to not send character encoding in the
> header I have set character encodin
Hi,
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 1196k    0 1196k    0   343   2047      0 --:--:--  0:09:58 --:--:--  1888
> curl: (56) Recv failure: Connection reset by p
Hi!
> If this were the root cause, wouldn't the cURL call fail in the same way,
> regardless of the CURLOPT_SSL_VERIFYPEER value? In other words, it
> doesn't seem like changing this cURL option would change the number of
> backend processes required to handle the request(s). But I could be wrong.
Hi!
> *) Bugfix: OpenSSL 1.0.1f compatibility.
> Thanks to Piotr Sikora.
Since SSL_OP_MSIE_SSLV2_RSA_PADDING is more than obsolete now, shouldn't
we remove it completely instead of just ifdef'ing it? At least in the
1.5 branch?
Thanks!
Lukas
Hi!
> What you have shown looks well-formed to me, but doesn't look as useful
> as you want. (They are different things. If it is a well-formed http
> 429 response, then it is the client's job to know what that means. The
> reason-phrase and the http body content are optional enhancements that
>
Hi!
> I'm facing a small problem with NGINX; The workers are segfaulting since
> 11:20 this morning.
> [...]
> I can provide some cores, but I can't attach them here. My setup was running
> fine till today (which has some coincidence with a new webservice
> deployed).
>
> Please could you provide
Hi!
> I'm facing a small problem with gdb and separate debuginfo's. Do you build
> with the '-g' compiler option?
Probably not. Please check with
file /usr/sbin/nginx
Does the repository contain a special debug build like nginx-debug or
something? Could you install it?
Whoever maintains the
Hi!
> The patch as in the ticket is wrong, it only hides the real
> problem. Proper patch to solve the problem is to be coded.
>
> As the problem can be easily resolved by using symmetrical session
> cache configuration (better yet, using a single session cache at
> http level), it's not a high
Hi Adam,
> FYI:
> http://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/
>
> We started with a ~1800ms overhead for our TLS connection (nearly 5
> extra RTTs); eliminated the extra certificate roundtrip after a nginx
> upgrade; cut another RTT by forcing a smaller record
Hi,
> On 17 December 2013 08:46, Lukas Tribus wrote:
>> Hi Adam,
>>
>> Thanks, this is very helpful. Are you trying to upstream the record size
>> patch?
>>
>> What I don't get from your patch, it seems like you are hardcoding the
>> buffer t
Hi!
>>> What I don't get from your patch, it seems like you are hardcoding the
>>> buffer to 16384 bytes during handshake (line 570) and only later use a
>>> 1400 byte buffer (via NGX_SSL_BUFSIZE).
>>>
>>> Am I misunderstanding the patch/code?
>
> It may well be the case that I'm misunderstanding
Hi,
> Since iOS7 supports TCP Multipath now, I think more and more devices
> will start support it.
Not if the servers don't support it.
Apple pushed it for a specific reason:
To avoid having a broken TCP session when the IP address of the handheld
changes, which would interrupt Apple's Siri.
But
Hi,
> It does not look like 1.0.1f changed the default behavior of
> ENGINE_rdrand (coderman's been following it).
Yes it did, rdrand is no longer enabled by default. Here [1] is
the backport in the OpenSSL_1_0_1-stable head [2].
At least Debian [3] and Ubuntu backported this as well.
Regard
Hi,
> My current values in my nginx configuration for ssl_protocols/ciphers
> what i use is this:
>
> ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
> ssl_ciphers RC4:HIGH:!aNULL:!MD5;
> ssl_prefer_server_ciphers on;
>
> What are todays recommendations for ssl_ciphers option for supporting
> all curr
Hi,
> From this thread on the mailing list
> http://forum.nginx.org/read.php?11,96472,214266 , it appears that nginx
> does not support decompressing HTTP request from a client. The thread
> however is 2 years old and am wondering if there have been any changes.
> I have not found any though
>
>
>> From this thread on the mailing list
>> http://forum.nginx.org/read.php?11,96472,214266 , it appears that nginx
>> does not support decompressing HTTP request from a client. The thread
>> however is 2 years old and am wondering if there have been any changes.
>> I have not found any though i
Hi,
>> Yes I agree. The connection to the upstream server uses the nginx server
>> certificates specified by $ssl_certificate(_key).
>
> It looks like you didn't understand my answer. Again: connections
> to upstream servers don't use any client certificates. That is,
> no certificates are used b
Hi,
> I'll rephrase the question. I'm interested in server certificates (not
> client). The ssl_certificate_key file is used as a private key for the
> server to decrypt ssl connections for clients. I'm looking to configure
> another key for encrypting ssl connections from nginx server to upstre
> I am using client certificates on nginx side to connect to upstream https.
> Issues is when I turn on client verification on upstream server, nginx
> doesn't provide the client certificates.
>
> Any ideas why?
Please read Maxim's responses.
Hi,
> I want to do a tcp to tls proxy. we need to communicate to apple server
> via tls (tcp over ssl). our server does not have internet access so we
> need to use a proxy server that has internet access which can
>
> * either accept the tcp communication and do a tls communication with
> apns.
Hi Maxim,
> You've changed SSL session timeout from 10 minutes to 24 hours,
> and this basically means that sessions will use 144 times more
> space in session cache. On the other hand, cache size wasn't
> changed - so you've run out of space in the cache configured. If
> there is no space in a c
Hi,
> I am trying to stop my customers that are trying to connect from an
> insecure web browser (my goal is to use only TLS1.2). I have read
> the documentation and I am able to set correct ssl ciphers and
> protocols on the server side, but I am interested in serving custom
> page when they are
Hi,
> I have nginx set as a reverse proxy for a mail server and it throws
> this 502 (invalid header) error while trying to fetch a file with a
> space in the filename. Any clues on where is this bug in the nginx code?
Prior to jumping to conclusions about bugs in nginx, how does this response
hea
Hi,
> What debugs should i enable & how to see these response headers ? I do
> see this error though.
Just use curl for example and request it directly from your backend:
curl -k -I
"https://127.0.1.1:8443/service/home/~/?auth=co&loc=en_GB&id=259&part=3";
So you can check the actual respons
Hi Kunal,
> I used the web browser but didn't see this Content-disposition header
> in the response. Only saw these response headers.
We need to see the Content-Disposition header; nothing else makes sense without it.
Are you trying against the nginx frontend or your backend? If it is nginx
you're connecti
Hi,
> I downloaded another file and the Content-Disposition header lists the
> filename with space under quotes correctly "zcs error.docx" thereby
> proving that its nginx which is not parsing it correctly. Correct me if
> i am wrong.
Is this specific response going through nginx or directly fro
FYI, nginx has no problems passing filenames with spaces along:
# curl -I http://direct-apache/content-disposition-header.php
HTTP/1.1 200 OK
Date: Mon, 24 Mar 2014 19:40:22 GMT
Server: Apache/2.4.2 (Win32) OpenSSL/1.0.1c PHP/5.4.4
X-Powered-By: PHP/5.4.4
Cache-Control: no-store, no-cache
Connec
> hmm..thanks Lukas.
> So its my backend server only which is causing this issue.
From the information provided in this thread, I can't tell.
We would need the exact response header that makes nginx return
the 502 response, plus detailed information about your setup (output
of nginx -v and your
> Never mind there's nothing wrong with nginx here.
> It was one of the response headers sent by an upstream server
> (mainly Content-Description: 2013923 10H56M56S633_PV.doc�) including
> this non-ascii char '?' which the nginx didn't like and hence flagged
> it saying that it received an i
Hi,
> Sadly not quite. The change in IP means that the eCommerce part of the
> site must be served through https:, but there seems to be a terrible lag
> - even though TTL has been set to 5 minutes for weeks - for customers in
> picking the change up.
>
> This means that the old IP address needs
Hi,
> Mainly because I can't seem to get it to work - nginx, apache or
> iptables.
>
> I'm sure someone can come forward with technical reasons why...
In this thread you asked how this could be done; you didn't say
that you had already tried something and that it didn't work.
So you are hopin
Hi,
> Mar 27 08:05:44 DNTX014 abrt[10150]: Saved core dump of pid 5803
> (/usr/local/sbin/nginx) to
> /var/spool/abrt/ccpp-2014-03-27-12:05:44-5803 (60538880 bytes)
>
>
> Could someone tell me what is that ?
It's a crash.
Provide output of "/usr/local/sbin/nginx -V" and check:
http://ngin
> nginx -V
> nginx version: nginx/1.2.8
That's not the complete output of -V (capital letter). Either you
truncated the output yourself or you sent us the output of -v.
Please send the complete output of "nginx -V", where the V is a
capital letter.
Also compile nginx without third party mod
> That's not the complete output of -V (capital letter). Either you
> truncated the output yourself or you sent us the output of -v.
>
> Please send the complete output of "nginx -V", where the V is a
> capital letter.
>
>
> Also compile nginx without third party modules.
And, also important, u
Hi,
> [root@DNTX002 nginx-1.2.1]# nginx -V
> nginx version: nginx/1.2.1
> built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
> configure arguments: --add-module=/root/nginx_mod_h264_streaming-2.2.7
> --with-http_flv_module --with-file-aio --sbin-path=/usr/local/sbin
> --with-debug
nginx_m
Hi,
> One quick question, i've updated nginx to 1.4.7 with http_mp4_module.
> What if i go with the same config as before ? i.e
That should work, as all you really need to do is use the "mp4;" keyword.
(I was not aware that the config keyword is actually exactly the same between
the third party
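For reference, with the stock http_mp4_module the relevant part of the
configuration is roughly (location and buffer sizes are examples):

location /videos/ {
    mp4;
    mp4_buffer_size     1m;
    mp4_max_buffer_size 5m;
}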
> I am attempting to install NginX on Ubuntu 12.04 using instructions found
> from the following link:
>
> http://wiki.nginx.org/Install
>
> but I am getting various error messages.
>
> Does anyone have updated instructions for 12.04?
http://nginx.org/en/linux_packages.html#stable
Hi,
> Thanks for the link. On a quick read it seems their conclusion is that
> while it is *extremely* unlikely that your private key(s) was/were
> stolen using nginx, you should still re-key and revoke. While
> comforting, not really of any great practical help.
They updated the post, their ini
> Hi
>
> I was watching this video by fastly ceo http://youtu.be/zrSvoQz1GOs?t=24m44s
> he talks about the nginx ssl handshake versus apache and comes to the
> conclusion that apache was more efficient at mass handshakes due to
> nginx blocking while it calls back to openssl
>
> I was hoping to ge
> Hello all,
>
> My setting works well through nginx->apache but not through
> nginx->varnish->apache
>
> apache is configured to listen to port 8080 . when nginx uses
>
> proxy_pass http://127.0.0.1:8080
>
> the sites are running fine.
>
> If I introduce varnish after nginx by [proxy_pa