Hi Anoop,

This is great and really valuable information, thank you.
I'd heard that CloudFlare use a variant of Nginx for providing SSL termination, which was why I was hopeful that it would be able to manage our use case.

Kind regards,
Richard

On Tue, 2019-02-12 at 07:31 +0530, Anoop Alias wrote:
I maintain an Nginx config generation plugin for a web hosting control panel, where people normally put this sort of high number of domains on a server. The things I notice are:

1. Memory consumption by the worker processes goes up as the vhost count goes up, so we may need to reduce the worker count.
2. As already mentioned, a reload might take a lot of time, and so does nginx -t.
3. Even startup will take time, as most package maintainers put an nginx -t in ExecStartPre (similar in non-systemd setups), which takes a long time to run on startup.

I have read somewhere that Nginx is not good at handling this many vhost definitions, which is why CloudFlare use a dynamic setup (like the one in OpenResty) at their edge servers for SSL.

On Tue, Feb 12, 2019 at 1:25 AM Peter Booth via nginx <nginx@nginx.org> wrote:
+1 to the OpenResty suggestion. I've found that whenever I want to do something gnarly or perverse with nginx, OpenResty helps me do it in a way that's maintainable and with any ugliness minimized. It's like nginx with super-powers!

Sent from my iPhone

On Feb 11, 2019, at 1:34 PM, Robert Paprocki <rpapro...@fearnothingproductions.net> wrote:
FWIW, this kind of large installation is why solutions like OpenResty exist (providing for dynamic config/cert service/hostname registration without having to worry about the time/expense of re-parsing the Nginx config).

On Mon, Feb 11, 2019 at 7:59 AM Richard Paul <rich...@primarysite.net> wrote:
Hi Ben,

Thanks for the quick response. That's great to hear, as we'd only have found this out after putting rather a lot of effort into the process.
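On Anoop's point 3: most packaged systemd units run a full config test before the daemon starts, along these lines (an illustrative excerpt from memory, not a verbatim copy; exact paths and flags vary by distribution):

```ini
# /lib/systemd/system/nginx.service (illustrative excerpt; paths vary by distro)
[Service]
Type=forking
PIDFile=/run/nginx.pid
# The whole config, including every vhost file, is parsed here before the
# daemon even starts -- with tens of thousands of vhosts this step alone
# can add noticeably to startup time.
ExecStartPre=/usr/sbin/nginx -t -q
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
```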
We'll be hosting these on cloud instances, but since those aren't the fastest machines around I'll take the reloading as a word of caution. (We're probably going to have to build another piece of application functionality to handle this, so that we only reload when we have domain changes rather than on the regular schedule that I'd thought would be the simplest method.) I have a plan for the rate limits, but thank you for mentioning them. SANs would reduce the number of vhosts, but I'm not sure about the added complexity of managing the vhost templates and the key/cert naming.

Kind regards,
Richard

On Mon, 2019-02-11 at 16:35 +0100, Ben Schmidt wrote:
Hi Richard,

We have experience with around a quarter of that number of vhosts on a single server, with no issues at all. Reloading can take up to a minute, but the hardware isn't what I would call recent. The only thing you'll have to watch out for is the Let's Encrypt rate limits:
https://letsencrypt.org/docs/rate-limits/

$ ls /etc/letsencrypt/renewal | wc -l
1647

We switched to using SAN certs whenever possible. Around 8 years ago I managed an 8000-vhost web farm with Apache. No issues either.

Cheers,
Ben

On Mon, Feb 11, 2019 at 4:16 PM rick_pri <nginx-fo...@forum.nginx.org> wrote:
Our current setup is pretty simple: we have a regex capture to ensure that the incoming request is a valid ASCII domain name, and we serve all our traffic from that. Great ... for us. However, our customers, with about 12,000 domain names at present, have started to become quite vocal about having HTTPS on their websites, for which we provide a custom CMS and website package. This means we're about to create a new Nginx layer in front of our current servers to terminate TLS, which will require us to set up vhosts for each certificate issued, with server names matching what's in the certificate's SAN.
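For what the per-certificate vhosts in such a TLS layer might look like, here is a minimal sketch (all names and paths are hypothetical; the upstream name stands in for the existing HTTP layer):

```nginx
# Hypothetical per-certificate vhost -- one such block per issued
# certificate, with server_name listing the names in that cert's SAN.
server {
    listen 443 ssl;
    server_name customer1.example www.customer1.example;

    ssl_certificate     /etc/letsencrypt/live/customer1.example/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/customer1.example/privkey.pem;

    location / {
        proxy_pass http://backend_cms;  # existing plain-HTTP server layer
    }
}
```

With ~12,000 such blocks, nginx may also need `server_names_hash_max_size` and `server_names_hash_bucket_size` raised so the server-name lookup table can be built at all.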
To keep this simple we're currently thinking about just having each domain, and its www subdomain, on its own certificate (Let's Encrypt) and vhost, but that is going to lead, approximately, to the number of vhosts mentioned in the subject line. As such, I wanted to put the feelers out to see if anyone else had tried to work with large numbers of vhosts, and what issues they may have come across.

Kind regards,
Richard

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282986,282986#msg-282986

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
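As a rough illustration of the SAN alternative Ben raised: Let's Encrypt allows up to 100 names per certificate, so packing each domain together with its www variant into shared SAN certificates cuts the certificate (and potentially vhost) count dramatically. A minimal sketch, with made-up domain names:

```python
MAX_SANS = 100  # Let's Encrypt's per-certificate name limit

def batch_domains(domains, max_sans=MAX_SANS):
    """Group each domain plus its www. variant into SAN-sized batches."""
    names = []
    for d in domains:
        names.extend([d, "www." + d])
    # Chunk the flat name list into groups of at most max_sans names.
    return [names[i:i + max_sans] for i in range(0, len(names), max_sans)]

domains = ["customer%d.example" % i for i in range(12000)]
batches = batch_domains(domains)
print(len(batches))  # 24,000 names / 100 per cert = 240 certificates
```

So roughly 240 certificates instead of ~12,000 single-domain ones, at the cost of the template and key/cert-naming complexity Richard mentions above.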