On 6/7/18 9:27 AM, Reinis Rozitis wrote:
> this patch https://github.com/FRiCKLE/ngx_cache_purge/commit/c7345057ad5429617fc0823e92e3fa8043840cef.diff

Noted, thx.

> In my case, on one project we decided/had to switch from Varnish to nginx caching because Varnish (even when using disk-based (mmap/file) backend storage) has a memory overhead per cacheable object (roughly ~1 KB).

> While 1 KB doesn't sound like much, when you start to have millions of objects it adds up. In this case, even though we had several terabytes of fast SSDs, the actual bottleneck was that there was not enough RAM - the instances had only 32 GB, so in general there couldn't be more than ~33 million cached objects. Nginx, on the other hand, deals with 800+ million (and increasing) objects on the same hardware without a problem.

Point taken. Not an issue for my typical use case; may come up in future, so good to remember.
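
For reference, the in-memory cost on the nginx side is essentially the keys_zone of proxy_cache_path -- the nginx docs say a one-megabyte zone holds roughly 8 thousand keys -- so it's worth sizing that up front. A rough sketch, with made-up paths, names and sizes:

    # made-up path/zone names and sizes; keys_zone is the shared-memory
    # index of cache keys (~8k keys per 1 MB per the nginx docs),
    # max_size caps the on-disk cache
    proxy_cache_path /var/cache/nginx levels=1:2
                     keys_zone=bigcache:4096m
                     max_size=4000g inactive=30d use_temp_path=off;

    server {
        listen 80;
        location / {
            proxy_cache bigcache;
            proxy_pass  http://127.0.0.1:8080;   # made-up backend
        }
    }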

> p.s. there is also obviously the SSL thing with Varnish vs nginx .. but that's another topic.

No real "vs" or "thing" IME. nginx(ssl terminator) -> varnish -> nginx works quite nicely.

There's also Varnish's own TLS terminator, Hitch, as an alternative,

 https://www.varnish-software.com/plus/ssl-tls-support/
 https://github.com/varnish/hitch

which I've been told works well; I haven't bothered since I've already got nginx in place on the backend -- adding a listener on the frontend is trivial.