Hi, I was looking into using the proxy_cache_lock mechanism to collapse concurrent upstream requests and reduce traffic to the origin. It works great right out of the box, but one issue I found is this: if there are n client requests held by proxy_cache_lock, only the one request that was passed to the upstream gets the response as soon as the upstream sends it to nginx; the remaining n-1 clients wait until the response has been fully flushed to the cache file, after which their locked requests are served as HITs from that cached file.
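For context, here is a minimal sketch of the kind of configuration involved (the cache path, zone name, key, validity times, and upstream below are placeholders, not my actual setup):

    # http-level: shared cache zone (path and zone name are placeholders)
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge_cache:10m
                     max_size=1g inactive=10m;

    upstream backend {
        server 127.0.0.1:8080;   # placeholder origin
    }

    server {
        listen 80;

        location / {
            proxy_cache       edge_cache;
            proxy_cache_key   $scheme$proxy_host$request_uri;
            proxy_cache_valid 200 10s;

            # Only one request per cache key is passed to the upstream to
            # populate the cache; other requests for that key wait.
            proxy_cache_lock         on;
            # Waiters stop waiting after this long and go to the upstream
            # themselves, without caching the response; default is 5s.
            proxy_cache_lock_timeout 5s;
            # After this long, one more request may be passed to the
            # upstream; default is 5s.
            proxy_cache_lock_age     5s;

            proxy_pass http://backend;
        }
    }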
Please correct me if this understanding is incorrect. If it is correct, I have two questions:

1. Are there any efforts to support streaming the response back to all of the proxy_cache_lock'ed clients simultaneously, as soon as it arrives from the upstream? If this is not a priority, I would like to know the reasoning behind that, so I can make an informed decision on how to proceed.

2. Why was 500ms chosen as the wait interval for the ngx_http_file_cache_lock_wait event? Making locked requests wait half a second would drive up TTFB for live-streaming customers.

Thank you.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290604,290604#msg-290604