2009/6/9 shimi <linux...@shimi.net>:
> Look,
>
> Basically, if your HTTP connection handler handles many connections well -
> your module (backend) processing would become the bottleneck. That's how it
> usually happens. So if you wrap the requests with a frontend proxy (again, I
> recommend nginx) and just put an error log for the relevant vhosts there,
> every time nginx cannot pass a request to the backend it will be logged.
> Then you only need to look at the log, and that's it!
Thanks. I looked at nginx a while ago and it looks like a great performance
booster. We might be able to use it for some of the applications if/when it
comes to that, and I'll consider it for this problem for most of them.

On the other hand:

1. It means adding another application-level proxy where right now we are
very happy with LVS's performance, transparency and the way it handles all
the other traffic (we also use it for internal VIP forwarding among the
various components of the system). In other words, yet another technology
to maintain on top of LVS, which already serves us well.

2. One of the applications does a lot of TCP/IP-level connection sniffing,
so it can't sit behind an application-level proxy; it must have a direct
connection to the browser. LVS works for us here because it acts like a
bridge - it doesn't touch anything inside the packet except the destination
MAC address.

Your suggestion to compare nginx's incoming vs. outgoing counts gave me
another idea: I'll see whether I can get such stats from the LVS server
itself, though LVS could also be dropping connections for lack of space to
track all of them (which closes the loop - how can I tell that nginx's own
server isn't dropping incoming connections that nginx itself never sees?).
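Roughly the kind of counters I have in mind (a sketch only - the exact
output and message wording vary with the kernel and ipvsadm versions, and
since we run direct routing the director only sees the inbound half of each
connection):

  # On the LVS director: per-virtual-service connection/packet counters
  ipvsadm -L -n --stats
  cat /proc/net/ip_vs_stats

  # On the real server: connections the kernel dropped before the
  # application (nginx or our own daemon) ever saw them - look for the
  # listen-queue overflow / SYN-drop lines
  netstat -s | grep -i listen

If the listen-queue counters on the real server stay at zero while clients
still see failures, the drops are presumably happening earlier in the chain
(on the director or the network) rather than on the server itself.

Thanks,

--Amos
_______________________________________________
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il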