On Sun, May 22, 2011 at 3:10 PM, Robert Haas <robertmh...@gmail.com> wrote:
...
> However, in this case, there was only one client, so that's not the
> problem. I don't really see how to get a big win here. If we want to
> be 4x faster, we'd need to cut time per query by 75%. That might
> require 75 different optimizations averaging 1% a piece, most likely
> none of them trivial. I do confess I'm a bit confused as to why
> prepared statements help so much. That is increasing the throughput
> by 80%, which is equivalent to decreasing time per query by 45%. That
> is a surprisingly big number, and I'd like to better understand where
> all that time is going.
On my old 32-bit Linux box the difference is even bigger: a 150% increase in throughput (4000 vs 9836 tps) when using prepared statements.

According to gprof, over half of that extra time is going to planning, specifically standard_planner and its children. Unfortunately, once you dig down below that level the time is spread all over the place, so there is no one hot spot to focus on.

I don't trust gprof all that much, so I've also poked at tcop/postgres.c a bit to make it do silly things like parse the statement repeatedly and throw away all results but the last one (and similar things with analyzing/rewriting, and with planning), and then see how much slower that makes things (a rough sketch of what I mean is at the end of this mail). Here too the planner is the slow part. But extrapolating backwards, parsing, analyzing, and planning all together only account for 1/3 of the extra time of not using -M prepared. I don't know where the other 2/3 of the time is lost. It could be, for example, that parsing the command twice does not take twice as long as parsing it once, due to L1 data and instruction caching, in which case extrapolating backwards is not very reliable.

But by both methods, the majority of the extra time that can be accounted for is going to the planner.

Cheers,

Jeff
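P.S. Below is a rough, untested sketch of the sort of hack described above, just to make the idea concrete. exec_simple_query() and pg_parse_query() are the real entry points in tcop/postgres.c; the loop and the PARSE_REPEATS constant are purely illustrative and are not the exact patch used for the numbers quoted above.

    /*
     * Illustrative only: inside exec_simple_query() in tcop/postgres.c,
     * replace the single call
     *
     *     parsetree_list = pg_parse_query(query_string);
     *
     * with a loop that parses the same string several times and keeps
     * only the last result.  PARSE_REPEATS is a made-up constant, not a
     * real setting.  Earlier results are simply abandoned; the
     * per-message memory context cleans them up at end of query.
     */
    #define PARSE_REPEATS 10

    {
        int     i;

        for (i = 0; i < PARSE_REPEATS; i++)
            parsetree_list = pg_parse_query(query_string);
    }

Comparing the runtime of such a build against an unpatched one, and dividing the difference by the number of extra iterations, gives a crude per-parse cost; the same trick can be applied around pg_analyze_and_rewrite() and pg_plan_queries() to estimate the analysis/rewrite and planning costs.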