On Nov 24, 2007 11:35 AM, Simon Riggs <[EMAIL PROTECTED]> wrote:
> On the plus side, there are many very savvy people out there too and all
> the performance features we put in are being used in serious ways. But
> we must cater for both the top end and bottom end of the application
> spectrum.
Totally agree with Simon. PostgreSQL is my database of choice for every application because it's fast, rock solid and highly consistent. I would rather not advise people to use MySQL because their application is too simple, doesn't use prepared statements, or for any other reason.

Moreover, AFAIK, prepared statements are not always a good solution, especially when the statistics vary a lot depending on the input. I see this overhead with more complicated queries too, queries which won't perform well if I use the same plan for all parameter values. And this is not a hypothetical situation, as the data in this particular database are far from equally distributed (a lot of information for big cities, little for small cities).

I must admit I'm used to seeing every PostgreSQL version run faster than the previous one :). Perhaps synchronized scans or Florian's optimization will make the database faster after all; I can't really tell at this time. But they will have to make my database 4% faster to compensate for the current loss.

Tom, in my tests the slowdown went down from 8% to 4%, but it's still there and measurable. That's fairly consistent with the 3% slowdown you saw in your tests. The fact that you had only 3% overhead is still bugging me, though. I'll dig a bit further to see if I can find something interesting or if there's something wrong with my setup.

--
Guillaume