On Thu, Feb 16, 2012 at 6:30 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> I wrote:
>> BTW, an entirely different line of thought is "why on earth is @@ so
>> frickin expensive, when it's comparing already-processed tsvectors
>> with only a few entries to an already-processed tsquery with only one
>> entry??".  This test case suggests to me that there's something
>> unnecessarily slow in there, and a bit of micro-optimization effort
>> might be well repaid.
>
> Oh, scratch that: a bit of oprofiling shows that while the tsvectors
> aren't all that long, they are long enough to get compressed, and most
> of the runtime is going into pglz_decompress not @@ itself.  So this
> goes back to the known issue that the planner ought to try to account
> for detoasting costs.
This issue of detoasting costs comes up a lot, specifically in reference
to @@.  I wonder if we shouldn't try to apply some quick-and-dirty hack
in time for 9.2, like maybe charging random_page_cost for every row or
every attribute we think will require detoasting.  That's obviously going
to be an underestimate in many if not most cases, but it would probably
still be an improvement over assuming that detoasting is free.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
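
[Editor's sketch: a minimal, self-contained illustration of the surcharge
Robert describes, not actual planner code.  The function name, the
hard-coded random_page_cost value, and the attribute count are hypothetical
stand-ins for the real GUC and column statistics.]

    #include <stdio.h>

    /* Hypothetical stand-in for the planner's random_page_cost GUC
     * (default 4.0 in a stock installation). */
    static const double random_page_cost = 4.0;

    /*
     * Rough per-scan surcharge for detoasting: charge one random_page_cost
     * per row for each attribute we expect to require detoasting.  As the
     * email notes, this will often underestimate, since a wide value can
     * span many TOAST chunks, but it beats charging nothing at all.
     */
    static double
    detoast_cost_surcharge(double ntuples, int detoast_attrs)
    {
        return ntuples * (double) detoast_attrs * random_page_cost;
    }

    int
    main(void)
    {
        /* e.g. a scan of 10000 rows where one tsvector column is
         * usually compressed/toasted */
        printf("extra cost: %.1f\n", detoast_cost_surcharge(10000.0, 1));
        return 0;
    }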