Hi, this is with 9.2.4_PGDG on FC18 / 64-bit, upgraded from 9.1.8 via dump/restore, with settings kept for the most part.
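For reference, here is a rough stand-in for the schema involved (table name, column list, and index names are simplified, not my actual objects):

  -- simplified stand-in for the real table (about 1.5M rows)
  CREATE TABLE mytable (
      id        serial PRIMARY KEY,
      "MYFIELD" varchar(100)
  );

  -- the two btree indexes on the field in question
  CREATE INDEX mytable_myfield_ops_idx ON mytable ("MYFIELD" varchar_ops);
  CREATE INDEX mytable_myfield_pat_idx ON mytable ("MYFIELD" varchar_pattern_ops);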
The table has 1.5M records; the varchar(100) field in question has both a *varchar_ops* and a *varchar_pattern_ops* btree index. Three cases:

- "MYFIELD" like 'BLA BLA%' : *OK, about 7 ms*
- "MYFIELD" like 'BLA% BLA' : *OK, about 20 ms*
- "MYFIELD" like '%BLA BLA' : *NOT OK*

In the third case the query takes anywhere between 4,200 ms and over 83,000 ms to return 2 results; in one case I broke it off after 10 MINUTES. I never noticed this sort of behavior in the 9.1 series.

According to EXPLAIN, the query resolves into an index-only scan. I tried turning index-only scans off to see how it behaves, but the toggle in the .conf apparently has no effect on the problem. What I find interesting is that with the default settings it resolves into an index-only scan on the *varchar_pattern_ops* index, whereas after setting enable_indexonlyscan=off it resolves into an index-only scan on the *varchar_ops* index. Another peculiarity is that the EXPLAIN for the bad case does not display the "Sort" icon for the ORDER BY clause, whereas the OK cases do display it.

The problem does not appear to be a resource problem either: I have the settings and resources to pull the whole table plus indexes into RAM, yet there is constant disk activity during the query in the bad case. And yes, I DID reindex and/or *vacuum verbose analyze* the table after each relevant change.

Thanks for any feedback on this. If you need any further info I'll be happy to help as much as possible.

RD
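P.S. In case it helps, these are essentially the statements I am timing, using the simplified names from above (the actual search strings differ, and the ORDER BY column is shown as the same field purely for illustration):

  -- case 1: trailing wildcard, fast (~7 ms)
  EXPLAIN ANALYZE
  SELECT * FROM mytable WHERE "MYFIELD" LIKE 'BLA BLA%' ORDER BY "MYFIELD";

  -- case 2: wildcard in the middle, fast (~20 ms)
  EXPLAIN ANALYZE
  SELECT * FROM mytable WHERE "MYFIELD" LIKE 'BLA% BLA' ORDER BY "MYFIELD";

  -- case 3: leading wildcard, the slow one (4,200 ms up to 10+ minutes)
  EXPLAIN ANALYZE
  SELECT * FROM mytable WHERE "MYFIELD" LIKE '%BLA BLA' ORDER BY "MYFIELD";

  -- session-level equivalent of the postgresql.conf toggle I experimented with
  SET enable_indexonlyscan = off;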