> Proving that a set of comparison operators are consistent just by
> examining their runtime behavior is probably equivalent to solving the
> halting problem. I can't see us doing it, or wanting to accept the
> overhead of checking it even if it could be done.
The overhead of checking is very minimal. When we update, we just have
to carry the tuple id of the heap tuple and the inserting transaction
id, and check whether they match the index snapshot. If they do not
match, we go ahead and start treating this index as either dropped or
as a normal index (without a snapshot). Since the overhead of dropping
the index / marking it as a normal index will be incurred very rarely,
we need not be concerned about that performance impact (I suppose).
The overhead of checking applies only to suspicious user-defined
functions (we can have an is_suspicious flag). Rough sketches of these
ideas are at the end of this mail.

> To be a bit more concrete: the typical sort of failure that you could
> get from broken btree operators is failure of transitivity, that is
> the comparators report A < B and B < C for some A, B, C, but do not say
> that A < C when those two values are compared directly. I don't see any
> convenient way to detect that as a byproduct of normal index operations,
> because you wouldn't typically have a reason to make all three
> comparisons in close proximity. Indeed, the searching and sorting
> algorithms do their best to avoid making "redundant" comparisons of that
> kind.

I am not saying that we should do analysis of runtime behavior. I am
saying that we would provide a set of built-in functions which will
always be stable (marked with some flag in pg_proc). When a function is
created, we scan it for calls to any function that is not in the stable
set provided. If the function contains one such call, then it is
declared suspicious of being broken/volatile.

Thanks for the reply,
Gokul.
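
P.S. A few sketches to make the above concrete. First, the update-time
consistency check: a minimal C sketch with invented stand-in types
(none of this is actual PostgreSQL code), showing how the tuple id and
inserting xid carried with an update could be compared against what the
index snapshot recorded:

#include <stdbool.h>
#include <stdint.h>

/* Invented stand-ins for PostgreSQL's ItemPointerData and TransactionId. */
typedef struct
{
    uint32_t block;     /* heap block number */
    uint16_t offset;    /* line pointer offset within the block */
} TupleId;

typedef uint32_t Xid;

typedef struct
{
    TupleId tid;        /* heap tuple the index snapshot points at */
    Xid     insert_xid; /* transaction that inserted that tuple */
} SnapshotEntry;

/*
 * On UPDATE, compare the tuple id and inserting xid of the heap tuple
 * against the index snapshot entry.  A mismatch means the operators may
 * have misbehaved; the caller then demotes the index (drops it, or
 * keeps it as a normal index without the snapshot).
 */
static bool
snapshot_entry_matches(const SnapshotEntry *entry, TupleId tid, Xid xid)
{
    return entry->tid.block == tid.block &&
           entry->tid.offset == tid.offset &&
           entry->insert_xid == xid;
}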
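
Second, the transitivity failure Tom describes, shown with a contrived
comparator (the classic subtraction bug, nothing to do with any real
opclass). Wraparound makes it report A < B and B < C but A > C:

#include <stdio.h>

/*
 * Deliberately broken: subtraction wraps for large differences.  The
 * cast through unsigned avoids signed-overflow UB; converting back to
 * int wraps on all common compilers, which is the bug on display.
 */
static int
broken_cmp(int a, int b)
{
    return (int) ((unsigned int) a - (unsigned int) b);
}

int
main(void)
{
    int a = 2000000000, b = -2000000000, c = 0;

    printf("a < b ? %d\n", broken_cmp(a, b) < 0);   /* 1: says a < b */
    printf("b < c ? %d\n", broken_cmp(b, c) < 0);   /* 1: says b < c */
    printf("a < c ? %d\n", broken_cmp(a, c) < 0);   /* 0: says a > c! */
    return 0;
}

As Tom notes, an index scan would rarely make all three of these
comparisons close together, so nothing at runtime notices.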
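
Third, the create-time scan: a toy version of checking every function
referenced by a new user-defined function against the flagged-stable
built-in set. The string-based lookup and the particular names are
purely illustrative; the real thing would work with pg_proc entries:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Illustrative stable set; in reality a flag in pg_proc, not strings. */
static const char *const stable_builtins[] = {
    "int4lt", "int4eq", "texteq", "btint4cmp"
};

static bool
is_stable_builtin(const char *fname)
{
    size_t  n = sizeof(stable_builtins) / sizeof(stable_builtins[0]);

    for (size_t i = 0; i < n; i++)
    {
        if (strcmp(stable_builtins[i], fname) == 0)
            return true;
    }
    return false;
}

/*
 * At CREATE FUNCTION time: if the new function references anything
 * outside the stable set, mark it is_suspicious.  Only suspicious
 * functions pay the update-time check sketched above.
 */
static bool
function_is_suspicious(const char *const *referenced, size_t nreferenced)
{
    for (size_t i = 0; i < nreferenced; i++)
    {
        if (!is_stable_builtin(referenced[i]))
            return true;
    }
    return false;
}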