David Rowley <dgrowle...@gmail.com> writes:
> Is this really sane?
> As much as I would like to see the 65k limit removed, I just have
> reservations about fixing it in this way. Even if we get all the
> cases fixed in core, there's likely a whole bunch of extensions
> that'll have bugs as a result of this for many years to come.

Maybe. I'm not that concerned about planner hacking: almost all of the
planner is only concerned with pre-setrefs.c representations and will
never see these values. Still, the fact that we had to inject a couple
of explicit IS_SPECIAL_VARNO tests is a bit worrisome. (I'm more
surprised really that noplace in the executor needed it.) FWIW,
experience with those places says that such bugs will be exposed
immediately; it's not like they'd lurk undetected "for years".

You might argue that the int-vs-Index declaration changes are something
that would be much harder to get right, but in reality those are almost
entirely cosmetic. We could make them completely so by changing the
macro to

#define IS_SPECIAL_VARNO(varno)  ((int) (varno) < 0)

so that it'd still do the right thing when applied to a variable
declared as Index. (In the light of morning, I'm not sure why I didn't
do that already.) But we've always been extremely cavalier about
whether RT indexes should be declared as int or Index, so I felt that
standardizing on the former was actually a good side-effect of the
patch.

Anyway, to address your point more directly: as I recall, the main
objection to just increasing the values of these constants was the fear
that it'd bloat bitmapsets containing these values. Now on the one
hand, this patch has proven that noplace in the core code does that
today. On the other hand, there's no certainty that someone might not
try to do that tomorrow (if we don't fix it as per this patch); or
extensions might be doing so.

> I'm really just not sure it's worth all the dev hours fixing the
> fallout. To me, it seems much safer to just bump 65k up to 1m. It'll
> be a while before anyone complains about that.

TBH, if we're to approach it that way, I'd be inclined to go for broke
and raise the values to ~2B. Then (a) we'll be shut of the problem
pretty much permanently, and (b) if someone does try to make a
bitmapset containing these values, hopefully they'll see performance
bad enough to expose the issue immediately.

> It's also not that great to see the number of locations that you
> needed to add run-time checks for negative varnos. That's not going to
> come for free.

Since the test is just "< 0", I pretty much disbelieve that argument.
There are only two such places in the patch, and neither of them is
*that* performance-sensitive.

Anyway, the raise-the-values solution does have the advantage of being
a four-liner, so I can live with it if that's the consensus. But I do
think this way is cleaner in the long run, and I doubt the argument
that it'll create any hard-to-detect bugs.

			regards, tom lane
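
A minimal standalone sketch of why the (int) cast in the macro matters
when it is applied to a variable declared as Index: the Index typedef
below matches src/include/c.h, but the negative INNER_VAR value is only
illustrative of the scheme being discussed, not taken from the patch.

/*
 * Sketch only, not PostgreSQL source.  With an unsigned Index variable,
 * "varno < 0" is always false because of the usual arithmetic
 * conversions; casting to int first recovers the negative special value.
 */
#include <stdio.h>

typedef unsigned int Index;             /* as in src/include/c.h */

#define INNER_VAR   (-1)                /* illustrative negative special varno */

#define IS_SPECIAL_VARNO_NOCAST(varno)  ((varno) < 0)
#define IS_SPECIAL_VARNO(varno)         ((int) (varno) < 0)

int
main(void)
{
    int     vi = INNER_VAR;
    Index   vu = INNER_VAR;             /* stored as 4294967295 */

    printf("%d %d\n", IS_SPECIAL_VARNO_NOCAST(vi), IS_SPECIAL_VARNO(vi));   /* 1 1 */
    printf("%d %d\n", IS_SPECIAL_VARNO_NOCAST(vu), IS_SPECIAL_VARNO(vu));   /* 0 1 */
    return 0;
}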
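
And a back-of-the-envelope sketch of the bitmapset-bloat concern:
nodes/bitmapset.c stores a dense array of bitmapwords covering members
0..max, so a single very large member costs roughly max/8 bytes, which
is why stray special-varno members at ~2B would be noticed immediately
while at 65k they merely bloat the set a little. The member values
below are illustrative, and the struct header is ignored.

/*
 * Sketch only, not PostgreSQL source: estimate the payload size a
 * Bitmapset needs just to hold one large member, given the dense
 * word-array representation in nodes/bitmapset.c.  Assumes 64-bit
 * bitmapwords.
 */
#include <stdio.h>
#include <stdint.h>

#define BITS_PER_BITMAPWORD 64

static uint64_t
bms_payload_bytes(uint64_t member)
{
    uint64_t    nwords = member / BITS_PER_BITMAPWORD + 1;

    return nwords * (BITS_PER_BITMAPWORD / 8);
}

int
main(void)
{
    printf("member 65000:      ~%llu bytes\n",
           (unsigned long long) bms_payload_bytes(65000));       /* ~8 kB */
    printf("member 1000000:    ~%llu bytes\n",
           (unsigned long long) bms_payload_bytes(1000000));     /* ~125 kB */
    printf("member 2000000000: ~%llu bytes\n",
           (unsigned long long) bms_payload_bytes(2000000000));  /* ~250 MB */
    return 0;
}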