Hi there, apologies if this has already been asked, I couldn't find relevant information on the matter.
Recently I've come across some faulty applications (notably, old Minecraft versions) that were calling `glBind*` functions with IDs of -1, resulting in very slow performance due to the hash table defaulting to full lookups. Skimming through the source code I've noticed that a while ago, name reusing/sparse names were implemented using idalloc (see MR mesa/mesa!6600), locked behind a config flag. Indeed, setting the force_gl_names_reuse=true environment variable fixes the performance issue. The comment on the MR that added this feature mentions that this behavior is in line with that of major proprietary drivers, so I was wondering why this was added as an optional flag, instead of making it the default behavior. I did find some potential reasons that could have blocked the transition back when it was merged, though. For example, more recently a commit was made (MR mesa/mesa!30106) lowering virtual memory usage of the idalloc approach from 512MB to 512KB in the worst-case scenario, so perhaps the higher memory footpring might have discouraged adoption at the beginning. With such low requirements now, however, perhaps it's worth reconsidering the default choice to benefit from higher robustness against misuse of the API. Is there anything I missed? Rocco