"Darryl L. Miles" <darryl-mailingli...@netbauds.net> writes:

> So my next question is what support is there in the various
> formats, technologies and runtime libraries to provide a backwards
> compatible solution, such that a binary from one system when put on
> another can have any hardware incompatibilities detected at the
> soonest opportunity, for example upon execution, soon after
> execution, during DSO loading:

Different architectures have adopted different approaches here. I
believe the most common one is to record the required architecture in
the e_flags field of the ELF header. ARM has adopted a more baroque
solution involving a special attributes section. Either way, the
linker combines this information such that the resulting executable
is marked for the required architecture. The kernel or dynamic linker
may then check the architecture, and give an error if the system does
not support it.

Unfortunately, as far as I know, no such solution was ever adopted
for the x86 family. So this does not help your immediate problem, and
will not help it until somebody implements such an approach for x86.

> The next matter is: has anyone done any studies on the performance
> difference when enablement of newer instructions is possible for
> "general purpose code generation"? I'm not so interested in
> specialized use cases such as codecs, compression, encryption,
> graphics, etc. I consider these specialized use cases for which
> many applications and libraries already have a workable solution by
> "guarding" the execution of instructions that optimize such
> algorithms by checking the CPU runtime support. I'm interested in
> the facts on how much benefit regular code gets from this choice.

I don't know of any studies, but it's clear that automatic
vectorization using the SSE instructions can help a range of
different programs.

Ian
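
P.S. For illustration, a minimal sketch of reading the e_machine and
e_flags fields mentioned above; this is just a toy reader I'm
sketching here, not an existing tool, and it assumes a 64-bit ELF
file and the system <elf.h>:

#include <elf.h>
#include <stdio.h>
#include <string.h>

int
main (int argc, char **argv)
{
  Elf64_Ehdr ehdr;
  FILE *f;

  if (argc != 2)
    {
      fprintf (stderr, "usage: %s ELF-FILE\n", argv[0]);
      return 1;
    }
  f = fopen (argv[1], "rb");
  if (f == NULL || fread (&ehdr, sizeof ehdr, 1, f) != 1)
    {
      perror (argv[1]);
      return 1;
    }
  /* Check the ELF magic number and class before trusting the header
     layout.  */
  if (memcmp (ehdr.e_ident, ELFMAG, SELFMAG) != 0
      || ehdr.e_ident[EI_CLASS] != ELFCLASS64)
    {
      fprintf (stderr, "%s: not a 64-bit ELF file\n", argv[1]);
      fclose (f);
      return 1;
    }
  /* e_machine identifies the architecture (e.g. EM_X86_64); e_flags
     is where targets like MIPS, or ARM before the attributes
     section, record variant requirements.  On x86 it is unused.  */
  printf ("e_machine: %u\n", (unsigned) ehdr.e_machine);
  printf ("e_flags: 0x%x\n", (unsigned) ehdr.e_flags);
  fclose (f);
  return 0;
}

(readelf -h will show you the same fields, decoded.)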
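
P.P.S. The runtime guarding you describe can be written fairly
cleanly with GCC's __builtin_cpu_supports (GCC 4.8 or later); a
minimal sketch, where the do_work_* functions are just hypothetical
stand-ins:

#include <stdio.h>

/* Generic fallback, safe on any x86.  */
static void
do_work_generic (void)
{
  puts ("generic code path");
}

/* Variant that would use SSE4.2 instructions.  */
static void
do_work_sse42 (void)
{
  puts ("SSE4.2 code path");
}

int
main (void)
{
  /* Dispatch on the running CPU's features, so the optimized
     variant only executes where the instructions exist.  */
  if (__builtin_cpu_supports ("sse4.2"))
    do_work_sse42 ();
  else
    do_work_generic ();
  return 0;
}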