Back in the day, these additional (and potentially optional) computation units 
(e.g., FPU, GPU, vector engine(s)) were generically referred to as 
“co-processors.” The debate over how to interface and interact with them was 
hot … until their access & context became a matter of additional instructions 
in a given “main processor” ISA, which compilers were “taught” to emit; dispatch 
and communication were thus relegated to the main processor’s instruction decode 
& scheduling (somewhat more hidden in hardware).

Oh, except for GPUs - they’re still too complex, messy, and proprietary for 
anyone to add all those masses of instructions and their execution model to a 
“main” processor ISA.

I think it’d be better to call this a co-processor API (or something along 
those lines that’s less specific than “FPU”), and to type the co-processor and 
its required additional context, for generality - just to look ahead a little 
bit.
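
Very roughly, something like the following - all names and fields are 
hypothetical, just to illustrate the “typed co-processor plus per-type 
context” idea, not any existing NetBSD interface:

/* Hypothetical sketch only - names and layout made up for illustration. */

/* Tag identifying what kind of co-processor a context belongs to. */
enum coproc_type {
	COPROC_FPU,
	COPROC_VECTOR,
	COPROC_GPU,
	COPROC_FPGA,
};

/* Generic, typed co-processor context: the tag says how to interpret
 * (and how large) the unit-specific state image is. */
struct coproc_ctx {
	enum coproc_type	 cc_type;	/* which unit this state is for */
	size_t			 cc_size;	/* size of cc_state in bytes */
	void			*cc_state;	/* unit-specific register/state image */
};

/* The "co-processor API": per-type save/restore hooks, so new kinds of
 * units can be added without touching the callers. */
struct coproc_ops {
	enum coproc_type	 co_type;
	void			(*co_save)(struct coproc_ctx *);
	void			(*co_restore)(const struct coproc_ctx *);
};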

Has anyone using NetBSD played with FPGAs embedded with a processor? I know 
that Intel was talking about adding some to its processors after they bought 
Altera in 2015. There is explicit provision for FPGAs in OpenCL, alongside GPUs:

https://en.wikipedia.org/wiki/OpenCL
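
For what it’s worth, a minimal device enumeration looks something like the 
sketch below - the query calls are standard OpenCL, and FPGA boards typically 
report themselves as CL_DEVICE_TYPE_ACCELERATOR (or CL_DEVICE_TYPE_CUSTOM) 
alongside CL_DEVICE_TYPE_GPU devices; the rest is just illustration:

/* Sketch: enumerate OpenCL devices and print their type. */
#include <stdio.h>
#include <CL/cl.h>

int
main(void)
{
	cl_platform_id plats[8];
	cl_uint nplat = 0;

	if (clGetPlatformIDs(8, plats, &nplat) != CL_SUCCESS)
		return 1;

	for (cl_uint p = 0; p < nplat; p++) {
		cl_device_id devs[16];
		cl_uint ndev = 0;

		if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 16,
		    devs, &ndev) != CL_SUCCESS)
			continue;

		for (cl_uint d = 0; d < ndev; d++) {
			char name[128];
			cl_device_type type;

			clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
			    sizeof(name), name, NULL);
			clGetDeviceInfo(devs[d], CL_DEVICE_TYPE,
			    sizeof(type), &type, NULL);

			printf("%s: %s\n", name,
			    (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
			    (type & CL_DEVICE_TYPE_ACCELERATOR) ?
			        "accelerator (e.g., FPGA)" :
			    (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other/custom");
		}
	}
	return 0;
}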

        Erik
