Typed Racket does not have an IR in the sense you describe, and the Typed Racket internals are not exposed in a way that's intended for general consumption. More generally, I think Typed Racket's type system is not likely to be a good fit for GPU computation.
If you want to give it a try, though, you should take a look at the Typed Racket optimizer and see if the type information there is what you are looking for. You might start by looking at https://github.com/racket/typed-racket/blob/master/typed-racket-lib/typed-racket/optimizer/fixnum.rkt, which implements the Fixnum optimizations. (A toy sketch of the kind of typed code those passes consume follows the quoted message below.)

Sam

On Sun, Dec 16, 2018 at 1:40 AM Neil Van Dyke <n...@neilvandyke.org> wrote:
>
> Is there a specified/stable Typed Racket intermediate representation that has all the type info resolved, and which separate projects could build upon, for other target backends or analysis?
>
> Reason for asking...
>
> I was idly thinking of various ways to do GPU/TPU "supercomputing" from a normal Racket program, or to specify GPU bits in a Racket-ish source language.
>
> One general way involves compiling a normal Racket/-ish program both to the Racket VM and (at least some of the procedures or closures, or parts of same) to a language/IR such as OpenCL's (or one of the other existing/emerging ones). From there, even a simple implementation might be able to do things like automatically run a chunk/extent of an algorithm on the GPU when a static/dynamic heuristic suggests that the overhead of going to the GPU is worthwhile.
>
> Having very little time for this weekend side project, and not wanting to spend it reimplementing type inferencing or annotation... Is there a specified/stable IR of Typed Racket that would be easy to work with for this? (Or would it be easier to do something simple from scratch in Racket (like a syntax transformation-heavy `#lang`), or to try to adapt the Pre-Scheme C target for this purpose?)
>
> (I'm aware we could write the numerical bits in, say, a C-like OpenCL language, and then use the Racket FFI to the OpenCL API to run the GPU bits from a program otherwise coded in Racket. I'm more interested in the problem of compiling a Racket-ish language to run on the GPU.)
>
> (Motivation: I have a new GPU computer toy, "https://www.neilvandyke.org/machine-learning/", and, while some of my old Racket packages are still great for scraping/importing data for what's now called "data science", I then have to switch over to various growing stacks of software tools in other languages. Top-down, I could write wrappers to use some of those other tools from Racket. But I wonder whether there are opportunities being missed from a potential Racket bottom-up, while most people are busy with the big ML/stats toolkits and the algorithmic languages that are currently popular in data science. Also, there is only so much Jupyter Notebook in a Web browser that a person can take before they want to also understand more about the new metal. :)
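P.S. Here is a rough, untested sketch of the kind of fully type-annotated numeric code the optimizer passes work over; the `dot` function, its types, and the sample call are just an illustration I made up, not anything taken from the Typed Racket sources:

  #lang typed/racket
  (require racket/flonum)

  ;; Toy example: the FlVector/Flonum annotations below are the kind of
  ;; fully resolved type information the optimizer consults when it
  ;; rewrites generic arithmetic (+, *) into unsafe flonum operations
  ;; (the flonum analogue of the Fixnum pass linked above).
  (: dot (-> FlVector FlVector Flonum))
  (define (dot a b)
    (for/fold ([acc : Flonum 0.0])
              ([x : Flonum (in-flvector a)]
               [y : Flonum (in-flvector b)])
      (+ acc (* x y))))

  (dot (flvector 1.0 2.0 3.0) (flvector 4.0 5.0 6.0)) ; => 32.0

The optimizer sees this only as fully expanded, type-checked syntax inside the Typed Racket implementation, so reusing that information for an OpenCL-style backend would mean hooking into the internals rather than consuming a stable IR.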