One of the patterns I use a lot at work comes up when there are a number of 
options for a certain model, and we want to be able to select just one to 
use.  We also want to be able to change which model is being used partway 
through the overall execution of the code.  In C++, we have an abstract 
base class, and each model fills in the virtual methods with its 
particular algorithm.  Some manager class keeps a pointer to the base 
class, and we can change that pointer to change the model.
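In case it helps to see the C++ pattern concretely, here is a minimal sketch of what I mean (the model names and the `compute` method are just placeholders, not my real code):

```cpp
#include <memory>

// Abstract base class: each model overrides compute() with its algorithm.
struct Model {
    virtual ~Model() = default;
    virtual double compute(double x) const = 0;
};

struct ModelA : Model {
    double compute(double x) const override { return 2.0 * x; }
};

struct ModelB : Model {
    double compute(double x) const override { return x * x; }
};

// The manager holds a pointer to the base class; reassigning the
// pointer swaps the model partway through execution.
struct Manager {
    std::unique_ptr<Model> model = std::make_unique<ModelA>();
    double run(double x) const { return model->compute(x); }
};
```

Every call to `run` goes through a virtual dispatch on whatever `model` currently points to.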

I tried doing this same thing in Julia.  I have a few versions of the code, 
they are all in a gist: 
https://gist.github.com/danielmatz/1a64cded91f996d40b99.

The version slow.jl is the approach I would use in C++.  Since I knew that 
using abstract types directly was a performance hit, I made a second 
version, fast.jl, that instead uses an integer flag to indicate which model 
to use.  This second version is 10x faster and uses less memory.
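The actual Julia code is in the gist, but in C++ terms the difference between the two versions is roughly this (again with placeholder model names): instead of a virtual call through a base-class pointer, the integer-flag version just branches on a plain value.

```cpp
// Integer-flag version, analogous to fast.jl: no virtual call,
// just a switch on an enum to pick the algorithm.
enum class ModelKind { A, B };

double compute(ModelKind kind, double x) {
    switch (kind) {
        case ModelKind::A: return 2.0 * x;  // model A's algorithm
        case ModelKind::B: return x * x;    // model B's algorithm
    }
    return 0.0;  // unreachable for valid kinds
}
```

The flag version gives the compiler a concrete type to work with everywhere, which is my rough guess at where the speedup comes from.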

I'm curious about why this performance difference is so pronounced.  What 
exactly is going on that makes virtualization "expensive"?  The Julia 
dispatch code must be doing something more than how I conceptualize it 
(i.e., as a set of if statements).  I'm just curious what that is.

To try to investigate this, I made yet another version, other.jl (yeah, 
stupid names, sorry), that is like slow.jl, but instead imitates dispatch 
with isa.  Even this is faster, by about 2x!
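Sketched in C++ (placeholder names again), the isa-style version is something like checking the concrete type by hand and branching, rather than letting dispatch do it:

```cpp
struct Model { virtual ~Model() = default; };
struct ModelA : Model {};
struct ModelB : Model {};

// Imitating dispatch by hand, like other.jl does with isa:
// test the concrete type and branch, instead of a virtual call.
double compute(const Model& m, double x) {
    if (dynamic_cast<const ModelA*>(&m)) return 2.0 * x;
    if (dynamic_cast<const ModelB*>(&m)) return x * x;
    return 0.0;  // unknown model type
}
```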

I'd love to learn more about what is going on in these three cases.  I'm 
also curious whether this is something that is expected to get faster in 
the future.

Thanks for the help.  And thanks for being patient with simple-minded 
engineers like myself…  Hopefully I didn't screw up the terminology too 
badly.

Daniel
