That does not change the fact that you will still have FFI overhead.

For example, PyOpenGL 2 was implemented using the Python C API as a wrapper
around OpenGL, while PyOpenGL 3 was implemented using ctypes, the official FFI
of Python (CPython in particular). Version 3 is 2 to 3 times slower than version 2.

But then an FFI also has a much lower maintenance cost, which is the reason why
they moved to an FFI implementation.

It's really a difficult problem to solve and there is no blue pill solution.

In my case I don't really need to, because Unreal already has a nice visual
scripting language, but I want to use Pharo with Unreal. FFI is not an option
since Unreal builds its own executables and I don't want to hack the engine,
even though it's open source.

So I even considered making my own language on top of Pharo that would
compile to C++. But obviously that is a very big effort and would mean
sacrificing a lot of the nice things about Pharo syntax.

So now I fall back to my initial "lazy" solution of shared memory files:
share memory between Pharo and Unreal, let those two processes communicate
via the shared memory, and then save the shared memory to a file so I don't
lose live state. This is relatively easy to do since the OS already supports
and provides such functionality.
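
To make the idea concrete, here is a minimal sketch assuming a POSIX system
(shm_open/mmap); the region name and size are made up, and on Windows the
Unreal side would use the equivalent file-mapping APIs instead. The Pharo side
would open the same named region through its own primitives or FFI.

// Minimal sketch of the shared-memory idea, assuming POSIX shm_open/mmap.
// The name and size below are hypothetical, not anything Unreal or Pharo define.
#include <fcntl.h>      // shm_open, O_* flags
#include <sys/mman.h>   // mmap, munmap
#include <sys/stat.h>   // mode constants
#include <unistd.h>     // ftruncate, close
#include <cstring>
#include <cstdio>

int main() {
    const char*  kShmName = "/pharo_unreal_bridge";  // hypothetical region name
    const size_t kShmSize = 1 << 20;                 // 1 MiB region

    // Create (or open) the named shared-memory object and size it.
    int fd = shm_open(kShmName, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, kShmSize) != 0) { perror("ftruncate"); return 1; }

    // Map it into this process; the other process maps the same name.
    void* mem = mmap(nullptr, kShmSize, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    // Both sides can now read and write the region directly.
    std::strcpy(static_cast<char*>(mem), "hello from the C++ side");

    // To keep live state across runs, the whole region can simply be dumped
    // to an ordinary file and reloaded on startup.
    if (FILE* f = std::fopen("bridge_snapshot.bin", "wb")) {
        std::fwrite(mem, 1, kShmSize, f);
        std::fclose(f);
    }

    munmap(mem, kShmSize);
    close(fd);
    return 0;
}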

The pro is that you can still use Pharo as it is and have access to any C++
functionality.
The con is that you have to do all the manual work of mapping Pharo messages to
C++ functionality, both on the Pharo side but mainly on the C++ side.

However, your post made me wonder whether I could still invent a programming
language like LLVM IR, something that would basically be a protocol of
communication between Pharo and C++, and that could even support onboard
features of Unreal like GC, reflection, hot code reloading, Blueprints, the
game editor, etc.
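
Just to make that concrete, here is one hypothetical shape such a protocol
could take when written into the shared region: a small versioned command
format that the Pharo side emits and the C++/Unreal side dispatches. None of
these names (Opcode, BridgeMessage, the payload encoding) come from Pharo or
Unreal; they are illustration only.

// Hypothetical sketch of an IR-like bridge protocol; illustration only.
#include <cstdint>
#include <cstring>
#include <iostream>

enum class Opcode : uint16_t {
    CallMethod  = 1,  // invoke a named C++/Unreal function
    ReloadCode  = 2,  // ask the engine to hot-reload a module
    QueryObject = 3,  // reflection-style lookup of an object or property
};

// Fixed-size header followed by an opcode-specific payload, so both sides
// can parse messages without sharing C++ object layouts.
struct BridgeMessage {
    uint16_t version;      // protocol version, for forward compatibility
    Opcode   op;           // what the receiver should do
    uint32_t payloadSize;  // number of payload bytes that follow
    char     payload[256]; // e.g. a selector name plus encoded arguments
};

// The receiving side (standing in for the Unreal C++ side) dispatches on the
// opcode, much like an interpreter for a tiny intermediate language.
void dispatch(const BridgeMessage& msg) {
    switch (msg.op) {
        case Opcode::CallMethod:
            std::cout << "call: " << msg.payload << "\n";
            break;
        case Opcode::ReloadCode:
            std::cout << "hot reload requested\n";
            break;
        case Opcode::QueryObject:
            std::cout << "reflection query: " << msg.payload << "\n";
            break;
    }
}

int main() {
    BridgeMessage msg{};
    msg.version = 1;
    msg.op = Opcode::CallMethod;
    std::strcpy(msg.payload, "SpawnActor name: 'Cube' at: 0@0@0");
    msg.payloadSize = static_cast<uint32_t>(std::strlen(msg.payload));
    dispatch(msg);  // in practice msg would live in the shared region
    return 0;
}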

It looks like that will be the direction I will be going after all. It seems
much simpler, easier and more practical than anything else.


