Hi all, I've got an application I'm writing that autogenerates Python code which I then execute with exec(). I know this isn't the best way to run things, and I'm not 100% sure what I really should do. I've had a look through Programming Python and the Python Cookbook, which have given me ideas, but nothing has gelled yet, so I thought I'd put the question to the community. First, though, let me be a little more detailed about what I want to do:
I have a Python module (called pyvisi, but you don't need to know that) which attempts to simplify the writing of scripts for high-performance-computing visualisation applications. It provides a layer between the user and the renderer backend that is actually going to process the code. Effectively, all my app does is autogenerate the code the user would have to write were they savvy with the Python interface to vtk (the Visualization Toolkit). This reduces the effort on the part of the user quite a lot.

What I have currently is my Python module generating the equivalent vtk-python code and then executing it in an exec() call. This is not nice (and potentially very slow), especially if one has to share data around. What I want instead is a separate Python process or thread which just sits there accepting the autogenerated text strings, as if the user were typing them directly at the Python prompt (or equivalent), and returning any error messages generated. I also want to be able to share data between the two processes/threads, so that one doesn't have to turn numerical data into a string which is then turned back into numerical data inside the exec() call (ugly, I know, but it works).

Maybe a picture will help as well (time going down the page):

    Main Proc(1)
        |
        |------------> Renderer(2)
        |                  |
        | <-- Data(3) -->  |
        |                  |
        | more commands    |
        | ---------------> |
        |                  |
        | even more cmds   |
        | ---------------> |
        |                  |
        | render finished  |
        | shut down Rndrr  |
        | <--------------- |
        |
    main proc continues or finishes

(1) the main process, where the Python code destined for the backend is generated
(2) the secondary process, which accepts and runs the code it receives
(3) the data to be visualised, shared between the two processes

Ok, hopefully you get what I want to do now... So, what is the best way to do this? Threads share memory, which is handy for sharing the data around; but how does one send arbitrary commands to be processed by a thread?
One way to do this would be to use pipes, but that means I can't share the data around as easily. I've also seen the Pyro project as a possibility, but I would like to keep this as close to "core Python" as possible. Any help or advice would be really (really!) appreciated.

TIA

Paul

--
Paul Cochrane
Earth Systems Science Computational Centre
University of Queensland, Brisbane, Queensland 4072, Australia
E: cochrane at esscc dot uq dot edu dot au
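For completeness, the pipe alternative mentioned above can also be sketched with just the standard library: a child Python process reads command strings line by line from its stdin and exec()s them. This is a rough illustration under my own assumptions (one statement per line, result reported back via stdout), and it shows the drawback noted in the post, namely that the data itself has to be serialised into the command text:

```python
import subprocess
import sys

# Child program: reads one line of code at a time from stdin and exec()s it,
# sending any error messages to stderr and a final result to stdout.
child_src = r"""
import sys
ns = {}
for line in sys.stdin:
    try:
        exec(line, ns)
    except Exception as exc:
        sys.stderr.write("%s: %s\n" % (type(exc).__name__, exc))
sys.stdout.write(repr(ns.get("result")))
"""

proc = subprocess.Popen([sys.executable, "-c", child_src],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        text=True)

# The data must be turned into text here -- exactly the string round-trip
# the thread-plus-shared-namespace approach avoids.
out, err = proc.communicate("data = [1.0, 2.0, 3.0]\n"
                            "result = sum(data)\n")
```

Note that this naive line-by-line protocol can't handle multi-line statements; a real version would need some framing of the command stream, at which point something like Pyro starts to look less heavyweight by comparison.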