Hi Group,

at work, we are thinking about replacing a legacy application: a home-grown 
scripting language for monitoring and controlling a large experiment. It can 
read live data from sensors, do some simple logic and calculations, send 
commands to other subsystems and finally generate some new signals. The way it 
works is that it gets a chunk of 1 second of data (thousands of signals at 
sample rates from 1 Hz to several kHz), does some simple calculations and 
logic on selected signals, sends some commands and finally computes some 1 Hz 
output signals, all before the next chunk of data arrives. The purpose is 
mainly to monitor other fast processes and adjust things like process gains 
and set-points, as in a SCADA system. (I know about systems like EPICS and 
Tango, but I cannot use those in the near future.) It can be considered soft 
real-time: it is desirable that the computation finishes within the next 
second most of the time, but if the deadline is missed occasionally, nothing 
bad should happen. The current system is hard to maintain and limited in 
capabilities (no advanced math, no sub-functions, ...).
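
To make the workload concrete, one iteration might look roughly like the 
sketch below once the data is available as numpy arrays. The signal, the 
threshold and the gain rule are all made up; it is only meant to show the 
scale of the per-second work.

    import numpy as np

    def one_second_step(fast_signal, current_gain):
        """One hypothetical iteration of the per-chunk logic.

        fast_signal: 1-D numpy array with one second of kHz-rate samples.
        current_gain: gain currently applied by the fast subsystem.
        Returns a 1 Hz output value and a (possibly updated) gain command.
        """
        level = fast_signal.mean()          # simple calculation on a fast channel
        peak = np.abs(fast_signal).max()

        new_gain = current_gain
        if peak > 0.9:                      # simple logic: back off near saturation
            new_gain = current_gain * 0.8

        return level, new_gain              # 1 Hz output signal + command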

I hope I don't have to convince you that Python would be the perfect language 
to replace such a home-grown scripting language, especially since you then get 
all the power of tools like numpy, logging and database interfaces for free. 
Convincing my colleagues might cost some more effort, so I want to write a 
quick (and dirty?) demonstration project. Since all the functions I have to 
interface with (reading and writing live data, sending commands, ...) are 
implemented in C, the solution will require writing both C and Python. I have 
to choose between two architectures:

A) Implement the main program in C. In a loop, get a chunk of data by calling 
the C functions directly, convert the data to Python objects and call an 
embedded Python interpreter that runs one iteration of the user's algorithm. 
When the script finishes, read some variables back from the interpreter and 
then call another C function to write the results. (A rough sketch of the 
Python side of this is the first snippet below.)

B) Implement the main loop in Python. At the beginning of each iteration, call 
a C function to get the new data (using ctypes?), make the result readable 
from Python (memoryview?), run the user's calculation and finally call another 
C function to write the results. (A rough sketch of this is the second snippet 
below.)
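
For option A, only the user's algorithm would live in Python: the C host would 
convert its buffers, import a module like the hypothetical one below (e.g. via 
PyImport_ImportModule / PyObject_CallMethod) and call its step() function once 
per chunk. All names here are made up.

    # user_algorithm.py -- hypothetical module loaded by the embedded interpreter
    import numpy as np

    state = {"gain": 1.0}      # persists between iterations while the module stays loaded

    def step(chunk):
        """Called from the C host once per 1-second chunk.

        chunk: dict mapping signal names to numpy arrays, built by the C host
        after converting its buffers to Python objects.
        Returns a dict of 1 Hz outputs / commands for the C host to write back.
        """
        fast = chunk["sensor_fast"]         # hypothetical signal name
        level = float(fast.mean())

        if np.abs(fast).max() > 0.9:        # simple logic
            state["gain"] *= 0.8

        return {"level_1hz": level, "gain_command": state["gain"]}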
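For option B, the main loop could look roughly like this. The shared library 
name, function names and signatures are placeholders for whatever the real C 
API provides, and numpy.frombuffer is used as a zero-copy view of the C buffer 
(a plain memoryview(buf) would also work).

    import ctypes
    import numpy as np

    # Placeholder names: replace with the real shared library and entry points.
    lib = ctypes.CDLL("./libexperiment.so")
    lib.get_chunk.restype = ctypes.c_int
    lib.get_chunk.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
    lib.write_outputs.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]

    N_SAMPLES = 4096                                # one second of data (hypothetical size)
    buf = (ctypes.c_double * N_SAMPLES)()           # reusable C buffer owned by Python

    while True:
        lib.get_chunk(buf, N_SAMPLES)               # blocks until the next 1-second chunk
        data = np.frombuffer(buf, dtype=np.float64) # zero-copy view of the C buffer

        level = data.mean()                         # the user's calculation goes here

        outputs = (ctypes.c_double * 1)(level)
        lib.write_outputs(outputs, 1)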

Are there any advantages to using one method over the other? Note that I have 
more experience with Python than with C.

Thanks,
Bas