real-time monitoring of proprietary system: embedding python in C or embedding C in python?

Bas

Hi Group,

at work, we are thinking of replacing a legacy application: a home-grown scripting language for monitoring and controlling a large experiment. It is able to read live data from sensors, do some simple logic and calculations, send commands to other subsystems and finally generate some new signals. The way it is implemented is that it gets a chunk of 1 second of data (thousands of signals at sample rates from 1 Hz to several kHz), does some simple calculations on selected signals, does some simple logic, sends some commands and finally computes some 1 Hz output signals, all before the next chunk of data arrives. The purpose is mainly to monitor other fast processes and adjust things like process gains and set-points, like in a SCADA system. (I know about systems like EPICS and Tango, but I cannot use those in the near future.) It can be considered soft real-time: it is desirable that the computation finishes within the next second most of the time, but if the deadline is missed occasionally, nothing bad should happen. The current system is hard to maintain and is limited in capabilities (no advanced math, no sub-functions, ...).

I hope I don't have to convince you that Python would be the perfect language to replace such a home-grown scripting language, especially since you then get all the power of tools like numpy, logging and interfaces to databases for free. Convincing my colleagues might cost some more effort, so I want to write a quick (and dirty?) demonstration project. Since all the functions I have to interface with (read and write of live data, sending commands, ...) are implemented in C, the solution will require writing both C and Python. I have to choose between two architectures:

A) Implement the main program in C. In a loop, get a chunk of data using direct calls to C functions, convert the data to Python variables and call an embedded Python interpreter that runs one iteration of the user's algorithm. When the script finishes, you read some variables from the interpreter and then call another C function to write the results.

B) Implement the main loop in Python. At the beginning of the loop, you call a C function to get new data (using ctypes?), make the result readable from Python (memoryview?), do the user's calculation and finally call another C function to write the result.
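For concreteness, here is a minimal sketch of option B. Everything in it is hypothetical: it assumes the C side is compiled into a shared library libdaq.so exposing two invented functions, get_chunk() and write_outputs(), with the signatures given in the comments.

    import ctypes

    # Hypothetical C API, assumed compiled into libdaq.so:
    #   int get_chunk(double *buf, int n);       /* blocks until the next
    #                                               1-second chunk, fills buf */
    #   int write_outputs(const double *out, int n);
    lib = ctypes.CDLL("./libdaq.so")
    lib.get_chunk.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
    lib.get_chunk.restype = ctypes.c_int
    lib.write_outputs.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
    lib.write_outputs.restype = ctypes.c_int

    N = 4096                       # samples per chunk (made-up size)
    buf = (ctypes.c_double * N)()  # reusable input buffer
    out = (ctypes.c_double * 1)()  # one 1 Hz output signal

    while lib.get_chunk(buf, N) == 0:
        data = memoryview(buf)            # read the C buffer from Python
        out[0] = sum(data) / len(data)    # stand-in for the user's algorithm
        lib.write_outputs(out, 1)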

Are there any advantages to using one method over the other? Note that I have more experience with Python than with C.

Thanks,
Bas
 
rusi

> Since all the functions I have to interface with (read and write of live data, sending
> commands, ...) are implemented in C, the solution will require writing both C and Python.

Standard embedding/extending is OK when the interface is 'thin', i.e. the
number of functions going from the C world to the Python world is not
large.

If you are dealing with a larger bunch of functions (as it seems you
are), you may want to look at SWIG:
http://www.swig.org/tutorial.html
 
Stefan Behnel

Bas, 05.02.2013 16:10:
> at work, we are thinking of replacing a legacy application: a
> home-grown scripting language for monitoring and controlling a large
> experiment. It is able to read live data from sensors, do some simple
> logic and calculations, send commands to other subsystems and finally
> generate some new signals. The way it is implemented is that it gets a
> chunk of 1 second of data (thousands of signals at sample rates from
> 1 Hz to several kHz), does some simple calculations on selected
> signals, does some simple logic, sends some commands and finally
> computes some 1 Hz output signals, all before the next chunk of data
> arrives. The purpose is mainly to monitor other fast processes and
> adjust things like process gains and set-points, like in a SCADA
> system. (I know about systems like EPICS and Tango, but I cannot use
> those in the near future.) It can be considered soft real-time: it is
> desirable that the computation finishes within the next second most of
> the time, but if the deadline is missed occasionally, nothing bad
> should happen. The current system is hard to maintain and is limited
> in capabilities (no advanced math, no sub-functions, ...).
>
> I hope I don't have to convince you that Python would be the perfect
> language to replace such a home-grown scripting language, especially
> since you then get all the power of tools like numpy, logging and
> interfaces to databases for free. Convincing my colleagues might cost
> some more effort, so I want to write a quick (and dirty?) demonstration
> project. Since all the functions I have to interface with (read and
> write of live data, sending commands, ...) are implemented in C, the
> solution will require writing both C and Python.

Or Cython and Python. Cython is a Python dialect that compiles down to C,
so you get native C/C++ interfacing and speed without leaving the wonderful
world of Python or the CPython runtime. It also comes with a lot of cool
features for efficient data processing, including tight integration with NumPy.

And, yes, you'll most likely end up using NumPy in one way or another. It's
basically Python's data integration layer for high-performance and large
data computation.
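To make that NumPy point concrete: a buffer filled on the C side can be
viewed as a NumPy array without copying a single sample. A minimal sketch,
with a made-up ctypes buffer standing in for real acquisition data:

    import ctypes
    import numpy as np

    # Pretend this buffer was just filled by a C acquisition function.
    raw = (ctypes.c_double * 8)(1, 2, 3, 4, 5, 6, 7, 8)

    # Zero-copy view: the array shares memory with the ctypes buffer,
    # so nothing is duplicated for each 1-second chunk.
    signal = np.frombuffer(raw, dtype=np.float64)

    print(signal.mean(), signal.max())   # the full NumPy toolbox applies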

> I have to choose between two architectures:
>
> A) Implement the main program in C. In a loop, get a chunk of data
> using direct calls to C functions, convert the data to Python
> variables and call an embedded Python interpreter that runs one
> iteration of the user's algorithm. When the script finishes, you read
> some variables from the interpreter and then call another C function
> to write the results.
>
> B) Implement the main loop in Python. At the beginning of the loop,
> you call a C function to get new data (using ctypes?), make the result
> readable from Python (memoryview?), do the user's calculation and
> finally call another C function to write the result.
>
> Are there any advantages to using one method over the other? Note that
> I have more experience with Python than with C.

When it comes to interfacing, there is no difference between the two. The
only question is really what starts up the application. This tends to be a
bit simpler if it's Python, but otherwise, it's just a one-time boilerplate
code writing thing.

The usual way to go about it is to write an extension module (commonly in
Cython) and import that from within the running Python runtime. In that
module (a shared library) you implement a wrapper around your C-level API,
often including some more high-level functionality that simplifies and
beautifies the underlying C-ish API for you. Importing that module can be
done using the normal "import" statement from Python code if the Python
runtime is running the app, and is more commonly registered at the C level
if you start Python from C.
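As an illustration of such a wrapper module (using ctypes here instead of
Cython purely to keep the sketch self-contained; the library name and C
function are invented), a file daq.py could hide the C-ish API like this:

    # daq.py -- hypothetical wrapper around the C-level API in libdaq.so
    import ctypes
    import numpy as np

    _lib = ctypes.CDLL("./libdaq.so")
    _lib.get_chunk.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
    _lib.get_chunk.restype = ctypes.c_int

    _CHUNK = 4096                        # samples per chunk (made up)
    _buf = (ctypes.c_double * _CHUNK)()  # reused for every chunk

    def read_chunk():
        """Return the next 1-second chunk as a NumPy array, or None."""
        if _lib.get_chunk(_buf, _CHUNK) != 0:
            return None
        return np.frombuffer(_buf, dtype=np.float64)

User scripts then just do "import daq" and call daq.read_chunk(), without
ever seeing ctypes.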

That being said, I'd encourage you to go for the "Python runs it all"
approach first, because it tends to be less overhead and less fiddling to
get it working.

Stefan
 
Terry Reedy

Option B sounds like it makes your life simpler. Just turn the external
code into a library, use ctypes to call the library and you're done. That
also means reading command line arguments and/or config files can be done
in Python, which keeps the C code simpler.
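For instance (all names here are invented), pushing values from a config
file into the C library could look like:

    import configparser
    import ctypes

    lib = ctypes.CDLL("./libdaq.so")
    lib.set_gain.argtypes = [ctypes.c_int, ctypes.c_double]  # hypothetical setter

    # monitor.ini contains sections such as:
    #   [channel_3]
    #   gain = 2.5
    cfg = configparser.ConfigParser()
    cfg.read("monitor.ini")
    for section in cfg.sections():
        channel = int(section.split("_")[1])
        lib.set_gain(channel, cfg.getfloat(section, "gain"))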

This is exactly how I would start. If this is not fast enough for
production, Cython may help.
 
