2.6, 3.0, and truly independent interpreters


Andy O'Meara

Multiprocessing is written in C, so as for the "less agile" - I don't
see how it's any less agile than what you've talked about.

Sorry for not being more specific there, but by "less agile" I meant
that an app's codebase is less agile if python is an absolute
requirement. If I was told tomorrow that for some reason we had to
drop python and go with something else, it's my job to have chosen a
codebase path/roadmap such that my response back isn't just "well,
we're screwed then." Consider modern PC games. They have huge code
bases that use DirectX and OpenGL and having a roadmap of flexibility
is paramount so packages they choose to use are used in a contained
and hedged fashion. It's a survival tactic for a company not to
entrench themselves in a package or technology if they don't have to
(and that's what I keep trying to raise in the thread--that the python
dev community should embrace development that makes python a leading
candidate for lightweight use). Companies want to build flexible,
powerful codebases that are married to as few components as
possible.
I would argue that the reason most people use threads as opposed to
processes is simply based on "ease of use and entry" (which is ironic,
given how many problems it causes).

No, we're in agreement here -- I was just trying to offer a more
detailed explanation of "ease of use". It's "easy" because memory is
shared and no IPC, serialization, or special allocator code is
required. And as we both agree, it's far from "easy" once those
threads need to interact with each other. But again, my goal here is
to stay on the "embarrassingly easy" parallelization scenarios.
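The "easy" case being described - threads poking at one shared in-memory structure directly, with no IPC, serialization, or special allocator in sight - takes only a few lines of Python (the names here are invented for illustration):

```python
import threading

# Plain in-process data: every thread sees the same object directly --
# no IPC, no pickling, no special allocator.
results = {}
lock = threading.Lock()

def square(n):
    # The "work" is trivial; the point is the direct shared access.
    with lock:
        results[n] = n * n

threads = [threading.Thread(target=square, args=(n,)) for n in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread wrote straight into the shared dict.
assert results == {n: n * n for n in range(8)}
```

The lock is the only concession to concurrency; everything else is ordinary single-process code, which is exactly why this style feels "easy" until the threads start interacting in richer ways.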

I would argue that most of the people taking part in this discussion
are working on "real world" applications - sure, multiprocessing as it
exists today, right now - may not support your use case, but it was
evaluated to fit *many* use cases.

And as I've mentioned, it's a totally great endeavor to be super proud
of. That suite of functionality alone opens some *huge* doors for
python and I hope folks that use it appreciate how much time and
thought undoubtedly had to go into it. You get total props, for
sure, and your work is a huge and unique credit to the community.

Please correct me if I am wrong in understanding what you want: You
are making threads in another language (not via the threading API),
embedding python in those threads, but you want to be able to share
objects/state between those threads, and independent interpreters. You
want to be able to pass state from one interpreter to another via
shared memory (e.g. pointers/contexts/etc).

Example:

ParentAppFoo makes 10 threads (in C)
Each thread gets an itty bitty python interpreter
ParentAppFoo gets a object(video) to render
Rather than marshal that object, you pass a pointer to the object to
the children
You want to pass that pointer to an existing, or newly created itty
bitty python interpreter for mangling
Itty bitty python interpreter passes the object back to a C module via
a pointer/context

If the above is wrong, I think possibly outlining it in the above form
may help people conceptualize it - I really don't think you're talking
about python-level processes or threads.

Yeah, you have it right-on there, with the added fact that the C and
python execution (and data access) are highly intertwined (so getting
and releasing the GIL would have to be happening all over). For
example, consider the dynamics, logic, algorithms, and data
structures associated with image and video effects and image/video
recognition/analysis.


Andy
 

Jesse Noller

Sorry for not being more specific there, but by "less agile" I meant
that an app's codebase is less agile if python is an absolute
requirement. If I was told tomorrow that for some reason we had to
drop python and go with something else, it's my job to have chosen a
codebase path/roadmap such that my response back isn't just "well,
we're screwed then." Consider modern PC games. They have huge code
bases that use DirectX and OpenGL and having a roadmap of flexibility
is paramount so packages they choose to use are used in a contained
and hedged fashion. It's a survival tactic for a company not to
entrench themselves in a package or technology if they don't have to
(and that's what I keep trying to raise in the thread--that the python
dev community should embrace development that makes python a leading
candidate for lightweight use). Companies want to build flexible,
powerful codebases that are married to as few components as
possible.


No, we're in agreement here -- I was just trying to offer a more
detailed explanation of "ease of use". It's "easy" because memory is
shared and no IPC, serialization, or special allocator code is
required. And as we both agree, it's far from "easy" once those
threads need to interact with each other. But again, my goal here is to
stay on the "embarrassingly easy" parallelization scenarios.

That's why when I'm using threads, I stick to Queues. :)
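The Queue discipline Jesse mentions can be sketched minimally - the threads never touch a shared structure directly, they only pass work and results through queues (the task and names here are made up):

```python
import threading
import queue  # "Queue" in the Python 2.x of this thread's era

tasks = queue.Queue()
done = queue.Queue()

def worker():
    while True:
        n = tasks.get()
        if n is None:          # sentinel: shut down this worker
            break
        done.put((n, n * n))   # hand the result back; no shared state

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for n in range(10):
    tasks.put(n)
for _ in workers:
    tasks.put(None)            # one sentinel per worker
for w in workers:
    w.join()

results = dict(done.get() for _ in range(10))
assert results[7] == 49
```

All synchronization is hidden inside the queues, which is why this style sidesteps most of the lock-ordering problems that shared-structure threading invites.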
And as I've mentioned, it's a totally great endeavor to be super proud
of. That suite of functionality alone opens some *huge* doors for
python and I hope folks that use it appreciate how much time and
thought undoubtedly had to go into it. You get total props, for
sure, and your work is a huge and unique credit to the community.

Thanks - I'm just a cheerleader and pusher-into-core, R Oudkerk is the
implementor. He and everyone else who has helped deserve more credit
than me by far.

My main interest, and the reason I brought it up (again) is that I'm
interested in making it better :)
Yeah, you have it right-on there, with the added fact that the C and
python execution (and data access) are highly intertwined (so getting
and releasing the GIL would have to be happening all over). For
example, consider the dynamics, logic, algorithms, and data
structures associated with image and video effects and image/video
recognition/analysis.

okie doke!
 

Paul Boddie

3) Start a new python implementation, let's call it "CPythonES"
[...]

4) Drop python, switch to Lua.

Have you looked at tinypy? I'm not sure about the concurrency aspects
of the implementation, but the developers are not completely
unfamiliar with game development, and there is a certain amount of
influence from Lua:

http://www.tinypy.org/

It might also be a more appropriate starting point than CPython for
experimentation.

Paul
 

Terry Reedy

Andy said:
I don't follow you there... Performance-critical code in Python??

Martin explained what he meant better later:
I tried to list some abbreviated examples in other posts, but here's
some elaboration: ....
The common pattern here is where there's a serious mix of C and python
code and data structures,

I get the feeling that what you are doing is more variegated than what
most others are doing with Python. And the reason is that what you are
doing is apparently not possible with *stock* CPython. Again, it is a
chicken-and-egg type problem.

You might find this of interest from the PyDev list just hours ago.
"""
Hi to all Python developers

For a student project in a course on virtual machines, we are
evaluating the possibility to
experiment with removing the GIL from CPython

We have read the arguments against doing this at
http://www.python.org/doc/faq/library/#can-t-we-get-rid-of-the-global-interpreter-lock.

But we think it might be possible to do this with a different approach
than what has been tried till now.

The main reason for the necessity of the GIL is reference counting.

We believe that most of the slowdown in the free threading
implementation of Greg Stein was due to the need of atomic
refcounting, as this mail seems to confirm:
http://mail.python.org/pipermail/python-ideas/2007-April/000414.html

So we want to change CPython into having a "real" garbage collector -
removing all reference counting, and then the need for locks (or
atomic inc/dec ops) should be
highly alleviated.

Preferably the GC should be a high-performance one, for instance a
generational one.

We believe that it can run quite a lot faster than ref-counting.

Shared data structures would obviously get their lock.
Immutable objects (especially shared global objects, like True, False, None)
would not.

Most of the interpreter structure would be per-thread, at that point.

We do not know how Greg Stein did his locking in the free threads
patch, but as a part of the course we learned there exist much faster
ways of locking than using OS locks (faster for the uncontended case)
that are used in e.g. the HotSpot Java compiler. This might make
"free threading" in python more attractive than some pessimists think.
(http://blogs.sun.com/dave/entry/biased_locking_in_hotspot)
In particular, we are talking about making the uncontended case go fast,
not about the independent part of stack-allocating the mutex
structure, which can only be done and is only needed in Java.

These ideas are similar to the ones used by Linux fast mutexes
(futexes), the implementation of mutexes in NPTL.

We have read this mail thread - so it seems that our idea surfaced,
but Greg didn't completely love it (he wanted to optimize refcounting
instead):
http://mail.python.org/pipermail/python-ideas/2007-April/000436.html

He was not totally negative however. His main objections are about:
- cache locality (He is in our opinion partially right, as seen in some
other paper a while ago - any GC, copying GC in particular, doubles the
amount of used memory, so it's less cache-friendly). But still GCs are
overall competitive or faster than explicit management, and surely
much faster than refcounting.

We know it is the plan for PyPy to work in this way, and also that
Jython and IronPython work like that (using the host VM's GC), so it
seems to be somehow agreeable with the python semantics (perhaps not
really with __del__, but they are not really nice anyway).

Was this ever tried for CPython?

Any other comments, encouragements or warnings on the project-idea?

Best regards: Paolo, Sigurd <[email protected]>
"""

Guido's response
"
It's not that I have any love for the GIL, it just is the best
compromise I could find. I expect that you won't be able to do better,
but I wish you luck anyway.
"

And a bit more explanation from Van Lindberg
"
Just an FYI, these two particular students already introduced themselves
on the PyPy list. Paolo is a masters student with experience in the
Linux kernel; Sigurd is a PhD candidate.

Their professor is Lars Bak, the lead architect of the Google V8
Javascript engine. They spent some time working on V8 in the last couple
months.
"

I agree that you should continue the discussion. Just let Martin ignore
it for a while until you need further input from him.

Terry Jan Reedy
 

Martin v. Löwis

Why do you think so? For C code that is carefully written, the GIL
I don't follow you there. If you're referring to multiprocessing

No, I'm not. I refer to regular, plain, multi-threading.
I don't follow you there... Performance-critical code in Python??

I probably expressed myself incorrectly (being not a native speaker
of English): If you were writing performance-critical code in Python,
you should reconsider (i.e. you should rewrite it in C).

It's not clear whether this calling back into Python is in the
performance-critical path. If it is, then reconsider.
I tried to list some abbreviated examples in other posts, but here's
some elaboration:

- Pixel-level effects and filters, where some filters may use C procs
while others may call back into the interpreter to execute logic --
while some do both, multiple times.

Ok. For a plain C proc, release the GIL before the proc, and reacquire
it afterwards. For a proc that calls into the interpreter:
a) if it is performance-critical, reconsider writing it in C, or
reformulate so that it stops being performance critical (e.g.
through caching)
b) else, reacquire the GIL before calling back into Python, then
release the GIL before continuing the proc
- Image and video analysis/recognition where there's TONS of intricate
data structures and logic. Those data structures and logic are
easiest to develop and maintain in python, but you'll often want to
call back to C procs which will, in turn, want to access Python (as
well as C-level) data structures.

Not sure what the processing is, or what processing you need to do.
The data structures themselves are surely not performance critical
(not being algorithms). If you really run Python algorithms on these
structures, then my approach won't help you (except for the general
recommendation to find some expensive sub-algorithm and rewrite that
in C, so that it both becomes faster and can release the GIL).
It's just not practical to be
locking and unlocking the GIL when you want to operate on python data
structures or call back into python.

This I don't understand. I find that fairly easy to do.
You seem to have placed the burden of proof on my shoulders for an app
to deserve the ability to free-thread when using 3rd party packages,
so how about we just agree it's not an unreasonable desire for a
package (such as python) to support it and move on with the
discussion.

Not at all - I don't want a proof. I just want agreement on Jesse
Noller's claim

# A c-level module, on the other hand, can sidestep/release
# the GIL at will, and go on its merry way and process away.
Well, most others here seem to have a lot different definition of what
qualifies as a "futile" discussion, so how about you allow the rest of
us continue to discuss these issues and possible solutions. And, for
the record, I've said multiple times I'm ready to contribute
monetarily, professionally, and personally, so if that doesn't qualify
as the precursor to "code contributions from one of the participants"
then I don't know WHAT does.

Ok, I apologize for having misunderstood you here.

Regards,
Martin
 

Patrick Stinson

Right. Sounds, and is, easy, if the data is all directly allocated by the
application. But when pieces are allocated by 3rd party libraries, that use
the C-runtime allocator directly, then it becomes more difficult to keep
everything in shared memory.

good point.
One _could_ replace the C-runtime allocator, I suppose, but that could have
some adverse effects on other code, that doesn't need its data to be in
shared memory. So it is somewhat between a rock and a hard place.

ewww scary. mousetraps for sale?
 

Glenn Linderman

If you do not have shared memory: You don't need threads, ergo: You
don't get penalized by the GIL. Threads are only useful when you have
the requirement of large in-memory data structures shared and
modified by a pool of workers.

The whole point of this thread is to talk about large in-memory data
structures that are shared and modified by a pool of workers.

My reference to shared memory was specifically referring to the concept
of sharing memory between processes... a particular OS feature that is
called shared memory.

The need for sharing memory among a pool of workers is still the
premise. Threads do that automatically, without the need for the OS
shared memory feature, that brings with it the need for a special
allocator to allocate memory in the shared memory area vs the rest of
the address space.
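The special allocator Glenn describes is exactly what multiprocessing's shared ctypes provide: the data is placed in an OS shared memory segment rather than the ordinary heap, so a child process can modify it in place (a minimal sketch; the fork context is an assumption for POSIX platforms):

```python
import multiprocessing

ctx = multiprocessing.get_context('fork')   # assumes a POSIX platform

# Allocated by multiprocessing's special allocator in an OS shared
# memory segment, not on the ordinary C-runtime heap.
shared = ctx.Array('i', range(5))

def doubler(arr):
    for i in range(len(arr)):
        arr[i] *= 2          # writes land in the shared segment

p = ctx.Process(target=doubler, args=(shared,))
p.start()
p.join()

assert list(shared) == [0, 2, 4, 6, 8]
```

This works precisely because the Array came from the special allocator; a structure handed back by a third-party C library, allocated on the plain heap, cannot be shared this way - which is the rock-and-a-hard-place point above.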

Not to pick on you, particularly, Jesse, but this particular response
made me finally understand why there has been so much repetition of the
same issues and positions over and over and over in this thread: instead
of comprehending the whole issue, people are responding to small
fragments of it, with opinions that may be perfectly reasonable for that
fragment, but missing the big picture, or the explanation made when the
same issue was raised in a different sub-thread.
 

Patrick Stinson

Speaking of the big picture, is this how it normally works when
someone says "Here's some code and a problem and I'm willing to pay
for a solution?" I've never really walked that path with a project of
this complexity (I guess it's the backwards-compatibility that makes
it confusing), but is this problem just too complex so we have to keep
talking and talking on forum after forum? Afraid to fork? I know I am.
How many people are qualified to tackle Andy's problem? Are all of
them busy or uninterested? Is the current code in a tight spot where
it just can't be fixed without really jabbing that FORK in so deep
that the patch will die when your project does?

Personally I think this problem is super-awesome on the hobbyist's fun
scale. I'd totally take the time to let my patch do the talking but I
haven't read enough of the (2.5) code. So, I resort to simply reading
the newsgroups and python code to better understand the mechanics of
the problem :(
 

Rhamphoryncus

Speaking of the big picture, is this how it normally works when
someone says "Here's some code and a problem and I'm willing to pay
for a solution?" I've never really walked that path with a project of
this complexity (I guess it's the backwards-compatibility that makes
it confusing), but is this problem just too complex so we have to keep
talking and talking on forum after forum? Afraid to fork? I know I am.
How many people are qualified to tackle Andy's problem? Are all of
them busy or uninterested? Is the current code in a tight spot where
it just can't be fixed without really jabbing that FORK in so deep
that the patch will die when your project does?

Personally I think this problem is super-awesome on the hobbyist's fun
scale. I'd totally take the time to let my patch do the talking but I
haven't read enough of the (2.5) code. So, I resort to simply reading
the newsgroups and python code to better understand the mechanics of
the problem :(

The scale of this issue is why so little progress gets made, yes. I
intend to solve it regardless of getting paid (and have been working
on various aspects for quite a while now), but as you can see from
this thread it's very difficult to convince anybody that my approach
is the *right* approach.
 

alex23

I don't follow you there.  If you're referring to multiprocessing, our
concerns are:

- Maturity (am I willing to tell my partners and employees that I'm
betting our future on a brand-new module that imposes significant
restrictions as to how our app operates?)
- Liability (am I ready to invest our resources into lots of new
python module-specific code to find out that a platform that we want
to target isn't supported or has problems?). Like it or not, we're a
company and we have to show sensitivity about new or fringe packages
that make our codebase less agile -- C/C++ continues to win the day in
that department.

I don't follow this...wouldn't both of these concerns be even more
true for modifying the CPython interpreter to provide the
functionality you want?
 

greg

Patrick said:
Speaking of the big picture, is this how it normally works when
someone says "Here's some code and a problem and I'm willing to pay
for a solution?"

In an open-source volunteer context, time is generally more
valuable than money. Most people can't just drop part of
their regular employment temporarily, so unless there's
quite a *lot* of money being offered (enough to offer someone
full-time employment, for example) it doesn't necessarily
make any more man-hours available.
 

Andy O'Meara

I don't follow this...wouldn't both of these concerns be even more
true for modifying the CPython interpreter to provide the
functionality you want?


A great point, for sure. So, basically, the motivation and goal of
this entire thread is to get an understanding of how enthusiastic/
interested the CPython dev community is at the concepts/enhancements
under discussion and for all of us to better understand the root
issues. So my response is basically that it was my intention to seek
official/sanctioned development (and contribute developer direct
support and compensation).

My hope was that the increasing interest and value associated with
flexible, multi-core/"free-thread" support is at a point where there's
a critical mass of CPython developer interest (as indicated by various
serious projects specifically meant to offer this support).
Unfortunately, based on the posts in this thread, it's becoming clear
that the scale of code changes, design changes, and testing that are
necessary in order to offer this support is just too large unless the
entire community is committed to the cause.

Meanwhile, as many posts in the thread have pointed out, issues such
as free threading and easy/clean/compartmentalized use of python are
of rising importance to app developers shopping for an interpreter to
embed. So unless/until CPython offers the flexibility some apps
require as an embedded interpreter, we commercial guys are
unfortunately forced to use alternatives to python. I just think it'd
be a huge win for everyone (app developers, the python dev community,
and python proliferation in general) if python made its way into more
commercial and industrial applications (in an embedded capacity).


Andy
 

lkcl

Their professor is Lars Bak, the lead architect of the Google V8
Javascript engine. They spent some time working on V8 in the last couple
months.

then they will be at home with pyv8 - which is a combination of the
pyjamas python-to-javascript compiler and google's v8 engine.

in pyv8, thanks to v8 (and the judicious application of boost) it's
possible to call out to external c-based modules.

so not only do you get the benefits of the (much) faster execution
speed of v8, along with its garbage collection, but also you still get
access to external modules.

so... their project's done, already!

l.
 

sturlamolden

My hope was that the increasing interest and value associated with
flexible, multi-core/"free-thread" support is at a point where there's
a critical mass of CPython developer interest (as indicated by various
serious projects specifically meant to offer this support).
Unfortunately, based on the posts in this thread, it's becoming clear
that the scale of code changes, design changes, and testing that are
necessary in order to offer this support is just too large unless the
entire community is committed to the cause.

I've been watching this debate from the side line.

First let me say that there are several solutions to the "multicore"
problem. Multiple independent interpreters embedded in a process is
one possibility, but not the only one. Unwillingness to implement this in
CPython does not imply unwillingness to exploit the next generation of
processors.

One thing that should be done, is to make sure the Python interpreter
and standard libraries release the GIL wherever they can.

The multiprocessing package has almost the same API as you would get
from your suggestion, the only difference being that multiple
processes are involved. This is however hidden from the user, and
(almost) hidden from the programmer.

Let's see what multiprocessing can do:

- Independent interpreters? Yes.
- Shared memory? Yes.
- Shared (proxy) objects? Yes.
- Synchronization objects (locks, etc.)? Yes.
- IPC? Yes.
- Queues? Yes.
- API different from threads? Not really.
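Each of those boxes takes only a few lines to tick; for instance, the shared (proxy) objects item via a Manager (a toy example of mine, assuming a POSIX platform for the fork context):

```python
import multiprocessing

ctx = multiprocessing.get_context('fork')   # assumes a POSIX platform

manager = ctx.Manager()
shared_list = manager.list()    # a proxy: the real list lives in the
                                # manager's server process

def record(lst, n):
    lst.append(n * n)           # call is forwarded to the manager over IPC

procs = [ctx.Process(target=record, args=(shared_list, n))
         for n in range(4)]
for p in procs:
    p.start()
for p in procs:
    p.join()

snapshot = sorted(list(shared_list))
assert snapshot == [0, 1, 4, 9]
```

The API reads like ordinary list manipulation even though four separate processes did the appending - which is the "API different from threads? Not really" point.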

Here is one example of what the multiprocessing package can do,
written by yours truly:

http://scipy.org/Cookbook/KDTree

Multicore programming is also more than using more than one thread or
process. There is something called 'load balancing'. If you want to
make efficient use of more than one core, not only must the serial
algorithm be expressed as parallel, you must also take care to
distribute the work evenly. Further, one should avoid as much resource
contention as possible, and avoid races, deadlocks and livelocks.
Java's concurrent package has sophisticated load balancers like the
work-stealing scheduler in ForkJoin. Efficient multicore programming
needs other abstractions than the 'thread' object (cf. what cilk++ is
trying to do). It would certainly be possible to make Python do
something similar. And whether threads or processes are responsible for
the concurrency is not at all important. Today it is easiest to
achieve multicore concurrency on CPython using multiple processes.
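Even plain multiprocessing.Pool illustrates the load-balancing point: a small chunksize lets idle workers grab the next task as they finish, instead of pre-splitting uneven work evenly by count (an illustrative sketch of mine, assuming a POSIX platform for the fork context):

```python
import multiprocessing

def work(n):
    # Deliberately uneven tasks: cost grows with n.
    return sum(range(n * 1000))

ctx = multiprocessing.get_context('fork')   # assumes a POSIX platform
pool = ctx.Pool(2)
# chunksize=1 hands out one task at a time, so whichever worker is
# free picks up the next item -- a crude dynamic load balancer.
results = pool.map(work, range(10), chunksize=1)
pool.close()
pool.join()

assert len(results) == 10
```

A work-stealing scheduler like ForkJoin's is far more sophisticated, but the chunksize knob already shows why distribution strategy, not the thread/process distinction, dominates multicore efficiency.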

The most 'advanced' language for multicore programming today is
Erlang. It uses a 'share-nothing' message-passing strategy. Python can
do the same as Erlang using the Candygram package
(candygram.sourceforge.net). Changing the Candygram package to use
Multiprocessing instead of Python threads is not a major undertaking.

The GIL is not evil by the way. SBCL also has a lock that protects the
compiler. Ruby is getting a GIL.

So all it comes down to is this:

Why do you want multiple independent interpreters in a process, as
opposed to multiple processes?

Even if you did manage to embed multiple interpreters in a process, it
would not give the programmer any benefit over the multiprocessing
package. If you have multiple embedded interpreters, they cannot share
anything. They must communicate serialized objects or use proxy
objects. That is the same thing the multiprocessing package does.
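What "communicate serialized objects" means concretely: each side only ever sees a copy rebuilt from bytes, never the original object (a pickle round-trip sketch; the data is invented):

```python
import pickle

original = {'frames': [1, 2, 3], 'codec': 'raw'}

# Crossing an interpreter or process boundary means a serialize/
# deserialize round-trip; the receiver gets an equal but distinct
# object, not shared state.
wire = pickle.dumps(original)
received = pickle.loads(wire)

assert received == original
assert received is not original   # a copy, not the same object
```

That copy semantics is the crux of the whole disagreement: for small messages it is cheap, for the huge intertwined structures described earlier in the thread it is the cost being objected to.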

So why do you want this particular solution?






S.M.
 

sturlamolden

If you are serious about multicore programming, take a look at:

http://www.cilk.com/

Now if we could make Python do something like that, people would
perhaps start to think about writing Python programs for more than one
processor.
 

Andy O'Meara

First let me say that there are several solutions to the "multicore"
problem. Multiple independent interpreters embedded in a process is
one possibility, but not the only one.

No one disagrees there. However, the motivation of this thread has
been to make people here consider that it's much more preferable for
CPython to have as few restrictions as possible on how it's used. I
think many people here assume that python is the showcase item in
industrial and commercial use, but it's generally just one of many
pieces of machinery that serve the app's function (so "the tail can't
wag the dog" when it comes to app design). Some people in this thread
have made comments such as "make your app run in python" or "change
your app requirements" but in the world of production schedules and
making sure payroll is met, those options just can't happen. People
in the scientific and academic communities have to understand that the
dynamics in commercial software can be *very* different and they
have to show some open-mindedness there.

The multiprocessing package has almost the same API as you would get
from your suggestion, the only difference being that multiple
processes is involved.

As other posts have gone into extensive detail, multiprocessing
unfortunately doesn't handle the massive/complex data structures
situation (see my posts regarding real-time video processing). I'm
not sure if you've followed all the discussion, but multiple processes
is off the table (this is discussed at length, so just flip back into
the thread history).


Andy
 

sturlamolden

People
in the scientific and academic communities have to understand that the
dynamics in commercial software can be *very* different and they
have to show some open-mindedness there.

You are aware that the BDFL's employer is a company called Google? Python
is not just used in academic settings.

Furthermore, I gave you a link to cilk++. This is a simple tool that
allows you to parallelize existing C or C++ software using three small
keywords. This is the kind of tool I believe would be useful. That is
not an academic judgement. It makes it easy to take existing software
and make it run efficiently on multicore processors.


As other posts have gone into extensive detail, multiprocessing
unfortunately doesn't handle the massive/complex data structures
situation (see my posts regarding real-time video processing).

That is something I don't believe. Why can't multiprocessing handle
that? Is using a proxy object out of the question? Is putting the
complex object in shared memory out of the question? Is having
multiple copies of the object out of the question (did you see my kd-
tree example)? Using multiple independent interpreters inside a
process does not make this any easier. For Christ sake, researchers
write global climate models using MPI. And you think a toy problem
like 'real-time video processing' is a show stopper for using multiple
processes.
 

Paul Boddie

If you are serious about multicore programming, take a look at:

http://www.cilk.com/

Now if we could make Python do something like that, people would
perhaps start to think about writing Python programs for more than one
processor.

The language features look a lot like what others have already been
offering for a while: keywords for parallelised constructs (cilk_for)
which are employed by solutions for various languages (C# and various
C++ libraries spring immediately to mind); spawning and synchronisation
are typically supported in existing Python solutions, although
obviously not using language keywords. The more interesting aspects of
the referenced technology seem to be hyperobjects which, as far as I
can tell, are shared global objects, along with the way the work
actually gets distributed and scheduled - something which would
require slashing through the white paper aspects of the referenced
site and actually reading the academic papers associated with the
work.

I've considered doing something like hyperobjects for a while, and
this does fit in somewhat with recent discussions about shared memory
and managing contention for that resource using the communications
channels found in, amongst other solutions, the pprocess module. I
currently have no real motivation to implement this myself, however.

Paul
 

Andy O'Meara

You are aware that the BDFL's employer is a company called Google? Python
is not just used in academic settings.

Turns out I have heard of Google (and how about you be a little more
courteous). If you've read the posts in this thread, you'll note that
the needs outlined in this thread are quite different than the needs
and interests of Google. Note that my point was that python *could*
and *should* be used more in end-user/desktop applications, but it
can't "wag the dog" to use my earlier statement.
Furthermore, I gave you a link to cilk++. This is a simple tool that
allows you to parallelize existing C or C++ software using three small
keywords.

Sorry if it wasn't clear, but we need the features associated with an
embedded interpreter. I checked out cilk++ when you linked it and
although it seems pretty cool, it's not a good fit for us for a number
of reasons. Also, we like the idea of helping support a FOSS project
rather than license a proprietary product (again, to be clear, using
cilk isn't even appropriate for our situation).

That is something I don't believe. Why can't multiprocessing handle
that?

In a few earlier posts, I went into details what's meant there:

http://groups.google.com/group/comp...a1b2/09aaca3d94ee7a04?lnk=st#09aaca3d94ee7a04
http://groups.google.com/group/comp.lang.python/msg/edae2840ab432344
http://groups.google.com/group/comp.lang.python/msg/5be213c31519217b
For Christ sake, researchers
write global climate models using MPI. And you think a toy problem
like 'real-time video processing' is a show stopper for using multiple
processes.

I'm not sure why you're posting this sort of stuff when it seems like
you haven't checked out earlier posts in this thread. Also, you
do yourself and the people here a disservice in the way that you're
speaking to me here. You never know who you're really talking to or
who's reading.


Andy
 

Paul Boddie

I'm not sure why you're posting this sort of stuff when it seems like
you haven't checked out earlier posts in this thread. Also, you
do yourself and the people here a disservice in the way that you're
speaking to me here. You never know who you're really talking to or
who's reading.

I think your remarks about "people in the scientific and academic
communities" went down the wrong way, giving (or perhaps reinforcing)
the impression that such people live carefree lives and write software
unconstrained by external factors.

Anyway, to keep things constructive, I should ask (again) whether you
looked at tinypy [1] and whether that might possibly satisfy your
embedded requirements. As I noted before, the developers might share
your outlook on a number of matters. Otherwise, you might peruse the
list of Python implementations:

http://wiki.python.org/moin/implementation

Paul

[1] http://www.tinypy.org/
 
