The portability sacred cow

jacob navia

If there is a common sacred cow in this newsgroup, it is the
"portability" sacred cow.

In its name, anything will be accepted. Portability, in this context,
means: take the worst system around and make it the common base. Reject
all the improvements that other software environments offer and just use
the worst. This will guarantee that your code runs everywhere.

Now, there is no free lunch, and the portability craziness has some
obvious costs.

For instance, OpenBSD has a security-conscious free() that erases freed
memory before reuse. This would have stopped the Heartbleed bug in
OpenSSL, but it wasn't used.
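
(A minimal sketch of the erase-on-free idea, as a wrapper with its own
size header; OpenBSD actually does this inside the allocator itself, so
the names and bookkeeping here are assumptions for illustration only:)

#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical erase-on-free wrapper, not OpenBSD's real code: a header
   remembers the payload size so the release side can wipe it. */
typedef union { size_t n; max_align_t pad; } hdr; /* keeps alignment */

void *erasing_malloc(size_t n)
{
    hdr *p = malloc(sizeof *p + n);
    if (p == NULL)
        return NULL;
    p->n = n;                /* remember the payload size */
    return p + 1;
}

void erasing_free(void *q)
{
    if (q != NULL) {
        hdr *p = (hdr *)q - 1;
        memset(q, 0, p->n);  /* wipe before the block is reused; OpenBSD
                                provides explicit_bzero() because a plain
                                memset before free can be optimized away */
        free(p);
    }
}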

Why?

Because some systems had a very slow "malloc" function, the OpenSSL
team decided to write their own malloc/free replacements. Also in the
name of portability, all kinds of cruft were left cluttering the code to
cater to systems like VMS, for instance, that nobody uses today.

Because portability has a REAL COST: levelling down to the lowest
common denominator, fostering a conservative attitude (one that fits
this newsgroup very well) that hinders progress in software development,
and causing all kinds of cruft to be added to ALL implementations of a
piece of software to cater for the bugs of some system.

Portability should be weighed against USABILITY and STABILITY, two things
that we never discuss since they tend to be non-portable :)

In my opinion, portability comes AFTER STABILITY and USABILITY, well
after those.

jacob
 
Stefan Ram

jacob navia said:
Because some systems had a very slow "malloc" function, the OpenSSL
team decided to write their own malloc/free replacements. Also in the
name of portability,

What they did was (apparently, and according to your report)
in the name of /efficiency/, not portability.
In my opinion, portability comes AFTER STABILITY and USABILITY, well
after those.

No one forbids you to write programs for a specific platform,
OpenBSD, Windows, Android. Many people did just this and got rich.
 
jacob navia

On 20/04/2014 23:25, Stefan Ram wrote:
What they did was (apparently, and according to your report)
in the name of /efficiency/, not portability.

No. There were systems that had a bad malloc. To be portable to those
systems without an unacceptable efficiency hit, they rewrote malloc.
Porting to bad systems was the basic motivation.
 
jacob navia

On 20/04/2014 23:25, Stefan Ram wrote:
No one forbids you to write programs for a specific platform,
OpenBSD, Windows, Android. Many people did just this and got rich.

Yes, but any mention of code specific to a single system (or to a family
of systems like Unix/Windows) is answered with:

"Go to another discussion group (Windows/POSIX, whatever)."
"Not portable, can't discuss here."

etc.
 
Geoff

jacob navia said:

No. There were systems that had a bad malloc. To be portable to those
systems without an unacceptable efficiency hit, they rewrote malloc.
Porting to bad systems was the basic motivation.

So, there were systems that had a "bad implementation of malloc",
therefore the OpenSSL team wrote their own bad implementation of
malloc and free, neglected to keep security paramount in their
implementation and didn't properly review and test their
implementation. This is not a "portability" problem.
 
Kenny McCormack

So, there were systems that had a "bad implementation of malloc",
therefore the OpenSSL team wrote their own bad implementation of
malloc and free, neglected to keep security paramount in their
implementation and didn't properly review and test their
implementation. This is not a "portability" problem.

It is, when you get right down to it. But I don't expect the trolls of CLC
to get it. Or to ever admit that Jacob might have a valid point.

The point is that user-written stuff isn't ever as good as system-written
stuff (Yes, there are exceptions to this - but they are rare enough to not
be of interest to anyone - other than a CLC troll). So, what Jacob is
saying is that they needlessly re-invented the wheel and the result was as
expected (by those of us who know and expect such things). Essentially,
they fell victim to the "Not Invented Here" syndrome.

As an aside, note that the only CLC-acceptable, portable way to introduce a
delay in a program is like this:

for (i=1; i<=VERYBIGNUMBER; i++);

(And then, of course, you have to figure out a way to make sure this
doesn't get optimized away. But I digress...)
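
(For what it's worth, one way to keep such a loop from being optimized
away is a volatile counter - a sketch, not an endorsement of
busy-waiting, and VERYBIGNUMBER is whatever value your machine needs:)

/* Busy-wait delay the optimizer cannot remove: volatile forces every
   increment to actually happen. Still a poor delay - it burns CPU and
   its duration varies wildly between machines. */
#define VERYBIGNUMBER 100000000UL

void crude_delay(void)
{
    volatile unsigned long i;
    for (i = 1; i <= VERYBIGNUMBER; i++)
        ;
}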
 
Kaz Kylheku

So, there were systems that had a "bad implementation of malloc",
therefore the OpenSSL team wrote their own bad implementation of
malloc and free, neglected to keep security paramount in their
implementation and didn't properly review and test their
implementation. This is not a "portability" problem.

There is always:

#if CONFIG_USE_XMALLOC
void *xmalloc(size_t size); /* use ours */
#else
#define xmalloc malloc /* don't use ours */
#endif

Unconditionally using your custom malloc everywhere because there is
something wrong with some mallocs is silly. (Is that really
the case in OpenSSL?)
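
(The same switch can cover the release side, so the custom pair can be
disabled wholesale - a sketch extending the snippet above, with xfree
as a hypothetical name:)

#if CONFIG_USE_XMALLOC
void *xmalloc(size_t size); /* use ours */
void xfree(void *ptr);      /* ...and our matching release */
#else
#define xmalloc malloc      /* don't use ours */
#define xfree   free        /* the system allocator sees every free */
#endif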
 
Kenny McCormack

Kaz Kylheku said:
There is always:

#if CONFIG_USE_XMALLOC
void *xmalloc(size_t size); /* use ours */
#else
#define xmalloc malloc /* don't use ours */
#endif

Unconditionally using your custom malloc everywhere because there is
something wrong with some mallocs is silly.

It can be a real problem. They had this problem with GAWK for quite a
while, in re: the strftime() function. I don't know if they still do, so
this information may be out-of-date. But anyway...

The GAWK distribution contains its own implementation of strftime(), which
is better (in some ways) and worse (in others) than some of the "built-in"
implementations of strftime(). Which one to use was often a non-trivial
question. The config (autoconf) script attempted to guess, but wasn't
always right (since it wasn't clear that there [always] was a "right" answer).
In particular, on some systems, the distribution-supplied version didn't do
DST calculations right, but did most other things better than the built-in
one.

I would imagine that much the same can be said of malloc() implementations.
(Is that really the case in OpenSSL?)

Who knows? Someone will have to actually research this and post here.
Until then, all we have is Jacob's word.
 
glen herrmannsfeldt

(snip)
Because some systems had a very slow "malloc" function, the OpenSSL
team decided to write their own malloc/free replacements. Also in the
name of portability, all kinds of cruft were left cluttering the code to
cater to systems like VMS, for instance, that nobody uses today.

There has been a lot of discussion about OpenSSL on the VMS newsgroup.

(By people actually running it.)

-- glen
 
Ian Collins

In that case perhaps you should only use OpenBSD. They have nice features
like an async-signal-safe snprintf, whereas Linux snprintf uses heap memory
for the simplest formatting (which means you can't depend on it to never
fail, even when you know it shouldn't, which is why glibc's recommendation
to use snprintf instead of strlcpy is idiotic). And Windows _snprintf is
just a horror because it has all the wrong semantics.

Or use Solaris/Illumos, which has the same async-signal-safe guarantee.
OpenBSD puts a lot of effort into the small stuff, something that very few
other platforms bother with. And yet, nobody is advocating only targeting
OpenBSD. Linux and Windows are horrible targets if your primary concern is
security, because those platforms emphasize backward compatibility and
performance more than OpenBSD does.

Backwards compatibility and Linux in the same sentence, you jest?? A
compiler writer told me porting their compilers between kernel revisions
was almost as bad as porting between distributions.
For example, OpenBSD just recently changed
time_t to 64 bits on all platforms, including 32-bit platforms. That would
be unthinkable on Windows or Linux.

I bet that broke a few things along the way.
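
(Regarding the _snprintf semantics mentioned above: a sketch of a wrapper
that papers over them, assuming a Windows build environment where
_vsnprintf and _vscprintf are available - C99 snprintf always
NUL-terminates and returns the needed length on truncation, which
_snprintf does not:)

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical wrapper giving C99 snprintf semantics on top of Windows
   _vsnprintf, which may neither NUL-terminate on truncation nor report
   the length that would have been written. */
int my_snprintf(char *buf, size_t size, const char *fmt, ...)
{
    va_list ap, ap2;
    int n;

    va_start(ap, fmt);
    va_copy(ap2, ap);
#ifdef _WIN32
    n = _vsnprintf(buf, size, fmt, ap);
    if (size > 0)
        buf[size - 1] = '\0';     /* force termination */
    if (n < 0)
        n = _vscprintf(fmt, ap2); /* needed length, as C99 returns */
#else
    n = vsnprintf(buf, size, fmt, ap);
#endif
    va_end(ap2);
    va_end(ap);
    return n;
}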
 
David Brown

For instance, OpenBSD has a security-conscious free() that erases freed
memory before reuse. This would have stopped the Heartbleed bug in
OpenSSL, but it wasn't used.

That has no serious connection with the Heartbleed bug.

(I am not disagreeing that making their own malloc/free is a bad idea,
and I agree with you that obsessive portability is not good.)

First, even if OpenBSD's free() were better than standard free()
or OpenSSL's free(), it would only have helped with the Heartbleed bug
on OpenBSD - not on the most commonly used platform (Linux).

Secondly, the problem was that the bug let an attacker read blocks of
memory - while some of these areas might have been free'd, and therefore
cleared if they had used OpenBSD's free(), lots of the rest of the space
could still have been in use and still contain data.

At best, using OpenBSD's free() would have reduced the problem somewhat
on a minor platform. It would hardly have "stopped the Heartbleed bug".
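
(To make the point concrete - a minimal sketch of the over-read, with
made-up names, not OpenSSL's actual code: the reply is built by trusting
an attacker-supplied length, so the copy runs past the real payload into
whatever memory follows, freed or live:)

#include <string.h>

/* Hypothetical Heartbleed-style bug: claimed_len comes straight from the
   request and is never checked against the actual payload size, so
   memcpy leaks whatever bytes happen to sit after the payload. */
void build_heartbeat_reply(char *reply, const char *payload,
                           size_t claimed_len)
{
    /* missing: if (claimed_len > actual_payload_len) reject; */
    memcpy(reply, payload, claimed_len);
}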
 
David Brown

OpenBSD's mitigation measures have resulted in the silent remediation of
many such bugs in open source applications.

Perhaps - certainly zeroing out parts of memory that might be leaked
would reduce the information leaked. It will reduce the performance of
the application too. But zeroing free'd memory will not fix all
problems (as noted below).
For example, when OpenBSD recently switched to a 64-bit time_t, many open
source applications broke. The ones which were caught were fixed in the
ports tree and patches submitted upstream.

That's a completely different issue, with no connection to
malloc/free, OpenSSL, Heartbleed, or security leaks. It is true that
some applications use time_t in a way that depends on the size of the
type - and it is fairly obvious that changing the size will therefore
cause trouble with such applications. It is a good thing for such code
to be spotted and changed long before 2038 - and kudos to the OpenBSD
community for working on that. But there is no connection to Heartbleed
here.
(FWIW, I believe NetBSD beat OpenBSD to switching to a 64-bit time_t. The
same process would have happened in their ports tree, as well.)


OpenBSD has the ability to place a guard page after an allocated block, and
to align sub-page-sized allocations so that even a one-byte read overflow
hits the guard page and segfaults the application.

That has all the pros and cons of tools such as Valgrind. It will help
spot errors (either "normal" bugs or security holes) - but at a
significant run-time cost. The OpenSSL developers should have used
tools like this during development and testing, and should have tested
against malformed packets (a simple test against random data would have
triggered this bug).
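
(A minimal sketch of the guard-page idea on POSIX systems - guarded_alloc
is a made-up name, and this is not OpenBSD's actual implementation: map
one page more than needed, revoke access to it, and place the block flush
against it so even a one-byte overflow faults:)

#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical guard-page allocator: the last page of the mapping is
   made inaccessible and the block ends exactly where that page begins,
   so any read or write past the end segfaults immediately.
   (Alignment of the returned pointer is ignored in this sketch.) */
void *guarded_alloc(size_t n)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t total = ((n + page - 1) / page + 1) * page; /* data + guard */
    char *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;
    if (mprotect(base + total - page, page, PROT_NONE) != 0) {
        munmap(base, total);
        return NULL;
    }
    return base + total - page - n; /* block ends at the guard page */
}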
 
Malcolm McLean

If there is a common sacred cow in this newsgroup, it is the
"portability" sacred cow.

Portability should be weighed against USABILITY and STABILITY, two things
that we never discuss since they tend to be non-portable :)

In my opinion, portability comes AFTER STABILITY and USABILITY, well
after those.
My view is that you need to separate programs into the bit-shuffling section and the IO section.
Some parts of the program rearrange bits in memory; other parts manipulate output devices.

Any code that is interesting in a computer science, mathematical, logical, artistic or other sense
will be bit-shuffling code. That should be written portably: there's no reason to use any non-portable
constructs, and no reason to be interested in what type of processor you're running on. (OK,
I haven't addressed parallelisation and other forms of hardware acceleration; it's a general scheme.)

The code that does IO is essentially a job for a hacker. It's not necessarily easy to write, but the
difficulties aren't of a fundamental order. It's a case of one human understanding a system designed
by another human and knowing how to interact with it. A lot of it isn't portable, and portability means
something else. It means devising common interfaces, not agreeing on a common language for
specifying algorithms.

Now what do we need to specify an algorithm? Essentially, a human-usable Turing-complete system:
a way of getting an arbitrary-sized memory buffer, a way of reading and writing to it, conditional
jumps, and, to make things usable, arithmetical and logical operations on scalars and a way of
dividing up code into human-meaningful units or modules. But that's it. Just a thin layer of syntax
over these.
 
Kenny McCormack

B******t.

Issues much?

Maybe you need to check into this:

http://en.wikipedia.org/wiki/Anger_management

--
Both the leader of the Mormon Church and the leader of the Catholic
church claim infallibility. Is it any surprise that these two orgs
revile each other? Anybody with any sense knows that 80-yr old codgers
are hardly infallible. Some codgers this age do well to find the crapper
in time and remember to zip-up.
 
jacob navia

On 22/04/2014 11:52, Malcolm McLean wrote:
Any code that is interesting in a computer science, mathematical, logical, artistic or other sense
will be bit-shuffling code.

When you see the big picture, this is in general true.

Problem is, even there you need physical resources that will change
from machine to machine.

Can you read all your data into RAM at once?

Or should you redesign your algorithm to process in variable chunks to
cater for the bad systems that do not have so much RAM?

Should you assume a floating point coprocessor?

Or should you use fixed point for representing your data?

Etc.

Decisions, decisions...
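
(The fixed-point alternative, for instance, amounts to scaling integers -
a Q16.16 sketch, with made-up names, assuming a 64-bit intermediate is
available for the product and an arithmetic right shift for negative
values, as on mainstream compilers:)

#include <stdint.h>

/* Q16.16 fixed point: 16 integer bits, 16 fractional bits.
   The multiply needs a 64-bit intermediate to avoid overflow.
   (No overflow checks in this sketch.) */
typedef int32_t fix16;

#define FIX16_ONE 65536

static fix16 fix16_from_int(int n)
{
    return (fix16)(n * FIX16_ONE);
}

static fix16 fix16_mul(fix16 a, fix16 b)
{
    return (fix16)(((int64_t)a * b) >> 16);
}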

What I propose is to keep those "portability" considerations in mind but
to make them a secondary concern, addressed AFTER the usability and
stability issues are solved.
 
Malcolm McLean

And there's also no reason whatsoever to think for one minute it needs
to be portable.

You do go on.
It's of fundamental importance and, as the comments show, something that many people haven't
understood.
There are various things you might want to do. You might want to decide the best move in a chess
game. You might want to find the roots of a quintic equation. You might want to rotate an image
without creating any new pixel values. None of these things is entirely trivial to do. You will probably
have target hardware, an operating system, etc. that you want the program to run on immediately. But
it's foolish to write any of these things in a platform-specific way. They're bit-shuffling operations.
The only reason for not writing portably is that you have hardware acceleration, like a dedicated
chess-position-checking chip. But that's unusual.
And IO is applicable to any "Computer Science" if not more so.
Not really.
 
Johannes Bauer

For instance, OpenBSD has a security-conscious free() that erases freed
memory before reuse. This would have stopped the Heartbleed bug in
OpenSSL, but it wasn't used.

Why do you spread such drivel?

OpenBSD's allocator is so slow that OpenSSL has its own memory
allocator. OpenSSL only allocates one huge chunk of memory from the OS
and then does its own memory management. A free() that is security
conscious would have helped exactly NOTHING because of the constraints.
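
(For illustration, that kind of caching works roughly like this - a
sketch with made-up names, a single size class, and no thread safety,
not OpenSSL's actual code:)

#include <stdlib.h>

/* Hypothetical freelist cache: released buffers go onto a private list
   and are handed out again later, so libc's free() - and any protection
   built into it - never sees them, old contents intact. */
struct node { struct node *next; };
static struct node *freelist;

void *cached_alloc(size_t n) /* assumes n >= sizeof(struct node) */
{
    if (freelist != NULL) {
        struct node *p = freelist;
        freelist = p->next;
        return p;            /* previous contents still in place */
    }
    return malloc(n);
}

void cached_free(void *p)
{
    struct node *q = p;
    q->next = freelist;      /* cached, never returned to the system */
    freelist = q;
}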

If you want to spread lies, at least choose lies that aren't immediately
recognizable as such.

Regards,
Johannes

--
At least not publicly!
Ah, the newest and to this day most ingenious trick of our great
cosmologists: the secret prediction.
- Karl Kaos on Rüdiger Thomas in dsa <[email protected]>
 
jacob navia

On 22/04/2014 16:58, Johannes Bauer wrote:
Why do you spread such drivel?

Because I have sources for my information, and they are trustworthy.
OpenBSD's allocator is so slow that OpenSSL has its own memory
allocator. OpenSSL only allocates one huge chunk of memory from the OS
and then does its own memory management. A free() that is security
conscious would have helped exactly NOTHING because of the constraints.

You are wrong.
1) It wasn't OpenBSD's allocator that was slow. It was another system.
2) Well, it would have exposed the bug long ago. Here is the mail from
Theo de Raadt, the lead developer of OpenBSD, about this issue:

<begin quote>
So years ago we added exploit mitigation countermeasures to libc
malloc and mmap, so that a variety of bugs can be exposed. Such
memory accesses will cause an immediate crash, or even a core dump,
then the bug can be analyzed, and fixed forever.

Some other debugging toolkits get them too. To a large extent these
come with almost no performance cost.

But around that time OpenSSL adds a wrapper around malloc & free so
that the library will cache memory on its own, and not free it to the
protective malloc.

You can find the comment in their sources ...

#ifndef OPENSSL_NO_BUF_FREELISTS
/* On some platforms, malloc() performance is bad enough that you
can't just

OH, because SOME platforms have slow performance, it means even if you
build protective technology into malloc() and free(), it will be
ineffective. On ALL PLATFORMS, because that option is the default,
and Ted's tests show you can't turn it off because they haven't tested
without it in ages.

So then a bug shows up which leaks the content of memory mishandled by
that layer. If the memory had been properly returned via free, it
would likely have been handed to munmap, and triggered a daemon crash
instead of leaking your keys.

OpenSSL is not developed by a responsible team.
<end quote>

You see now?
If you want to spread lies, at least choose lies that aren't immediately
recognizable as such.

I am not spreading lies. I can only hope that you are simply not informed
about what is happening.
 
