Arithmetic overflow checking

Arne Vajhøj

PS: my approach would be a new keyword like strictfp (which may modify
floating-point semantics compared to not applying it), but one for
modifying integral semantics: arithmetic overflow checks and runtime
checks on casts to narrower types. Just as use of "strictfp" can be
expected to slow down programs, the same would be acceptable for an
integral counterpart.

Somehow I think that sounds very Java'ish.

(and in case anyone is in doubt: that is a good thing when the context
is Java)
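
To make the semantics concrete, here is a minimal sketch of what a
compiler in such a checked-integral mode might emit for "a + b" and
"(byte) v"; the helper names are invented for this illustration:

static int checkedAdd(int a, int b) {
    long r = (long) a + (long) b; // widen so the true sum is exact
    if (r != (int) r) {
        throw new ArithmeticException("integer overflow: " + a + " + " + b);
    }
    return (int) r;
}

static byte checkedNarrowToByte(int v) { // runtime check on narrowing cast
    if (v < Byte.MIN_VALUE || v > Byte.MAX_VALUE) {
        throw new ArithmeticException("value out of byte range: " + v);
    }
    return (byte) v;
}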

Arne
 
Arne Vajhøj

I should point out that I'm not talking about arithmetic overflow
checking or any other specific feature here. I am quite unconcerned
about arithmetic overflow checking in Java, because I see at least two
better solutions than changing the language: either (1) using a
different language, or (2) being aware of your quantities and actually
putting some thought into your design to avoid overflow. These have been
suggested already.
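
As a trivial sketch of (2), assuming two hypothetical int variables
whose product can exceed Integer.MAX_VALUE: widen before the operation
instead of detecting the overflow after the fact:

// price and quantity are ints; the multiplication is performed in
// 64 bits, so it cannot overflow for any pair of int values.
long total = (long) price * quantity;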

I'm talking about backwards-incompatible changes in general. There's no
need to worry about "silent changes to semantics", because by definition
you cannot expect your old source to be compatible with this newest
version. You're told that, and you're told why. You can figure out what
you need to change if you intend to upgrade some codebase, compiler
warnings or no compiler warnings.

Talking about Java specifically, SE or EE, there is a great deal of
advance warning. You're typically going to have close to a decade of
warning that you need to make certain modifications, if a version at
some point introduces backwards incompatibilities. This is more than
enough time. Changes that _are_ backwards incompatible will almost
certainly be important changes that should have been introduced much
earlier anyway, changes that everyone's code should use and would
benefit from, so where's the downside in forcing people to do some
necessary work? People certainly do take advantage of new APIs and
language features that _are_ backwards compatible, and they put in a
lot of work to do that, so how is it somehow a bad thing that they've
got to do a bit of work to upgrade a codebase to a new
backwards-incompatible version?

There are usually plenty of warnings.

But that does not make the cost go away.

And cost is bad.
What percentage of business types do you think care about new language
features and new APIs? Ten percent? Five percent? One percent? What
they care about is whether you can develop their latest pet project for
them. Since we devs can usually cobble something together, albeit with
difficulty, even with ancient libraries and old language versions,
that's how it goes in real life.

Let me give you just one example; not so many months ago I had to go
through a lengthy and arduous process with one client, involving much
analysis and multiple meetings, offering assurances all the way up to
Director of IT level, just to bump up an EclipseLink minor version. To
the typical conservative client any upgrade of any library is a Big
Deal. You certainly won't see them clamouring for the speedy adoption of
a new JDK because it offers support for lambdas.

More breaking of existing code will make it even harder to get
permission to upgrade.

It is exactly to get permission to upgrade that the compatibility
is so important.
As for code working as originally intended, show me the production
application that is substantively defect-free and works as originally
intended. About the worst that will happen with an upgrade is that the
typical app will break in new ways.

Sometimes these new breakages are even beneficial. I'm reminded of when we
moved a complex J2EE app from one old app server over to a newish one
from another vendor. Suddenly dozens of new defects appeared. Turned out
that all of them were real code defects and that the old app server
simply sucked that badly. My feeling is that forced adoption of better
VMs and better servers and better language features,
backwards-incompatible or not, is generally a good thing: it exposes
flaws in codebases. The flaws don't get _created_ by the move; they
were already there.

Most or all code has bugs, but migrating to a new runtime with slightly
different semantics will most likely increase the number of bugs (the
old ones will not go away and new ones will appear).

Arne
 
Gene Wirchenko

[snip]
But I do know that the begin end family of languages are children
of Algol.

Block-structured languages are children of ALGOL. That includes
C et al.

Sincerely,

Gene Wirchenko
 
Gene Wirchenko

[snip]
The wrong decision was made in C; Java was a successor language, and
the correction did not happen in Java (or in a number of other
successor languages, for that matter).

In the context of whether to change Java or not that is utterly irrelevant.

Was the original decision right or wrong?

If that is so, why have there been language changes over the years?

Really, the changes could be made to allow for backward
compatibility.

Sincerely,

Gene Wirchenko
 
Martin Gregorie

On 7/10/2011 11:07 AM, Martin Gregorie wrote:
On Sun, 10 Jul 2011 10:53:09 -0400, David Lamb wrote:

On 08/07/2011 12:30 AM, Eric Sosman wrote:
On 7/7/2011 8:51 PM, Peter Duniho wrote:
[...]
I would not worry about the "simple" or "efficient" criteria.
IMHO, if one is deciding to apply overflow checking to every
computation, one has already abandoned the hope of efficiency.

I've used machines that raised overflow traps "for free,"
...
(The machines I speak of were from forty-odd years ago

When microprocessors started to arrive on the scene, a lot of
old-timey hardware folks said they'd forgotten 30+ years of hardware
design. When operating systems for computers based on said
processors came out, a lot of old-timey software folks said they'd
forgotten 30+ years of operating system design. We seem to still be
suffering the consequences.

That happened not once, but twice.

The first great leap backward was the minicomputer era, when the
likes of the PDP-8 arrived with a single user, single tasking OS
reminiscent of early computers, except they generally had teletypes
instead of banks of switches and flashing lights. By then the better
mainframes were multi-user, multitasking beasts.

Then the first microcomputers arrived in the mid/late '70s. By this
time the better minis had multi-tasking operating systems, but micros
had re-implemented the earliest mini OSes - CP/M was near as dammit
a copy of the old PDP-8 OS (RSTS?) from the late 60s - and the
earliest micros even had switches and flashing lights (KIM-1, IMSAI
8080). By 1980 the minis were running UNIX but the latest and
greatest micros had - drumroll - MS-DOS!



Only twice? Aren't you forgetting "smart" phones? One of the great
advances in Android is (drum roll!) multitasking!!!
They don't count since, unlike minis and micros, their builders didn't
retreat to the techno-stone age, ignore progress made to date, and
build primitive OSes by rubbing (metaphorical) sticks together.

AFAIK all smartphones started at a more advanced level because they
inherited better operating systems. IIRC these all originated on
electronic memo pads such as those made by Psion, HP and Palm, and were
all a lot more advanced than the likes of RSTS, CP/M, Flex09, etc.
Leastwise, I don't think you can consider Symbian and whatever MS was
calling the iPAQ OS at that stage any more primitive than the
contemporary versions of MacOS, OS/2 or even Windows, though admittedly
they were rather behind UNIX and its distant relations such as
OS-9/68K.

If they don't support multi-tasking I would say that they are, in at
least one respect, behind the desktop OSes.
Well, the OSen I quoted, RSTS, CP/M, Flex09 and their contemporaries on
small minicomputers and early microcomputers, are all single-tasking,
and all had worse display handling than the smartphone OSen, because
they were all basically green-screen 24x80 systems.

To my mind the improved graphical interfaces of the early smartphones
(and even of the Palm Pilots) put them ahead on points, and if any are
multitasking then they're streets ahead.

IIRC the first small and cheap multitasking OSes were:

- Microware's OS-9 in 1981, which precedes even the PC DOS incarnation
of MS-DOS and would support multiple users on a 64K 6809 box

- TSC's UniFLEX, which also ran on SWTPc 6809 boxes; similar in
capability to OS-9 but not nearly as flexible or portable

- SCO UNIX, also running on 8086 hardware around the same time;
multi-user operation in around 128 KB of RAM, I think

All of these appeared around the same time and all supported multiple
simultaneous users on 24x80 green-screen terminals such as the VT100,
Hazeltine, Beehive, etc. I think Wyse were later but I could be wrong.

(how important multitasking is on a smartphone is a different
discussion)
Agreed: apart from anything else you'd have problems using more than one
interactive app at a time on those tiny screens. In fact those early
smart phones had to have some rudimentary multitasking ability, at least
equivalent to what the early Macs could do, or the phone couldn't accept
an incoming call if its owner was using an app.

Palm Pilots were and are extremely useful despite having no multitasking
ability whatever.
 
Arne Vajhøj

I agree. After all, it is no worse than "ADD 1 TO AGREECOUNT GIVING
AGREECOUNT.".

I do have a serious concern that the lack of either operator
overloading or complex primitives is one of the barriers to use of Java
for engineering and scientific programming.

The problem is not just the keystrokes for typing the expressions. It is
very important to be able to check that a lengthy expression in a
program is a correct translation of the corresponding expression, in
mathematical notation, in a textbook or paper.

I perfectly understand that.

And implementing basic operators for BigInteger, BigDecimal and Complex
in Java would help that group.
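
To make the gap concrete: a one-line textbook expression such as
b = (x*x + y*y) / (2*x*y) must currently be written out in method
calls, and checking the result against the formula in a paper is
exactly the verification problem described above. A sketch, assuming
BigDecimal variables x and y (the MathContext is needed because the
quotient may not terminate):

// b = (x*x + y*y) / (2*x*y) with java.math.BigDecimal;
// MathContext.DECIMAL64 bounds the division's precision
BigDecimal b = x.multiply(x).add(y.multiply(y))
                .divide(x.multiply(y).multiply(BigDecimal.valueOf(2)),
                        MathContext.DECIMAL64);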

Or maybe they should switch to Scala. Somehow I think that group would
like Scala.

Arne
 
Martin Gregorie

[snip]
But I do know that the begin end family of languages are children of
Algol.

Block-structured languages are children of ALGOL. That includes
C et al.
C was the child of BCPL, a much more primitive language than Algol 60.
For instance BCPL lacked any explicit variable typing: its only variable
types were words and arrays of words. BCPL was the ancestor of all curly
bracket languages.

I have no idea what, if any, relationship there was between BCPL and Algol
60. BCPL was later (1966 vs 1960), but as a quick scan of the Wikipedia
article found no references to Algol, my guess is that there is no
connection between the two.

I once was able to read BCPL well enough to transcribe the General
Purpose Macro Generator, which was written in it, into Algol 60 - the
two languages were similar enough to make this a reasonably simple task.
Elliott 503 Algol was my first computer language.
 
Arne Vajhøj

On Sun, 10 Jul 2011 11:29:39 -0700, Patricia Shanahan wrote:

On 7/10/2011 11:07 AM, Martin Gregorie wrote:
On Sun, 10 Jul 2011 10:53:09 -0400, David Lamb wrote:

On 08/07/2011 12:30 AM, Eric Sosman wrote:
On 7/7/2011 8:51 PM, Peter Duniho wrote:
[...]
I would not worry about the "simple" or "efficient" criteria.
IMHO, if one is deciding to apply overflow checking to every
computation, one has already abandoned the hope of efficiency.

I've used machines that raised overflow traps "for free,"
...
(The machines I speak of were from forty-odd years ago

When microprocessors started to arrive on the scene, a lot of
old-timey hardware folks said they'd forgotten 30+ years of hardware
design. When operating systems for computers based on said
processors came out, a lot of old-timey software folks said they'd
forgotten 30+ years of operating system design. We seem to still be
suffering the consequences.

That happened not once, but twice.

The first great leap backward was the minicomputer era, when the
likes of the PDP-8 arrived with a single user, single tasking OS
reminiscent of early computers, except they generally had teletypes
instead of banks of switches and flashing lights. By then the better
mainframes were multi-user, multitasking beasts.

Then the first microcomputers arrived in the mid/late '70s. By this
time the better minis had multi-tasking operating systems, but micros
had re-implemented the earliest mini OSes - CP/M was near as dammit
a copy of the old PDP-8 OS (RSTS?) from the late 60s - and the
earliest micros even had switches and flashing lights (KIM-1, IMSAI
8080). By 1980 the minis were running UNIX but the latest and
greatest micros had - drumroll - MS-DOS!



Only twice? Aren't you forgetting "smart" phones? One of the great
advances in Android is (drum roll!) multitasking!!!

They don't count since, unlike minis and micros, their builders didn't
retreat to the techno-stone age, ignore progress made to date, and
build primitive OSes by rubbing (metaphorical) sticks together.

AFAIK all smartphones started at a more advanced level because they
inherited better operating systems. IIRC these all originated on
electronic memo pads such as those made by Psion, HP and Palm, and were
all a lot more advanced than the likes of RSTS, CP/M, Flex09, etc.
Leastwise, I don't think you can consider Symbian and whatever MS was
calling the iPAQ OS at that stage any more primitive than the
contemporary versions of MacOS, OS/2 or even Windows, though admittedly
they were rather behind UNIX and its distant relations such as
OS-9/68K.

If they don't support multi-tasking I would say that they are, in at
least one respect, behind the desktop OSes.
Well, the OSen I quoted, RSTS, CP/M, Flex09 and their contemporaries on
small minicomputers and early microcomputers, are all single-tasking,
and all had worse display handling than the smartphone OSen, because
they were all basically green-screen 24x80 systems.

WP7 was introduced last year without real multitasking.

Arne
 
John B. Matthews

Gene Wirchenko said:
[snip]
But I do know that the begin end family of languages are children of
Algol.

Block-structured languages are children of ALGOL. That includes C et
al.

"In its semantic structure Scheme is as closely akin to Algol 60 as to
early Lisps. Algol 60 … lives on in the genes of Scheme and
Pascal."—Alan J. Perlis, Structure and Interpretation of Computer
Programs, Foreword:

<http://mitpress.mit.edu/sicp/full-text/book/book.html>
 
Henderson

Agreed: apart from anything else you'd have problems using more than one
interactive app at a time on those tiny screens. In fact those early
smart phones had to have some rudimentary multitasking ability, at least
equivalent to what the early Macs could do, or the phone couldn't accept
an incoming call if its owner was using an app.

A lot of phone apps save their exact state when you use the phone's menu
button to close them and return to the phone's menus. The effect for
many users is similar to true multitasking, in that they can leave work
in progress in one app, switch to another, and return to the first
afterward and continue where they left off. It's just that they can't
have a background job grinding away while they do something else; if
they have, say, an app rendering an animation and they switch to
another app, the render makes no progress while they're away.
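
On Android, for instance, that save-and-restore pattern is what the
standard Activity lifecycle hooks support. A minimal sketch (the class
and field names are invented for this example):

import android.app.Activity;
import android.os.Bundle;

public class RenderActivity extends Activity {
    private int frameIndex; // work in progress: last frame rendered

    @Override
    protected void onSaveInstanceState(Bundle out) {
        super.onSaveInstanceState(out);
        out.putInt("frameIndex", frameIndex); // persist exact state
    }

    @Override
    protected void onCreate(Bundle saved) {
        super.onCreate(saved);
        if (saved != null) {
            frameIndex = saved.getInt("frameIndex"); // resume where we left off
        }
        // Nothing renders while the activity is off-screen, which is
        // exactly the "no background progress" behaviour described above.
    }
}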

I think later-generation phones are starting to introduce the ability to
have daemon threads.

One irritation with phone apps saving their state is that they can get
wedged and be difficult to unwedge. The Safari browser on the iPhone is
a frequent culprit. There's an obscure reset procedure for the iPhone
that involves powering it off for 15 seconds and doing some magic dance,
maybe not in that order, that will reset apps to their installed states.
You lose work in progress but if Safari, or another app, got b0rked it
will work again. Unfortunately you can't reset just the b0rked app.
There's a still more severe reset that wipes the phone to factory state;
if you do that, better have synched it with your iTunes or you've lost
everything on the phone, possibly including paid apps.
 
Martin Gregorie

A lot of phone apps save their exact state when you use the phone's menu
button to close them and return to the phone's menus. The effect for
many users is similar to true multitasking, in that they can leave work
in progress in one app, switch to another, and return to the first
afterward and continue where they left off. It's just that they can't
have a background job grinding away while they do something else; if
they have, say, an app rendering an animation and they switch to
another app, the render makes no progress while they're away.
That all sounds remarkably like a Palm Pilot: no multi-tasking, instant
focus switch, state preserved for all tasks.
One irritation with phone apps saving their state is that they can get
wedged and be difficult to unwedge.
In that case the Palm Pilot wins: I don't think mine, an ancient
monochrome M100, has ever gotten wedged.
 
Martin Gregorie

I agree. After all, it is no worse than "ADD 1 TO AGREECOUNT GIVING
AGREECOUNT.".
But if it's critical that a number doesn't overflow, adding an "ON SIZE
ERROR" clause to that sentence will catch arithmetic overflow, check
that the result fits in the receiving field, and give the programmer
the ability to specify the corrective action.
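
For comparison, the closest Java analogue (assuming an int agreeCount)
is an explicit check around the addition; since Java 8, Math.addExact
supplies the overflow detection, and the catch block plays the role of
the ON SIZE ERROR clause:

try {
    agreeCount = Math.addExact(agreeCount, 1); // throws on int overflow
} catch (ArithmeticException e) {
    agreeCount = Integer.MAX_VALUE; // corrective action, as ON SIZE ERROR allows
}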
 
Andreas Leitgeb

And "not impossible" != "a satisfying alternative".
No, painful is realizing that your attempts to make something work all
fail because you need to put together two libraries that need different
versions of the same library. The next step is writing a script to
extract the symbols present in library v2 but not in v1, dump those out
as assembly, and copy them over.

I'm not sure what relation to the current discussion you had in mind
when posting your example, but having to resort to wordy prefix
notation instead of infix notation may indeed be comparable to fiddling
with native machine-code libraries instead of jar files.
 
Andreas Leitgeb

Patricia Shanahan said:
After all, it is no worse than "ADD 1 TO AGREECOUNT GIVING AGREECOUNT.".

All Rejoice! There's still a language (like COBOL) that makes simple
expressions more verbose. That really should keep us all happy about
Java. (NOT!)

[about lack of operator overloading for non-primitive arithmetic types]
The problem is not just the keystrokes for typing the expressions.
It is very important to be able to check that a lengthy expression
in a program is a correct translation of the corresponding expression,
in mathematical notation, in a textbook or paper.

Lew?
 
lewbloch

Patricia Shanahan said:
After all, it is no worse than "ADD 1 TO AGREECOUNT GIVING AGREECOUNT.".

All Rejoice! There's still a language (like COBOL) that makes simple
expressions more verbose.  That really should keep us all happy about
Java.  (NOT!)

[about lack of operator overloading for non-primitive arithmetic types]
The problem is not just the keystrokes for typing the expressions.
It is very important to be able to check that a lengthy expression
in a program is a correct translation of the corresponding expression,
in mathematical notation, in a textbook or paper.

Lew?

Yes?
 
Andreas Leitgeb

lewbloch said:

So here are arguments (admittedly not mine) that include ("...not
just...") but also go beyond the complaint about the number of
keystrokes. I was just wondering if you had any expert opinion about
them that you'd care to share.
 
David Lamb

[snip]
But I do know that the begin end family of languages are children
of Algol.

Block-structured languages are children of ALGOL. That includes
C et al.

I don't think "child" is quite the right word -- "inspiration" perhaps.
The first papers about CPL, precursor to BCPL/B/C/C++/Java, came out
in 1963, so definitely several years after Algol 58, but it didn't have
"block structure" in the Algol sense where you could declare procedures
within procedures ad nauseam. I think it's fair to call Pascal and Ada
children of Algol.
 
David Lamb

On the painfulness of non-primitive math:
agreeCount = agreeCount.plus(AgreeCount.ONE);

or if you think ++ is common enough:
agreeCount.increment();
and generally
agreeCount.add(someOtherAgreeCount);
and if you decide to have your operators return "this"
ac0.mul(ac1).plus(ac2); // ac0 = ac0*ac1+ac2
or
ac = (new AgreeCount(ac0)).mul(ac1).plus(ac2); // ac = ac0*ac1+ac2
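
For concreteness, here is a minimal sketch of such a mutating,
this-returning class (AgreeCount is hypothetical, invented for this
example):

public class AgreeCount {
    public static final AgreeCount ONE = new AgreeCount(1);

    private long value;

    public AgreeCount(long value)       { this.value = value; }
    public AgreeCount(AgreeCount other) { this.value = other.value; }

    public AgreeCount increment()        { value++; return this; }
    public AgreeCount plus(AgreeCount o) { value += o.value; return this; }
    public AgreeCount add(AgreeCount o)  { return plus(o); }
    public AgreeCount mul(AgreeCount o)  { value *= o.value; return this; }

    public long longValue() { return value; }
}

Note the trap: because the operators mutate the receiver, a shared
constant like ONE must only ever appear as an argument, never as a
receiver. That hints at why getting this style (or real operator
overloading) right is not trivial.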
 
David Lamb

I'm not sure what relation to the current discussion you had in mind
when posting your example, but having to resort to wordy prefix
notation instead of infix notation may indeed be comparable to fiddling
with native machine-code libraries instead of jar files.

I have done all of those things, and for me, having to deal with method
calls instead of infix (while annoying) is nowhere near as painful as
the library comparison. Plus IMHO getting operator overloading *right*
in a language isn't exactly trivial. It's not horrible, but not trivial
either.
 
