Integer arithmetic in hardware descriptions

Jan Decaluwe

I am the author of MyHDL, a Python package that turns Python
into a hardware description language (HDL).

Integer arithmetic is very important in hardware design,
but with traditional HDLs such as Verilog and VHDL it is
complicated and confusing. MyHDL has a better solution,
inspired by Python's native integer type, int.
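
Roughly, the idea looks like this (a simplified sketch with made-up
values, assuming MyHDL's intbv type; it is not code from the essay):

from myhdl import intbv

count = intbv(0, min=0, max=16)   # a value constrained to the range [0, 16)

count[:] = 15                     # fine: 15 is within the declared range
print(count + 5)                  # arithmetic behaves like a plain int: 20
count[:] = 16                     # should raise ValueError: out of range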

I have written an essay that explores these issues in
detail:

http://www.jandecaluwe.com/hdldesign/counting.html
 
John Nagle

Jan said:
I am the author of MyHDL, a Python package that turns Python
into a hardware description language (HDL).

Integer arithmetic is very important in hardware design,
but with traditional HDLs such as Verilog and VHDL it is
complicated and confusing. MyHDL has a better solution,
inspired by Python's native integer type, int.

I have written an essay that explores these issues in
detail:

http://www.jandecaluwe.com/hdldesign/counting.html

I went through this exercise many years ago, from a program
verification perspective, and somewhere I have an article entitled
"Type Integer Considered Harmful".

The position I took was:

Integer results should be the same on all platforms.

The basic integer type should be the "range", with an
explicit upper and lower bound. (Pascal had something
like this, called a "subrange", but it wasn't done quite right.)

Violating the range of a variable is an error.

It is the job of the compiler to ensure that the intermediate results
in integer expressions are wide enough to hold all possible values,
given the declared ranges of the user variables.

The last item has to do with expressions like (in Pascal-like syntax)

x,a,b,c: range[0..2^32-1];
x = (a * b) / c;

The compiler has to work out that the intermediate result "(a * b)" needs to be wider than x, a, b, and c.

CPython uses a "bignum" approach to escape this problem, but that has costs.
It's also not an option when compiling to hardware, as with VHDL.
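
To make that concrete in Python terms (a rough sketch; the masking just
emulates a 32-bit intermediate result, and the values are arbitrary):

a, b, c = 100000, 100000, 1000

MASK = 2**32 - 1                  # wrap the intermediate to 32 bits
wrapped = ((a * b) & MASK) // c   # 1410065, because a * b wrapped around
exact = (a * b) // c              # 10000000, with an unbounded intermediate

print(wrapped, exact)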

I used to do proof of correctness work. See this manual, from the early 1980s.

http://www.animats.com/papers/verifier/verifiermanual.pdf

I gave up on this when C came in; the C crowd was so casual about integer
overflow that nobody cared about this level of correctness. Today, of course,
"buffer overflows" are a way of life.

This is really off topic for the group.

John Nagle
 
bearophileHUGS

John Nagle:
I gave up on this when C came in; the C crowd was so casual about integer overflow that nobody cared about this level of correctness. Today, of course, "buffer overflows" are a way of life.

Experience shows that integer overflows are a very common bug. One of
the huge advantages of Python is that it frees the programmer from
having to keep a constant eye on the size of the numbers (so
multi-precision numbers aren't just for cryptography, as I often hear
people claim; they are wrong about that). This helps me avoid a class
of bugs and program faster and in a more relaxed way.

Some C# designers come from Pascal (where overflow is considered an
important thing), and they have added to dotnet ways to find when an
overflow occurs, globally in a program, locally in a piece of code, or
even in a single part of an expression. This is much better than
nothing.
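
You can emulate the per-operation form in plain Python, just to see the
idea (this is only a sketch, not how C#'s checked blocks actually work):

def checked_add(x, y, bits=32):
    # Add two signed fixed-width integers, raising instead of wrapping.
    result = x + y                # Python ints never overflow here
    lo, hi = -2**(bits - 1), 2**(bits - 1) - 1
    if not lo <= result <= hi:
        raise OverflowError("%d + %d does not fit in %d bits" % (x, y, bits))
    return result

try:
    checked_add(2**31 - 1, 1)
except OverflowError as e:
    print(e)                      # 2147483647 + 1 does not fit in 32 bits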

Languages like Lisp (and some other, like some modern forths, etc)
avoid most of the trouble with tagged integers, they are a bit slower
but have the advantages of letting you forget the finite nature of
integers (and in Lisp you can often ask for a fixnum where you need
max performance anyway).

Later, some people have even shown that a sufficiently smart compiler
can remove part of the overflow checks (for example, where it can
infer that a variable will never overflow).

I'm fighting against the C/C++ crowd to add C#-like integer overflow
checks to the D2 language, but with not much luck so far; all they
(Walter, mostly) see is the "lower performance" it may lead to.

Bye,
bearophile
 
JanC

bearophileHUGS:
Some C# designers come from Pascal (where overflow is considered an
important thing), and they have added to dotnet ways to find when an
overflow occurs, globally in a program, locally in a piece of code, or
even in a single part of an expression. This is much better than
nothing.

I'm fighting against the C/C++ crowd to add C#-like integer overflow
checks to the D2 language, but with not much luck so far; all they
(Walter, mostly) see is the "lower performance" it may lead to.

In most "modern" Pascal dialects the overflow checks can be (locally)
enabled or disabled with compiler directives in the source code, so the
"speed issue" is not a real issue in practice...

<http://freepascal.org/docs-html/prog/progsu62.html#x69-670001.1.62>

This allows you to disable those checks in e.g. a very speed-sensitive piece
of code (that you must check for correctness yourself then of course, but
that's better than having to check *all* code).
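
As a rough Python analogy for that kind of local control (only an
analogy, of course, since here the "directive" is just a runtime flag):

from contextlib import contextmanager

OVERFLOW_CHECKS = True            # global default, like a compiler switch

@contextmanager
def overflow_checks(enabled):
    # Temporarily switch the checks on or off for a block of code.
    global OVERFLOW_CHECKS
    saved, OVERFLOW_CHECKS = OVERFLOW_CHECKS, enabled
    try:
        yield
    finally:
        OVERFLOW_CHECKS = saved

def add32(x, y):
    result = x + y
    if OVERFLOW_CHECKS and not -2**31 <= result <= 2**31 - 1:
        raise OverflowError("result does not fit in 32 bits")
    return result

with overflow_checks(False):      # the speed-critical part of the code
    add32(2**31 - 1, 1)           # no check here, no exception raised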
 
bearophileHUGS

JanC:
In most "modern" Pascal dialects the overflow checks can be (locally)
enabled or disabled with compiler directives in the source code,

I think that was possible in somewhat older versions of Pascal-like
languages too (like old Delphi versions, and maybe old Turbo Pascal
versions too).

so the "speed issue" is not a real issue in practice...<

How can I help Walter (the designer and writer of the D language)
understand this? Do you have articles or other ways that support this
point of view?
Patching the D compiler so I can re-run programs and benchmarks with
such checks enabled isn't something I can do yet, probably :-)
(Although there's an LLVM-based D compiler now too, named LDC, and it
already supports the LLVM intrinsics for performing overflow checks.)

Bye,
bearophile
 
JanC

bearophileHUGS:

I think that was possible in somewhat older versions of Pascal-like
languages too (like old Delphi versions, and maybe TurboPascals too).

Yeah, I think Turbo Pascal supported this since v3 or v4 at least.
(By "modern" I meant as opposed to the original Pascal & the other "classic"
Pascals that were available before Turbo Pascal came around.)
How can I help Walter (the designer and writer of the D language)
understand this? Do you have articles or other ways that support this
point of view?

"Premature optimization is the root of all evil in programming" ? ;-)
<http://c2.com/cgi/wiki?PrematureOptimization>

(And D became way too complicated over the years, so I'm not really
interested in it much anymore.)
 
