Expanding Integer Range in VHDL

Andy

There is a conversation going on via email about the best way to
expand the range of the current integer type in VHDL in the next
version of the LRM. So I thought I would submit some options to this
group to see what would be preferred.

Background: The current version of the LRM defines the minimum
required range for the type INTEGER as -(2**31 - 1) to 2**31 - 1,
i.e. -2147483647 to 2147483647. Many tool vendors extend that by one
on the negative side, to -2**31, to cover the full 32-bit two's
complement range that most computer hardware supports.

The VHDL standard operators for the INTEGER type all promote their
results to the full range of INTEGER, regardless of the subtypes
supplied as operands. Simulation carries, and stores in memory, the
full 32-bit signed value, regardless of the range defined by the
variable's subtype. Synthesis carries the full 32-bit signed value
through intermediate results, then truncates it for storage in a
variable/signal of a lesser subtype. Synthesis then optimizes out the
intermediate-result bits that do not influence the bits that are
ultimately visible in registers, outputs, etc.
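
As a minimal, self-contained sketch of that behavior (the entity and
subtype names here are hypothetical):

  entity promo_demo is
  end entity promo_demo;

  architecture sim of promo_demo is
  begin
    process
      subtype byte_int is integer range 0 to 255;
      variable a, b : byte_int := 200;
      variable sum  : byte_int;
    begin
      -- a + b is evaluated at the full INTEGER range (400 here); the
      -- assignment then fails byte_int's range check at run time.
      sum := a + b;
      wait;
    end process;
  end architecture sim;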

The way I see it, we have three options for implementing a larger
integer style quantity in VHDL:

1) We could simply extend the minimum required range of INTEGER for
compliant implementations.

2) Or, we could redefine INTEGER as a same-sized subtype of some new,
larger super-integer base type.

3) Or, we could define new base_type(s) that are larger than INTEGER.

Each of these options has side effects in usability and performance,
unless we significantly alter the strong typing mechanisms inherent in
VHDL, which I do not advocate.
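
For illustration, option 2 might look roughly like the sketch below.
The names are hypothetical (INTEGER itself cannot be re-declared in
user code), and whether a tool accepts a 64-bit range today is
implementation-dependent:

  package wide_int_pkg is
    -- Hypothetical wider base type (a 64-bit range shown as an example):
    type long_integer is range -9223372036854775807 to 9223372036854775807;
    -- Under option 2, today's INTEGER would in effect become a subtype:
    subtype integer_like is long_integer range -2147483647 to 2147483647;
  end package wide_int_pkg;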

I am hoping an open discussion of these side effects will lead to a
consensus for the path forward. Of course we also need to discuss what
the range of the new integer should be, since that may weigh in the
discussion of side effects, particularly performance.

Andy
 
Jan Decaluwe

Andy said:
There is a conversation going on via email about the best way to
expand the range of the current integer type in VHDL in the next
version of the LRM. So I thought I would submit some options to this
group to see what would be preferred.

Background: The current version of the LRM defines the minimum
required range for the type INTEGER as -(2**31 - 1) to 2**31 - 1,
i.e. -2147483647 to 2147483647. Many tool vendors extend that by one
on the negative side, to -2**31, to cover the full 32-bit two's
complement range that most computer hardware supports.

The VHDL standard operators for the INTEGER type all promote their
results to the full range of INTEGER, regardless of the subtypes
supplied as operands. Simulation carries, and stores in memory, the
full 32-bit signed value, regardless of the range defined by the
variable's subtype. Synthesis carries the full 32-bit signed value
through intermediate results, then truncates it for storage in a
variable/signal of a lesser subtype. Synthesis then optimizes out the
intermediate-result bits that do not influence the bits that are
ultimately visible in registers, outputs, etc.

The way I see it, we have three options for implementing a larger
integer style quantity in VHDL:

1) We could simply extend the minimum required range of INTEGER for
compliant implementations.

2) Or, we could redefine INTEGER as a same-sized subtype of some new,
larger super-integer base type.

3) Or, we could define new base_type(s) that are larger than INTEGER.

Each of these options has side effects in usability and performance,
unless we significantly alter the strong typing mechanisms inherent in
VHDL, which I do not advocate.

I am hoping an open discussion of these side effects will lead to a
consensus for the path forward. Of course we also need to discuss what
the range of the new integer should be, since that may weigh in the
discussion of side effects, particularly performance.

I would strongly advocate defining an integer base type with
arbitrarily large bounds. This avoids the current situation
where designers have to use unsigned/signed artificially
when they really would like to use a large integer.

For the rationale and a prototype, see:

http://www.jandecaluwe.com/hdldesign/counting.html

Quoting from the essay:

"""
The MyHDL solution can serve as a prototype for other HDLs.
In particular, the definition of VHDL’s integer could be
changed so that its range becomes unbound instead of
implementation-defined. Elaborated designs would always
use integer subtypes with a defined range, but these would
now be able to represent integers with arbitrarily large values.
"""

For backward compatibility, the best would probably be to
define an abstract base type with a new name. The current
integer would become a subtype of this, but with bounds
larger than the currently required minimum. Implementation-defined
bounds should be eradicated in any case.
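
To make the contrast concrete, here is a sketch (entity and signal
names hypothetical) of what designers write today for a 40-bit
counter, with the integer-style intent shown as a comment:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity wide_counter is
    port (clk, rst : in std_logic);
  end entity wide_counter;

  architecture rtl of wide_counter is
    -- Today: forced onto a vector type, because 2**40 - 1 exceeds the
    -- guaranteed INTEGER range.
    signal count : unsigned(39 downto 0) := (others => '0');
    -- With an unbounded integer base type, the intent could instead be:
    --   signal count : new_integer range 0 to 2**40 - 1;
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if rst = '1' then
          count <= (others => '0');
        else
          count <= count + 1;
        end if;
      end if;
    end process;
  end architecture rtl;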

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
Jan Decaluwe

Rob said:
I generally expect INTEGER to simulate quickly. To me, that implies
that the best expanded type size would be 64 bits, so that math
operations on it can still be performed natively on 64 bit processors.

I'd say that, in order to avoid breaking any current assumptions as to
the size of integer, I'd want to have a LONG_INTEGER with a range of
roughly -(2**63 - 1) to 2**63 - 1. Then the LRM could redefine INTEGER
as being a subtype of LONG_INTEGER, but tool vendors could choose
whether to work with it as a subtype implementation or to add dedicated
speed-up code for it at their discretion.

I would call this line of reasoning premature optimization, which,
as we know, is the root of all evil. The evil here is that we are
forced to use low-level types such as signed/unsigned artificially
as soon as the representation's bit width exceeds 32 or 64 bits.

Why should low level computer architecture considerations have
such a drastic influence on my modeling abstractions?

Even a language like Python used to distinguish between
"normal" and "long" integers, until its developers realized that the
distinction is artificial to users and got rid of it.

I don't see a conceptual technical difficulty in having a base
integer with arbitrarily large bounds, and simply requiring that
only subtypes with defined bounds can be used at elaboration time.
A compiler should have little difficulty finding out
which integer subtypes can be optimized for speed, and how.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
Andy

Jan said:
For backward compatibility, the best would probably be to
define an abstract base type with a new name. The current
integer would become a subtype of this, but with bounds
larger than the currently required minimum. Implementation-defined
bounds should be eradicated in any case.

Jan

--
Jan Decaluwe - Resources bvba -http://www.jandecaluwe.com
    Python as a HDL:http://www.myhdl.org
    VHDL development, the modern way:http://www.sigasi.com
    Analog design automation:http://www.mephisto-da.com
    World-class digital design:http://www.easics.com- Hide quoted text -

- Show quoted text -

The current VHDL scalar subtyping mechanism would not permit a
compiler to optimize performance based on the subtype range, because
that information is not available to the operators and subprograms that
receive these operands/arguments. You can query an argument's base
type, but not its subtype or the range thereof. Changing the scalar
subtyping mechanism for just integer would smell like a hack.

Whether or not we attempt to expand the scalar subtyping model in VHDL
is a discussion we should have, but it should be clear that preserving
backwards compatibility with the current integer type and its subtypes
would not permit this.

Andy
 
Andy

Jan said:
Why should low level computer architecture considerations have
such a drastic influence on my modeling abstractions?

In a word: simulation performance!

I understand your point about the virtues of arbitrarily sized
numerical representations, but we already have that, in the fixed
point package, with only a few shortfalls*, one of which is the
horrible simulation performance (compared to integers).

Every modeler is forced to make a choice between scalability and
performance in their model. If you want extreme scalability, you
sacrifice performance, and if you want extreme performance, you
sacrifice scalability. This will be true for as long as we are
modelling things the size of the very machines that must execute these
models (as the hardware performance of the computers running the
simulation increases, so does the complexity of the models we will be
simulating). VHDL is no different from any other modelling and
simulation language in this respect.

What I would like to do is expand the scalability of integer somewhat,
while retaining the significant performance advantage of integers over
vector-based, arbitrary-length representations. I believe that would
put the upper bound on the size of integer somewhere between 64 and
256 bits signed. At 256 bits signed, most real-world structures
(particularly those where performance is required) are representable,
yet the performance would easily exceed that of 4-bit sfixed.

* The fixed point packages support arithmetic completeness with one
exception: ufixed - ufixed = ufixed (why did they bother to expand the
result by one bit, but not make it signed?). Also, the lack of the
ability to override the assignment operator means that manual size
adjustment prior to storage is almost always required. While the
former does not require changes to the language, the latter certainly
does, and is a whole argument in and of itself. A performance middle
ground between integer and vector-based representations could even be
staked out. We could create either an optimized version of the fixed
point package for std.bit (a la numeric_bit), or a fixed point approach
using a base type that is an unconstrained array of integers. Both of
these may be implemented without changing the language; however, the
same restrictions on resizing prior to assignment would apply.
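
As a small sketch of that resize burden (names hypothetical; assumes
the VHDL-2008 ieee.fixed_pkg):

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.fixed_pkg.all;

  entity fixed_resize_demo is
  end entity fixed_resize_demo;

  architecture sketch of fixed_resize_demo is
    signal a, b : ufixed(7 downto -8) := (others => '0');
    signal y    : ufixed(7 downto -8);
  begin
    -- a + b carries an extra integer bit (range 8 downto -8), so it
    -- must be resized explicitly before storage; assignment cannot do
    -- this by itself.
    y <= resize(a + b, y'high, y'low);
  end architecture sketch;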

Andy
 
Tricky

In Ada, you are encouraged to define your own integer (as well as other) types.
There is a hypothetical "universal_integer" type with essentially infinite
range; it is never directly used but it forms the base type for all integer
types (which are subtypes of it with declared ranges).

With this approach, is there not a worry that we get an integer
equivalent of the std_logic_arith package: one that is not an IEEE
standard but becomes a de-facto standard?
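
(For reference, VHDL already inherits that Ada mechanism: a design can
declare its own integer type today, within whatever range the
implementation supports. The name below is hypothetical.)

  package user_int_pkg is
    -- The declaration creates an anonymous base type; "sample_count"
    -- names its constrained subtype.
    type sample_count is range 0 to 1_000_000;
  end package user_int_pkg;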
 
Andy

If a value such as -2**31 is NOT part of the range of integer, then a
calculation that produces it will abort on all conforming systems

Let's be clear: a vendor can conform to the current standard AND
provide integers with more than 32 bits of capacity. The standard
requires that they support AT LEAST the almost-32-bit two's complement
range.

The competitiveness (or apparent lack thereof) in the VHDL tools
business is based on the fact that none of the vendors can expect
their tool to be used to the exclusion of all others. I applaud
Cadence if they support 64 bit integers, but what good does it do them
if no other simulation or synthesis tool also supports it? Can anyone
really use such features in the real world (i.e. the world that
provides most of the users and buyers of these tools) unless they are
supported by other tools too? I suppose that if a user only uses the
64 bit integers in their testbench, and only ever uses the Cadence
simulator that supports them, then he might be tempted to use it. But
most of the big customers do not limit themselves to one simulator
(hey, simulator bugs happen, and you sometimes have to switch
simulators to help find or confirm the bug).

This is a far different thing than a synthesis tool vendor adding new
features to their tool like recognizing certain existing constructs
and patterns in a particular way to create optimal implementations.
They are not doing anything that will harm compatibility with any
simulators, and if it does not work with their competitors' synthesis
tool (yet!), so much the better. To date, the most effective tool in
getting a synthesis vendor to add a feature is to tell them "your
competitor B does it." Maybe that would work with getting a synthesis
tool to adopt 64 bit integers "because the Cadence simulator supports
it", but that tends to be a slow, trickle-down effect for which we
have been waiting far too long.

This is why the simplest way to get >32 bit integers is to simply
expand the minimum required range in the LRM. Every supplier knows
they need to be able to tell their customers that they are compliant
to the standard, and this would just be part of it.

However, whether the simplest way is also the best way, that is the
heart of the matter...

Andy
 
Andy

The concept of "unlimited" already exists with unconstrained vectors,
but the internals of VHDL prohibit assignment between different-sized
subtypes of the same base array type.

The VHDL system provides for the communication of the sizes of the
vector arguments to subprograms (and those of vector ports to
entities). This information is not provided for scalar subtypes. For
example, if I have a subprogram or entity with an integer type formal
argument or port, and the user maps a natural-subtyped signal to that
argument/port, I cannot tell inside the body what the range of that
signal really is.
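
A short sketch of that asymmetry (package and function names
hypothetical):

  library ieee;
  use ieee.numeric_std.all;

  package range_demo_pkg is
    function vec_length (arg : unsigned) return natural;
    function int_high   (arg : integer)  return integer;
  end package range_demo_pkg;

  package body range_demo_pkg is
    function vec_length (arg : unsigned) return natural is
    begin
      return arg'length;  -- the actual's length/range is visible here
    end function vec_length;

    function int_high (arg : integer) return integer is
    begin
      -- Nothing here reveals the caller's subtype (e.g. NATURAL);
      -- integer'high is the base type's bound, not the actual's.
      return integer'high;
    end function int_high;
  end package body range_demo_pkg;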

Perhaps the concept of an unconstrained integer type could borrow the
ability that vectors have to communicate their actual range to an
unconstrained port. It would be sort of a hybrid between a scalar and
an array.

The operators for such an unconstrained integer type should expand the
results to capture the true arithmetic intent of the operator (similar
to the fixed point package, except to include unsigned - unsigned =
signed), rather than silently roll over or truncate, etc.

The assignment operators for this unconstrained integer type should
also automatically resize (with errors if numeric accuracy were
compromised) the RHS expression result to the capacity of the
constrained object on the LHS.

If we had such an "unconstrained integer type", should we expand it to
support fixed point (i.e. unconstrained integer would be a subset and
special case of unconstrained fixed point)? I know Ada has a fixed
point scalar type system (and a syntax for specifying precision).
Perhaps we could borrow that? I have advocated for a while that ufixed
could supplant numeric_std.unsigned quite easily.

This still leaves on the table the existing scalar integer capacity.
Given that current computer architectures are starting to support 128
or even 256 bit primitive data widths, I would expect that requiring
their support now would not hinder simulation performance excessively
on computers that would likely be used in situations requiring high
performance. A limitation to 64 bit two's complement, while maximizing
performance on a wide variety of computers, does not allow 64 bit
unsigned, which would be necessary in many applications. I say again,
even a 32 bit machine can run 256 bit integer arithmetic faster than 8
bit ufixed or unsigned.

Andy
 
