Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded


Colin Paul Gloster

I post from to .

Stephen A. Leake wrote in on
for the benefit of Mike Silva:

"[..]

[..] FPGA development relies heavily on simulation,
which does not require real hardware."


A warning to Mike Silva: supposed simulators for languages chiefly
used for FPGAs often behave differently to how source code will
actually behave when synthesized (implemented) on a field programmable
gate array. This is true of cheap and expensive tools promoted as
simulators.


Stephen A. Leake wrote in on
:

"[..]

Someone who can do both Ada and VHDL would be a very valuable person!"


A euphemistic way of saying that someone who could not pick up one of
Ada and VHDL after a few days with literature and tools is probably
not good at the other one either.


In timestamped Sat, 24 Feb 2007 22:10:22 GMT, "Dr. Adrian Wrigley"
<[email protected]> posted:
"I'm always surprised that VHDL engineers are not more open to Ada given
how close the syntax is. The standard joke where I work is that VHDL is
just like Ada except the capslock is always stuck on and comments are
apparently forbidden ;)"


Just as one can find expert C++ programmers who lament what C++ code
from typical C++ programmers and typical C++ education looks like, one
can find expert VHDL interlocutors who are not fond of typical VHDL
users. Such people are incompetent with VHDL because they are forced
to use VHDL and can get away with not being good at it, so they are
not likely to really want to use Ada either.


"I came to Ada from VHDL. When I first encountered VHDL, my first thought
was "Wow! You can say what you mean clearly". Features like user
defined types (ranges, enumerations, modular types,"


VHDL does not have Ada 2005's and Ada 95's mod types (
WWW.AdaIC.com/standards/05rm/html/RM-3-5-4.html
), unless I am mistaken.


" multi-dimensional
arrays) gave a feeling of clarity and integrity absent from software
development languages."


Apparently, for many years, VHDL subset tools (the only kind of VHDL
tools which exist, so even if the language were excellent, one would
need to restrict one's code to what is supported) did not support
multi-dimensional arrays.


"So when I found that you could get the same benefits of integrity
in software development from a freely available compiler, it didn't
take long to realize what I'd been missing! Ada is without doubt
the language at the pinnacle of software engineering, and infinitely
preferable to Pascal, C++ or Modula 3 as a first language in teaching."


Ada is good only in the relative sense that it is less bad than
something which is worse than Ada, which is in no way the same as an
absolute statement that Ada is not bad. Unfortunately Ada (including
VHDL) allows

procedure Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn is
   value, Y : Integer;
begin
   if False then
      Value := 0;
   end if;
   Y := Value;
end Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn;
-- GNAT is an abbreviation of "the GNU Ada Translator".

I do not know Modula-3 and Oberon. Do they allow such a trivially
detectable accident?


Dr. Adrian Wrigley wrote:

"But I have ever since wondered why the VHDL and Ada communities
are so far apart."


Sometimes they are not as ignorant of each other as they might
typically be. E.g. Ada 95's and the latest VHDL's permissiveness of
reading things of mode out may be just a coincidence, unlike the
syntax for protected types in IEEE Std 1076-2002 which is copied from
ISO/IEC 8652:1995 (however the semantics are not fully copied,
e.g. mutual exclusion is ridiculously stipulated in VHDL for pure
functions on a common protected object). It is true that the
communities are not always close, e.g. shared variables were
introduced in IEEE Std 1076-1993 and were very bad (but at least even
IEEE Std 1076-1993 contained an explicit warning against them). In my
experience, a person who has specialized in VHDL has heard of Ada but
has not been interested in reading anything about Ada except a brief
history of how VHDL was started.


" It seems like such a natural partnership for
hardware/software codevelopment."


What may be perceived as obvious by one person might not be perceived
as such by another. For example, from Jamal Guennouni, "Using Ada as a
Language for a CAD Tool Development: Lessons and Experiences",
Proceedings of the fifth Washington Ada symposium on Ada, WADAS 1988:

"[..]

[..] On the other hand, our approach for hardware
description is also different from the one taken by Shahdad.
Indeed, we have stuck to the Ada language whereas Shahdad
[20] has developed a new language based on Ada called
VHDL (a part of the VHSIC program) dedicated to
hardware description and simulation. Moreover, we have
used the same language (i.e., Ada) for hardware and
software simulations whereas the VHDL language is used
for hardware descriptions only. This has several drawbacks
since it sets up a border between a software designer and a
circuit designer. It cannot benefit from the advantages connected
with the use of a single language during the whole
design process (e.g., use of the existing development and
debugging support tools).

[..]"

Don't get excited: Jamal Guennouni was writing about Ada for hardware
much as VHDL had originally been intended for hardware, that is, not
for automatic generation of a netlist. However, work has been done on
subsets of Ada for synthesis. Now you may become excited, if you are
willing to risk becoming disappointed; after all, if work was
published on this in the 1980s, then why did
WWW-users.CS.York.ac.UK/~mward/compiler/
get a publication in 2001 without citing or even mentioning any of
the earlier relevant synthesizable-Ada projects?


Dr. Adrian Wrigley wrote:

" And there is significant scope
for convergence of language features - fixing the niggling and
unnecessary differences too."


That would be nice. However, complete uniformity will not happen if
things which are currently compatible are to remain compatible. Does
it really make sense to replace one incompatibility with compatibility
and at the same time replace a different compatibility with an
incompatibility? E.g. VHDL differs from Ada in not allowing one to
specify an array's dimensions as array (address_type'range) if
address_type is an INTEGER (sub)type (making array(address_type)
legal); but if address_type is a (sub)type of
IEEE.numeric_std.(un)signed, it is legal to write
array(address_type'range), though it means something different,
because VHDL (un)signed is an array type (making array(address_type)
illegal).

Should Ada's attribute 'First refer to the lower bound of the index of
a type based on IEEE.numeric_std.(un)signed or to the lower bound of
the numbers which can be interpreted as being represented by such an
array? It is impossible to choose one or the other without a resulting
incompatibility.

In Ada 2005 and Ada 95, a mod type which represents a whole
number is not treated as an array type when applying attributes,
but in VHDL, IEEE.numeric_std.unsigned and IEEE.numeric_std.signed
are treated as array types when applying attributes, even though they
may be treated as whole numbers in other contexts. Both mod types and
IEEE.numeric_std.(un)signed support the logical (true-or-false)
operators.

Even in Ada, an array of Ada Booleans is not identical to an Ada mod
type, and even in VHDL, IEEE.numeric_std.unsigned and INTEGER are not
identical, so why should a huge amount of effort be spent to integrate
similar or other aspects of Ada and VHDL?


" Physical types,"


Why bother?


" reverse ranges,"


Stephen A. Leake responded in :

"[..] reverse ranges make things ambiguous, especially for slices of
unconstrained arrays. So I don't want to see those in Ada."


Maybe. From the VHDL standard:
"[..]
[..] the following two block configurations are equivalent:
for Adders(31 downto 0) ... end for;
for Adders(0 to 31) ... end for;
[..]
Examples:
variable REAL_NUMBER : BIT_VECTOR (0 to 31);
[..]
alias MANTISSA : BIT_VECTOR (23 downto 0) is REAL_NUMBER (8 to 31);
-- MANTISSA is a 24b value whose range is 23 downto 0.
-- Note that the ranges of MANTISSA and REAL_NUMBER (8 to 31)
-- have opposite directions. A reference to MANTISSA (23 downto 18)
-- is equivalent to a reference to REAL_NUMBER (8 to 13).
[..]"

It is true that an illegal description can result from mixing up
directions, but could you please give a concrete example of how
directions can be ambiguous? The VHDL attribute 'RANGE returns
"The range A'LEFT(N) to A'RIGHT(N) if the Nth index range
of A is ascending, or the range A'LEFT(N) downto
A'RIGHT(N) if the Nth index range of A is descending", and similarly
'REVERSE_RANGE returns "The range A'RIGHT(N) downto A'LEFT(N) if the
Nth index range of A is ascending, or the range A'RIGHT(N) to
A'LEFT(N) if the Nth index range of A is descending." Similarly:
"NOTES
1 - The relationship between the values of the LEFT, RIGHT, LOW, and
HIGH attributes is expressed as follows:


             Ascending range    Descending range
T'LEFT  =    T'LOW              T'HIGH
T'RIGHT =    T'HIGH             T'LOW"


Dr. Adrian Wrigley wrote:

"configurations, architectures,"


It is true that we miss these in dialects of Ada not called VHDL, but
we can cope. An interesting recent post on how to avoid binding a
component instance to a design entity of an architecture via a
configuration specification, without sacrificing the intended benefit
of configurations and architectures, is


Dr. Adrian Wrigley wrote:

" deferred constants"


Ada 2005 does allow deferred constants, so I am unsure as to what
improvement Dr. Wrigley wants for this area. Perhaps he would like to
be able to assign the value with := in the package body like in VHDL,
which is not allowed in Ada 2005. Perhaps Ada vendors would be willing to
make Ada less like C++ by removing exposed implementation details from
a package_declaration, but you would have been better off proposing
this before Ada 2005 was finalized.


Dr. Adrian Wrigley wrote:

" and ultra-light
concurrency come to mind from VHDL."


In what way is copying concurrency from VHDL where it is not already
present in Ada desirable?


Dr. Adrian Wrigley wrote:

" And general generics, private types,
tagged types, controlled types from Ada (does the latest VHDL have these?)"


No mainstream version of VHDL has these. Interfaces and tagged types might be
added in a future version:
WWW.SIGDA.org/Archives/NewsGroupArchives/comp.lang.vhdl/2006/Jun/comp.lang.vhdl.57450.txt


Stephen A. Leake responded:

"I haven't actually studied the additions in VHDL 2003, but I don't
think most of these Ada features make sense for VHDL. At least, if you
are using VHDL to program FPGAs.

[..]"

Why?

The latest IEEE VHDL standard does not have these, but the draft
standard IEEE P1076/D3.2, December 10, 2006, adds subprogram and
package generics. However, VHDL never had (and even in the draft still
does not have) generic instantiations in which the parameter is a type
(Ada 83 had all of these kinds of generics). These could be nice for
FPGAs.


Dr. Adrian Wrigley wrote:

"Perhaps a common denominator language can be devised which has the
key features of both, with none of the obsolescent features,"


Perhaps. But people could continue with what they are using. From
Dr. SY Wong, "Hardware/Software Co-Design Language
Compatible with VHDL", WESCON, 1998:

"Introduction.
This Hardware/Software (hw/sw) Co-
Design Language (CDL) (ref.2) is a
small subset of ANSI/ISO Ada. It has
existed since 1980 when VHDL was
initiated and is contained in IEEE 1076
VHDL-1993 with only minor differences.
[..]"


Dr. Adrian Wrigley wrote:

" and
can be translated into either automatically?"

Stephe Leake responded:

"Why would you want to translate them into each other? The semantics
of
VHDL are _significantly_ different from Ada. A VHDL process is _not_
an Ada task.

[..]"


Perhaps for the same reasons people generate Verilog files from VHDL
files, and vice versa.


Dr. Adrian Wrigley wrote:

" Something like this
might allow a "rebranding" of Ada (i.e. a new name, with full buzzword
compliance), and would be ideal to address the "new" paradigm of
multicore/multithreaded processor software, using the lightweight
threading and parallelism absent from Ada as we know it. For those who
know Occam, something like the 'PAR' and "SEQ" constructs are missing in
Ada."


I really fail to see the relevance of multiple processors to
lightweight threading.

Apparently Verilog is used more than VHDL. Verilog apparently has very
little thought given to safe parallelism. (E.g. Jonathan Bromley on
2005 May 20th on :
"[..]

[..] Verilog's cavalier attitude to
process synchronisation (in summary: who cares?!) is a
major problem for anyone who has ever stopped to think about
concurrent programming for more than about five minutes.

[..]")
Papers on multicore topics in the near term are more likely to contain
SystemC(R) or SystemVerilog boasts. Some people do not reason. I was
recently involved in one of the European Commission's major multicore
research projects in which SystemC(R) development was supposedly
going to provide great temporal improvements, but it did not do so
(somehow, I was not allowed to highlight this). Is this a surprise?
From 4.2.1, "The scheduling algorithm", of the "IEEE Standard
SystemC(R) Language Reference Manual", IEEE Std 1666-2005, "Approved
28 March 2006 American National Standards Institute", "Approved 6
December 2005 IEEE-SA Standards Board", supposedly "Published 31 March
2006" even though the Adobe timestamp indicates 2006 March 29th, ISBN
0-7381-4870-9 SS95505:

"The semantics of the scheduling algorithm are defined in the
following subclauses.
[..]
An implementation may substitute an alternative scheme, provided the
scheduling semantics given here are retained.
[..]
4.2.1.2 Evaluation phase
From the set of runnable processes, select a process instance and
trigger or resume its execution. Run the process instance immediately
and without interruption up to the point where it either returns or
calls the function wait.
Since process instances execute without interruption, only a single
process instance can be running at any one time, and no other process
instance can execute until the currently executing process instance
has yielded control to the kernel. A process shall not pre-empt or
interrupt the execution of another process. This is known as
co-routine semantics or co-operative multitasking.
[..]
A process may call the member function request_update of a primitive
channel, which will cause the member function update of that same
primitive channel to be called back during the very next update phase.
Repeat this step until the set of runnable processes is empty, then go
on to the update phase.
NOTE 1 - The scheduler is not pre-emptive. An application can assume
that a method process will execute in its entirety without
interruption, and a thread or clocked thread process will execute the
code between two consecutive calls to function wait without
interruption.
[..]
NOTE 3 - An implementation running on a machine that provides hardware
support for concurrent processes may permit two or more processes to
run concurrently, provided that the behavior appears identical to the
co-routine semantics defined in this subclause. In other words, the
implementation would be obliged to analyze any dependencies between
processes and constrain their execution to match the co-routine
semantics."

Anyone stupid enough to choose C++ deserves all the inevitable woes,
especially as many of the involved parties did not even know C++, so
the allowable compilers had to be restricted to a set of compilers
which do not conform to the C++ standard, because much of the code was
not written in genuine C++. This is not a surprise, as the Open
SystemC Initiative's SystemC(R) reference implementation of the time
was written in an illegal distortion of C++.


Dr. Adrian Wrigley wrote:

"While the obscenities of C-like languages thrive with new additions
seemingly every month, the Pascal family has withered. Where is
Wirth when you need him?

(don't take it that I dislike C."


I dislike C.

" Or assembler. Both have their
legitimate place as low-level languages to get the machine code
you want. Great for hardware hacking. Lousy for big teams, complex code)

One can dream..."

C is not great for hardware hacking.
 

Jean-Pierre Rosen

Colin Paul Gloster wrote:
procedure Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn is
   value, Y : Integer;
begin
   if False then
      Value := 0;
   end if;
   Y := Value;
end Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn;
-- GNAT is an abbreviation of "the GNU Ada Translator".

I do not know Modula-3 and Oberon. Do they allow such a trivially
detectable accident?

Here is what AdaControl says about it:
Error: IMPROPER_INITIALIZATION: use of uninitialized variable: Value
Error: IMPROPER_INITIALIZATION: variable "value" used before
initialisation

NB: I think this kind of analysis is for external tools, not for
compilers. Nice when compilers can provide extra warnings, but that's
not their primary job.
 

Dr. Adrian Wrigley

Dr. Adrian Wrigley wrote:

" Something like this
might allow a "rebranding" of Ada (i.e. a new name, with full buzzword
compliance), and would be ideal to address the "new" paradigm of
multicore/multithreaded processor software, using the lightweight
threading and parallelism absent from Ada as we know it. For those who
know Occam, something like the 'PAR' and "SEQ" constructs are missing in
Ada."

I really fail to see the relevance of multiple processors to
lightweight threading.

????

If you don't have multiple processors, lightweight threading is
less attractive than if you do? Inmos/Occam/Transputer was founded
on the basis that lightweight threading was highly relevant to multiple
processors.

Ada has no means of saying "Do these bits concurrently, if you like,
because I don't care what the order of execution is". And a compiler
can't work it out from the source. If your CPU has loads of threads,
compiling code with "PAR" style language concurrency is rather useful
and easy.
 

claude.simon

On Wed, 28 Feb 2007 17:20:37 +0000, Colin Paul Gloster wrote:

...




????

If you don't have multiple processors, lightweight threading is
less attractive than if you do? Inmos/Occam/Transputer was founded
on the basis that lightweight threading was highly relevant to multiple
processors.

Ada has no means of saying "Do these bits concurrently, if you like,
because I don't care what the order of execution is". And a compiler
can't work it out from the source. If your CPU has loads of threads,
compiling code with "PAR" style language concurrency is rather useful
and easy.

If my memory is ok Jean Pierre Rosen had a proposal :

for I in all 1 .. n loop
   ...
end loop;
 

Georg Bauhaus

Colin Paul Gloster wrote:

In fact, Ada compilers do detect this:
Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn.ada:7: warning:
‘Value’ is used uninitialized in this function

You'd have to study the compiler docs, though. The warning is from the
backend and needs some switches to be turned on.
 
 

Martin Thompson

Colin Paul Gloster said:
I post from to .

I'll leap in then :)
Stephen A. Leake wrote in on
for the benefit of Mike Silva:

"[..]

[..] FPGA development relies heavily on simulation,
which does not require real hardware."


A warning to Mike Silva: supposed simulators for languages chiefly
used for FPGAs often behave differently to how source code will
actually behave when synthesized (implemented) on a field programmable
gate array. This is true of cheap and expensive tools promoted as
simulators.

What do you mean by this? The VHDL I simulate behaves the same as the
FPGA, unless I do something bad like doing asynchronous design, or
miss a timing constraint. These are both design problems, not
simulation or language problems.

" multi-dimensional
arrays) gave a feeling of clarity and integrity absent from software
development languages."


Apparently, for many years, VHDL subset tools (the only kind of VHDL
tools which exist, so even if the language were excellent, one would
need to restrict one's code to what is supported) did not support
multi-dimensional arrays.

What's this about "only VHDL subset" tools existing? Modelsim supports
all of VHDL... It is true that synthesis tools only support a subset of
VHDL, but a lot of that is down to the fact that turning (say) an
access type into hardware is a bit tricky.

Multi dimensional arrays have worked (even in synthesis) for years in
my experience.

<snip>

(Followup-to trimmed to comp.lang.vhdl)

Cheers,
Martin
 

Dmitry A. Kazakov

If you don't have multiple processors, lightweight threading is
less attractive than if you do? Inmos/Occam/Transputer was founded
on the basis that lightweight threading was highly relevant to multiple
processors.

Ada has no means of saying "Do these bits concurrently, if you like,
because I don't care what the order of execution is". And a compiler
can't work it out from the source. If your CPU has loads of threads,
compiling code with "PAR" style language concurrency is rather useful
and easy.

But par is quite low-level. What would be the semantics of:

declare
   Thing : X;
begin
   par
      Foo (Thing);
   and
      Bar (Thing);
   and
      Baz (Thing);
   end par;
end;
 

Colin Paul Gloster

In timestamped
Thu, 01 Mar 2007 14:07:52 +0100, Georg Bauhaus <[email protected]>
posted:
Colin Paul Gloster wrote:
[..]

In fact, Ada compilers do detect this:
Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn.ada:7: warning:
‘Value’ is used uninitialized in this function

You'd have to study the compiler docs, though. The warning is from the
backend and needs some switches to be turned on.

[..]"


I admit that I was completely unaware of this. I tried all of these
switches of GNAT's together on that example without getting an
appropriate warning for the problem: -gnatf ("Full errors. Verbose
details, all undefined references" according to gnatmake if invoked
with no arguments); -gnatv ("Verbose mode. Full error output with
source lines to stdout"); -gnatVa ("turn on all validity checking
options"); -gnatwh ("turn on warnings for hiding variable"); and
-gnatwa ("turn on all optional warnings (except d,h,l)", thereby not
enabling warnings for "turn on warnings for implicit dereference" and
"turn on warnings for missing elaboration pragma").
 

Colin Paul Gloster

In timestamped Thu, 01 Mar 2007 13:23:08
+0000, Martin Thompson <[email protected]> posted:
Colin Paul Gloster said:
I post from to .

I'll leap in then :)"

Welcome!


"> Stephen A. Leake wrote in news:news:[email protected] on
for the benefit of Mike Silva:

"[..]

[..] FPGA development relies heavily on simulation,
which does not require real hardware."


A warning to Mike Silva: supposed simulators for languages chiefly
used for FPGAs often behave differently to how source code will
actually behave when synthesized (implemented) on a field programmable
gate array. This is true of cheap and expensive tools promoted as
simulators.

What do you mean by this? The VHDL I simulate behaves the same as the
FPGA, unless I do something bad like doing asynchronous design, or
miss a timing constraint."

Or if you use the enumeration encoding attribute and it is not supported
by both the simulation tool and the synthesis tool; or if you use a
simulation tool which honors the semantics of mode BUFFER but use a
synthesis tool which simply and incorrectly replaces BUFFER with OUT
even if you are supposedly using an earlier version of VHDL in which
BUFFER and OUT are not very similar; a global name might not be
reevaluated if the global value changes even if a function depends on
it (hey, I am not saying it is a good idea, but Synopsys does mention
it as an example of something which can cause a simulation mismatch);
'Z' not being treated as high impedance by a synthesis tool; default
values being ignored for synthesis; TRANSPORT and AFTER being ignored
for synthesis (that TRANSPORT is not amenable to being implemented by
technologies typically targeted by HDLs is not a valid excuse: in
such cases, a tool should report that it is unsupported instead of
ignoring it); sensitivity lists being ignored for synthesis; or other
discrepancies.

" These are both design problems, not
simulation or language problems."

It is true that asynchronous designs or missed timing constraints may
be design problems, but I would prefer that, if the synthesized
behavior is bad, the simulation (which is supposed to indicate what
will happen in reality) either not run well, or be bad in the same way
that the synthesized behavior is. This may be too much to expect for
timing constraints, but I -- perhaps naively -- do not see why an
asynchronous design should be so dismissable. How hard could it be to
replace tools' warnings that a referenced signal needs to be added to
a sensitivity list with a rule in the language standard which makes
the omission from the sensitivity list illegal?

" multi-dimensional
arrays) gave a feeling of clarity and integrity absent from software
development languages."


Apparently, for many years, VHDL subset tools (the only kind of VHDL
tools which exist, so even if the language were excellent, one would
need to restrict one's code to what is supported) did not support
multi-dimensional arrays.

What's this about "only VHDL subset" tools existing? Modelsim supports
all of VHDL..."

You may rightly deem that claim of mine to be unwarranted, but outside
of testbenches, I do not see what use the language is if it is not
transferable to actual hardware.

" It is true that synthesis tools only support a subset of
VHDL, but a lot of that is down to the fact that turning (say) an
access type into hardware is a bit tricky."

That is true, and even if not tricky, it may be pointless. However, I
think that people have become accustomed to this as an excuse even
when it is not valid, e.g. from Synopsys's documentation:
"[..]
Array ranges and indexes other than integers are unsupported.
[..]"

Martin J. Thompson wrote:

"Multi dimensional arrays have worked (even in synthesis) for years in
my experience.

[..]"

Not always, and not with all tools. E.g. last month, someone
mentioned in
: "Using 2D-Arrays as I/O signals _may_ be a problem for some synthesis
tools. [..]"

I admit my next example is historical, but Table 7.1-1 Supported and
Unsupported Synthesis Constructs of Ben Cohen's second (1998) edition of
"VHDL Answers to Frequently Asked Questions" contains:
"[..]
[..] multidimensional arrays are not allowed
[..]"

Cheers,
C. P. G.
 

Colin Paul Gloster

In timestamped Thu, 01 Mar 2007 11:22:32 GMT, "Dr. Adrian Wrigley"
<[email protected]> posted:
"On Wed, 28 Feb 2007 17:20:37 +0000, Colin Paul Gloster wrote:
....
Dr. Adrian Wrigley wrote:

" Something like this
might allow a "rebranding" of Ada (i.e. a new name, with full buzzword
compliance), and would be ideal to address the "new" paradigm of
multicore/multithreaded processor software, using the lightweight
threading and parallelism absent from Ada as we know it. For those who
know Occam, something like the 'PAR' and "SEQ" constructs are missing in
Ada."

I really fail to see the relevance of multiple processors to
lightweight threading.

????

If you don't have multiple processors, lightweight threading is
less attractive than if you do?"

I was thinking that heavyweight processes -- whatever that term might
mean; maybe many processes, each doing processing-intensive work
without threads' unrestricted access to shared memory -- would be
suitable for multiple processors.

" Inmos/Occam/Transputer was founded
on the basis that lightweight threading was highly relevant to multiple
processors."

I reread a little about occam 2 and transputers for this post. I do
not know much about them.

"Ada has no means of saying "Do these bits concurrently, if you like,
because I don't care what the order of execution is"."

How do you interpret part 11 of Section 9: Tasks and Synchronization
of Ada 2005? (On
WWW.ADAIC.com/standards/05rm/html/RM-9.html#I3506
: "[..]

NOTES
11 1 Concurrent task execution may be implemented on multicomputers,
multiprocessors, or with interleaved execution on a single physical
processor. On the other hand, whenever an implementation can determine
that the required semantic effects can be achieved when parts of the
execution of a given task are performed by different physical
processors acting in parallel, it may choose to perform them in this way.

[..]")

Dr. Adrian Wrigley wrote:

" And a compiler
can't work it out from the source. If your CPU has loads of threads,
compiling code with "PAR" style language concurrency is rather useful
and easy."
 

Ray Blaak

Dmitry A. Kazakov said:
But par is quite low-level. What would be the semantics of:

declare
   Thing : X;
begin
   par
      Foo (Thing);
   and
      Bar (Thing);
   and
      Baz (Thing);
   end par;
end;

Well, that depends on the definitions of Foo, Bar, and Baz, of course :).
 

Georg Bauhaus

In timestamped
Thu, 01 Mar 2007 14:07:52 +0100, Georg Bauhaus <[email protected]>
In fact, Ada compilers do detect this:
Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn.ada:7: warning:
‘Value’ is used uninitialized in this function

You'd have to study the compiler docs, though. The warning is from the
backend and needs some switches to be turned on.

[..]"


I admit that I was completely unaware of this. I tried all of these
switches of GNAT's ...

Actually these are backend switches (although they are documented in
the front end's gnat_ug; look for "uninitialized"). You need
optimization on and -Wuninitialized. Same report for C input:

int C_Is_As_Unsafe_As_VHDL_And_GCC_Will_Not_Even_Warn()
{
    int Value, Y;

    if (0) {
        Value = 0;
    }
    Y = Value;
}

C_Is_As_Unsafe_As_VHDL_And_GCC_Will_Not_Even_Warn.c:8: warning: ‘Value’ is used uninitialized in this function
 

Dr. Adrian Wrigley

But par is quite low-level. What would be the semantics of:

declare
   Thing : X;
begin
   par
      Foo (Thing);
   and
      Bar (Thing);
   and
      Baz (Thing);
   end par;
end;

Do Foo, Bar and Baz in any order or concurrently, all accessing Thing.

Roughly equivalent to doing the same operations in three separate
tasks. Thing could be a protected object, if concurrent writes
are prohibited. Seems simple enough!

I'm looking for something like Cilk, but even the concurrent loop
(JPR's for I in all 1 .. n loop?) would be a help.
 

Marcus Harnisch

Georg Bauhaus said:
C_Is_As_Unsafe_As_VHDL_And_GCC_Will_Not_Even_Warn.c:8: warning:
‘Value’ is used uninitialized in this function

To be fair, of the examples posted, only in C is the behavior
actually undefined.

In VHDL and, I suppose, Ada as well(?), the variable *does* have an
initial value. It's not entirely obvious, but at least you will
*always* get consistent results.

Regards,
-- Marcus
note that "property" can also be used as syntactic sugar to reference
a property, breaking the clean design of verilog; [...]

-- Michael McNamara
(http://www.veripool.com/verilog-mode_news.html)
 

Martin Thompson

Colin Paul Gloster said:
"> Stephen A. Leake wrote in news:news:[email protected] on
for the benefit of Mike Silva:

"[..]

[..] FPGA development relies heavily on simulation,
which does not require real hardware."


A warning to Mike Silva: supposed simulators for languages chiefly
used for FPGAs often behave differently to how source code will
actually behave when synthesized (implemented) on a field programmable
gate array. This is true of cheap and expensive tools promoted as
simulators.

What do you mean by this? The VHDL I simulate behaves the same as the
FPGA, unless I do something bad like doing asynchronous design, or
miss a timing constraint."

Or if you use the enumeration encoding attribute and it is not supported
by both the simulation tool and the synthesis tool;

Well, attributes are all a bit tool specific, I'm not sure this is
important. The sim and the synth will *behave* the same, but the
numbers used to represent states might not be what you expect. Or am
I misunderstanding what you are getting at?
or if you use a
simulation tool which honors the semantics of mode BUFFER but use a
synthesis tool which simply and incorrectly replaces BUFFER with OUT
even if you are supposedly using an earlier version of VHDL in which
BUFFER and OUT are not very similar;

Never used buffers, so I dunno about that!
a global name might not be
reevaluated if the global value changes even if a function depends on
it (hey, I am not saying it is a good idea, but Synopsys does mention
it as an example of something which can cause a simulation
mismatch);

Again, can't say. But don't use Synopsys as a good example of a
high-end tool in the FPGA world :) Can you even do FPGAs with
Synopsys any more? It hasn't been bundled by Xilinx for a long while.
'Z' not being treated as high impedance by a synthesis tool;

It will be if the thing you are targeting has tristates. Either as
IOBs or internally.
default
values being ignored for synthesis;

Works in my tools.
TRANSPORT and AFTER being ignored
for synthesis (that TRANSPORT is not amenable to being implemented by
technologies typically targeted by HDLs is not a valid excuse: in
such cases, a tool should report that it is unsupported instead of
ignoring it);

Yes, it should I agree.
sensitivity lists being ignored for synthesis;

That depends on the tool.
or other
discrepancies.

Well, that covers a lot ;-)
" These are both design problems, not
simulation or language problems."

It is true that asynchronous designs or missed timing constraints may
be design problems, but I would prefer that if the synthesized
behavior is bad, that the simulation (which is supposed to indicate
what will happen in reality) would not run well, or that the
simulation would be bad in the same way that the synthesized behavior
is.

But the simulation you use for developing code is necessarily an
idealised version of life. If you want to, you can run an annotated
simulation full of delays from the back end tools with the same
testbench. You may or may not find anything, as these sims are very
slow and your testbench must trigger just the right set of timing
problems.

I've never found a timing problem through running this level of simulation.
This may be too much to expect for timing constraints, but I -- perhaps
naively -- do not see why an asynchronous design should be so dismissable.
How hard could it be to replace tools' warnings that a referenced signal
needs to be added to a sensitivity list with a rule in the language standard
which makes the omission from the sensitivity list illegal?

Because it might be handy for simulating something? I dunno to be honest.

Personally I'd prefer the option to make the synthesizer flag an error
for that sort of thing.

[I am not an async expert but...] You can do async design in VHDL and
with synthesis, but proving correctness by simulation does not work
out as I understand it.
What's this about "only VHDL subset" tools existing? Modelsim supports
all of VHDL..."

You may rightly deem that claim of mine to be unwarranted, but outside
of testbenches, I do not see what use the language is if it is not
transferable to actual hardware.

What?! "Outside of testbenches, I do not see what use..." *Inside* of
testbenches is where I spend most of my coding time! The whole point
of having a rich language is to make running simulations easy.

The fact that we have a synthesisable subset is not a bad thing, just
how real life gets in the way. That subset has grown over the years;
who knows, in ten years' time maybe the synth will be able to figure
out that the way I'm using my ACCESS type variable infers some RAM and
address logic, and then that'll be another synthesisable construct.

I wish VHDL had *more* non synthesisable features
(like dynamic array sizing for example). I'd like to write my
testbenches in Python :)
" It is true that synthesis tools only support a subset of
VHDL, but a lot of that is down to the fact that turning (say) an
access type into hardware is a bit tricky."

That is true, and even if not tricky, it may be pointless. However, I
think that people have become accustomed to this as an excuse even
when it is not valid, e.g. from Synopsys's documentation:
"[..]
Array ranges and indexes other than integers are unsupported.
[..]"

Again, I wouldn't use Synopsys as an example of "the best" support for
the language... Synplify doesn't state that sort of limitation in
its list of unsupported features.
Martin J. Thompson wrote:

"Multi dimensional arrays have worked (even in synthesis) for years in
my experience.

[..]"

Not always, and not with all tools. E.g. last month, someone
mentioned in
: "Using 2D-Arrays as I/O signals _may_ be a problem for some synthesis
tools. [..]"

Well, that's a bit weak ("*may* be a problem") - what tools do they
currently not work in?
I admit my next example is historical, but Table 7.1-1 Supported and
Unsupported Synthesis Constructs of Ben Cohen's second (1998) edition of
"VHDL Answers to Frequently Asked Questions" contains:
"[..]
[..] multidimensional arrays are not allowed
[..]"

Cheers,
C. P. G.

Yes, in the past it has been a problem. But in the past, inferring a
counter was a problem! Been sorted for a while now :)

Cheers,
Martin
 
G

Georg Bauhaus

To be fair, of the examples posted, only in C is the behavior actually undefined.

In VHDL and, I suppose Ada as well(?), the variable *does* have an
initial value.

In Ada, whether or not a variable (without an initialization
expression) has a known initial value can depend on the type.
There are types whose objects can't be created without
initialization, e.g. indefinite types or limited types with
default-initialized components. (There is some news here:
Ada 2005 adds the possibility to initialize objects of limited
types using an initialization expression. (I.e. types that
otherwise forbid assignment.))

Simple scalar types have no defined default initial value.
OTOH, when the storage bits of a variable are mapped to
hardware it might not be the best idea to have the program
flip bits as a consequence of any default initialization.
An Ada program will then specify "no initialization at all."

The issue of no default values for scalars is well known and
Ada compilers must provide additional means of testing, and
detection where possible:
A configuration pragma can be used to make the compiler
assign predictable initial values. These should
be outside the range of a variable's type (more in LRM H.1),
and must be documented.
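The "predictable initial values outside the range" idea has a rough C analogue that may make it concrete: a debug fill with a sentinel byte pattern, so that a read of a never-written field yields a recognisable out-of-band value instead of whatever the stack held. This is only an assumption-laden sketch of the technique, not Ada semantics; the struct, field names and sentinel choice are all illustrative.

```c
#include <string.h>

/* A record whose fields we may forget to initialize. */
struct sensor {
    int reading;
    int status;
};

#define SENTINEL_BYTE 0xAA  /* arbitrary pattern, easy to spot in a debugger */

/* Fill an object with the sentinel before use, mimicking a
   "predictable initial value" configuration. */
static void debug_fill(void *p, size_t n)
{
    memset(p, SENTINEL_BYTE, n);
}

int demo(void)
{
    struct sensor s;

    debug_fill(&s, sizeof s);  /* predictable "uninitialized" state */
    s.reading = 42;            /* status deliberately left unwritten */

    /* status now holds the sentinel pattern in every byte, so the
       omission is detectable instead of silently random. */
    return s.status == (int)0xAAAAAAAA;
}
```

The Ada mechanism goes further (the compiler does the filling, and the chosen values can violate the type's range so that run-time checks catch the read), but the detection idea is the same.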
 
D

Dmitry A. Kazakov

Do Foo, Bar and Baz in any order or concurrently, all accessing Thing.

That's the question. If they just have an arbitrary execution order being
mutually exclusive then the above is a kind of select with anonymous
accepts invoking Foo, Bar, Baz. The semantics is clean.
Roughly equivalent to doing the same operations in three separate
tasks. Thing could be a protected object, if concurrent writes
are prohibited. Seems simple enough!

This is a very different variant:

declare
   Thing : X;
begin
   declare -- par
      task Alt_1; task Alt_2; task Alt_3;
      task body Alt_1 is
      begin
         Foo (Thing);
      end Alt_1;
      task body Alt_2 is
      begin
         Bar (Thing);
      end Alt_2;
      task body Alt_3 is
      begin
         Baz (Thing);
      end Alt_3;
   begin
      null;
   end; -- par
end;

If par is a sugar for this, then Thing might easily get corrupted. The
problem with such par is that the rules of nesting and visibility for the
statements, which are otherwise safe, become very dangerous in the case of
par.

Another problem is that Thing cannot be a protected object. Clearly Foo,
Bar and Baz resynchronize themselves on Thing after updating its parts. But
the compiler cannot know this. It also does not know that the updates do
not influence each other. It does not know that the state of Thing is
invalid until resynchronization. So it will serialize alternatives on write
access to Thing. (I cannot imagine a use case where Foo, Bar and Baz would
be pure. There seems to always be a shared outcome which would block them.)
Further Thing should be locked for the outer world while Foo, Bar, Baz are
running. So the standard functionality of protected objects looks totally
wrong here.
I'm looking for something like Cilk, but even the concurrent loop
(JPR's for I in all 1 .. n loop?) would be a help.

Maybe, just a guess, the functional decomposition rather than statements
could be more appropriate here. The alternatives would access their
arguments by copy-in and resynchronize by copy-out.
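The copy-in/copy-out alternative suggested above can be sketched in C threads: each branch receives its own copy of the state, works on it privately, and the results are merged only after all joins, so no locking is needed while the branches run. The struct and the two scaling operations are illustrative assumptions.

```c
#include <pthread.h>

typedef struct { int part_a, part_b; } State;

/* Each branch gets a private copy of the state plus a result slot. */
typedef struct { State copy; int result; } Branch;

static void *scale_a(void *arg) {
    Branch *b = arg;
    b->result = b->copy.part_a * 2;  /* touches only its own copy */
    return NULL;
}
static void *scale_b(void *arg) {
    Branch *b = arg;
    b->result = b->copy.part_b * 3;
    return NULL;
}

int par_copy_in_out(State s)
{
    Branch b1 = { s, 0 }, b2 = { s, 0 };  /* copy-in */
    pthread_t t1, t2;

    pthread_create(&t1, NULL, scale_a, &b1);
    pthread_create(&t2, NULL, scale_b, &b2);
    pthread_join(t1, NULL);  /* resynchronize ... */
    pthread_join(t2, NULL);

    return b1.result + b2.result;  /* ... and copy-out the merged result */
}
```

Because no branch can see another branch's copy, the "invalid until resynchronization" problem disappears: the shared state is only ever read before the fork and written after the joins.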
 
D

Dr. Adrian Wrigley

That's the question. If they just have an arbitrary execution order being
mutually exclusive then the above is a kind of select with anonymous
accepts invoking Foo, Bar, Baz. The semantics is clean.


This is a very different variant:

declare
   Thing : X;
begin
   declare -- par
      task Alt_1; task Alt_2; task Alt_3;
      task body Alt_1 is
      begin
         Foo (Thing);
      end Alt_1;
      task body Alt_2 is
      begin
         Bar (Thing);
      end Alt_2;
      task body Alt_3 is
      begin
         Baz (Thing);
      end Alt_3;
   begin
      null;
   end; -- par
end;

If par is a sugar for this, then Thing might easily get corrupted. The
problem with such par is that the rules of nesting and visibility for the
statements, which are otherwise safe, become very dangerous in the case of
par.

This is what I was thinking.

Syntax might be even simpler:
declare
   Thing : X;
begin par
   Foo (Thing);
   Bar (Thing);
   Baz (Thing);
end par;

Thing won't get corrupted if the programmer knows what they're doing!
In the case of pure functions, there is "obviously" no problem:

declare
   Thing : X := InitThing;
begin par
   A1 := Foo (Thing);
   A2 := Bar (Thing);
   A3 := Baz (Thing);
end par;
return A1 + A2 + A3;
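The pure-function case really is unproblematic, as a C threads sketch shows: when the branches only read the shared value, they can run concurrently with no locking at all, and the sum is formed after the implicit join. The three arithmetic "functions" and the value 7 are illustrative assumptions.

```c
#include <pthread.h>

static int thing = 7;  /* read-only while the par is in effect */
static int a1, a2, a3;

/* Pure branches: each reads thing and writes only its own result slot. */
static void *run_foo(void *arg) { (void)arg; a1 = thing + 1; return NULL; }
static void *run_bar(void *arg) { (void)arg; a2 = thing * 2; return NULL; }
static void *run_baz(void *arg) { (void)arg; a3 = thing - 3; return NULL; }

int par_pure(void)
{
    pthread_t x, y, z;

    pthread_create(&x, NULL, run_foo, NULL);
    pthread_create(&y, NULL, run_bar, NULL);
    pthread_create(&z, NULL, run_baz, NULL);
    pthread_join(x, NULL);  /* end par: all results are now valid */
    pthread_join(y, NULL);
    pthread_join(z, NULL);

    return a1 + a2 + a3;
}
```

No serialisation is needed because the only shared object is never written between the fork and the joins; this is the case where a compiler could prove the par safe without any protected-object machinery.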

In the case of procedures, there are numerous reasonable uses.
Perhaps the three procedures read Thing, and output three separate files.
Or maybe they write different parts of Thing. Maybe they validate
different properties of Thing, and raise an exception if a fault is found.
Perhaps they update statistics stored in a protected object, not shown.

The most obvious case is if the procedures are called on different
objects. Next most likely is if they are pure functions.
Another problem is that Thing cannot be a protected object. Clearly Foo,
Bar and Baz resynchronize themselves on Thing after updating its parts. But
the compiler cannot know this. It also does not know that the updates do
not influence each other. It does not know that the state of Thing is
invalid until resynchronization. So it will serialize alternatives on write
access to Thing. (I cannot imagine a use case where Foo, Bar and Baz would
be pure. There seems to always be a shared outcome which would block them.)
Further Thing should be locked for the outer world while Foo, Bar, Baz are
running. So the standard functionality of protected objects looks totally
wrong here.

Could Thing be composed of protected objects? That way updates
would be serialised but wouldn't necessarily block the other procedures.

Maybe the procedures are very slow, but only touch Thing at the end?
Couldn't they run concurrently, and be serialised in an arbitrary order
at the end?

Nothing in this problem is different from the issues of doing it with
separate tasks. So why is this any more problematic?

The semantics I want permit serial execution in any order. And permit
operation even with a very large number of parallel statements in
effect. Imagine a recursive call with each level having many parallel
statements. Creating a task for each directly would probably break.
Something like an FFT, for example. FFT the upper and lower halves
of Thing in parallel. Combine serially.
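The "recursive call with each level having many parallel statements" concern can be sketched with a depth limit: a divide-and-conquer (a stand-in for the FFT halves, here just summing an array) forks a new thread per par only down to a fixed depth, then falls back to serial recursion, so deep recursion cannot create an unbounded number of threads. The depth cap and the summing workload are illustrative assumptions.

```c
#include <pthread.h>

#define MAX_DEPTH 3  /* at most ~2^3 threads; tune to the hardware */

typedef struct {
    const int *data;
    int len;
    int depth;
    long sum;
} Job;

static void *run(void *arg);

/* "FFT the upper and lower halves in parallel, combine serially." */
long conquer(const int *data, int len, int depth)
{
    if (len == 1)
        return data[0];

    Job lo = { data,           len / 2,       depth + 1, 0 };
    Job hi = { data + len / 2, len - len / 2, depth + 1, 0 };

    if (depth < MAX_DEPTH) {       /* par: halves run concurrently */
        pthread_t t;
        pthread_create(&t, NULL, run, &lo);
        run(&hi);                  /* current thread takes one half */
        pthread_join(t, NULL);
    } else {                       /* too deep: serial fallback */
        run(&lo);
        run(&hi);
    }
    return lo.sum + hi.sum;        /* combine serially */
}

static void *run(void *arg)
{
    Job *j = arg;
    j->sum = conquer(j->data, j->len, j->depth);
    return NULL;
}
```

This is roughly what a compiler-generated par could do: the source expresses unlimited parallelism, and the runtime maps it onto a bounded number of threads.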

Exception semantics would probably differ. Any statement raising an
exception would presumably stop all other par statements(?)

The compiler should be able to generate code which generates a
reasonable number of threads, depending on the hardware being used.
Maybe, just a guess, the functional decomposition rather than statements
could be more appropriate here. The alternatives would access their
arguments by copy-in and resynchronize by copy-out.

Maybe you're right. But I can't see how to glue this in with
Ada (or VHDL) semantics.
 
R

Ray Blaak

Dmitry A. Kazakov said:
If par is a sugar for this, then Thing might easily get corrupted. The
problem with such par is that the rules of nesting and visibility for the
statements, which are otherwise safe, become very dangerous in the case of
par.

Another problem is that Thing cannot be a protected object.

I am somewhat rusty on my Ada tasking knowledge, but why can't Thing be a
protected object?

It seems to me that is precisely the kind of synchronization control mechanism
you want to be able to have here.