I'd rather switch than fight!

rickman

My practical experience is that strong typing creates another class of bugs,
simply by making things more complicated.  I last saw VHDL in use more
than 10 years ago, but the typical pattern was that a designer wanted a bit
vector and created a subranged integer instead.  The two seem to be
identical, but aren't: if you increment the subranged integer, it will stop
simulation on overflow; if you increment the bit vector, it will wrap
around.  My coworker, who used subranged integers quite a lot, ended up
with code like

if foo = 15 then foo <= 0; else foo <= foo + 1; end if;

And certainly, all those lines had started out as

foo <= foo + 1;

and were only "fixed" later when the simulation crashed.

The good news is that the synthesis tool really generates the bitvector
logic for both, so all those simulator crashes were only false alarms.

I can't say I understand what the point is. If you want a counter to
wrap around you only need to use the mod operator. In fact, I tried
using unsigned and signed types for counters and checked the synthesis
results for size. I found that the coding style greatly influences
the result. I ended up coding with subranged integers using a mod
operator because it always gave me a good synthesis result. I never
did understand some of the things the synthesis did, but it was not
uncommon to see one adder chain for the counter and a second adder
chain for the carry out!
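For reference, the subranged-integer-with-mod style described above looks
roughly like this (a sketch; names and widths are illustrative):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity mod_counter is
    port (clk : in std_logic);
end entity;

architecture rtl of mod_counter is
    -- The subrange documents (and, in simulation, enforces) the legal values.
    signal foo : integer range 0 to 15 := 0;
begin
    process (clk)
    begin
        if rising_edge(clk) then
            -- "mod" makes the wrap explicit, so simulation never trips the
            -- range check and synthesis produces a plain 4-bit adder.
            foo <= (foo + 1) mod 16;
        end if;
    end process;
end architecture;
```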

After I run the Verilog gauntlet this summer, I plan to start
verifying a library of common routines to use in designs when the size
is important. My current project is very tight on size and it is such
a PITA to code every line thinking of size.

Rick
 
glen herrmannsfeldt

(snip)
I was listening to a lecture by a colleague once who indicated that you
don't need to use static timing analysis since you can use a timing-
based simulation! I queried him on this a bit and he seemed to think
that you just needed to have a "good enough" test bench. I was
incredulous about this for a long time. Now I realize he was just a
moron^H^H^H^H^H^H^H ill informed!

I suppose so, but consider it the other way around.

If your test bench is good enough then it will catch all static
timing failures (eventually). With static timing analysis, there
are many things that you don't need to check with the test bench.

Also, you can't do static timing analysis on the implemented logic.
(That is, given an actual built circuit and a logic analyzer.)

Now, setup and hold violations are easy to test with static
analysis, but much harder to check in actual logic. Among others,
you would want to check all possible clock skew failures, which is
normally not possible. With the right test bench and logic
implementation (including programmable delays on each FF clock)
it might be possible, though.

-- glen
 
Bernd Paysan

Andy said:
IMHO, they missed the point. Any design that can be completed in a
couple of hours will necessarily favor the language with the least
overhead. Unfortunately, two-hour-solvable designs are not
representative of real life designs, and neither was the contest's
declared winner.

Well, we pretty much know that the number of errors people make in a
programming language basically depends on how much code they have to write
- a language that has less overhead and is more terse is written faster
and has fewer bugs. And it goes non-linear, i.e. a program with 10k
lines of code will have fewer bugs per 1000 lines than a program with 100k
lines of code. So the larger the project, the better the more terse
language fares.
 
Patrick Maupin

That is certainly a great way to prove a theory.  Toss out every data
point that disagrees with your theory!

Well, I don't really need others to agree with my theory, so if that's
how it's viewed, so be it. Nonetheless, I view it as tossing out data
that was taken under different conditions than the ones I live under.
Although the basics don't change (C, C++, Java, Verilog, VHDL are all
Turing-complete, as are my workstation and the embedded systems I
sometimes program on), the details can make things qualitatively
different enough that they actually appear to be quantitatively
different. It's like quantum mechanics vs. Newtonian physics.

For example, on my desktop, if I'm solving an engineering problem, I
might throw Python and numpy, or matlab, and gobs of RAM and CPU at
it. On a 20 MIPS, fixed point, low-precision embedded system with a
total of 128K memory, I don't handle the problem the same way.

I find the same with language differences. I assumed your complaint
when you started this thread was that a particular language was
*forcing* you into a paradigm you felt might be sub-optimal. My
opinion is that, even when languages don't *force* you into a
particular paradigm, there is an impedance match between coding style
and language that you ignore at the peril of your own frustration.

So when somebody says "I don't change the way I code when I code in
Verilog vs. VHDL or C vs. Java; the compiler just does a better job of
catching my stupid mistakes, allowing me to get things done faster," I
just can't even *relate* to that viewpoint. It is that of an alien from
a different universe, so it has no bearing on my day-to-day existence.

Regards,
Pat
 
Patrick Maupin

Hmmm...  The date on that article is 04/07/2003 11:28 AM EDT.  Seven
years later I still don't see any sign that VHDL is going away... or
did I miss something?

True, but you also have to remember that in the early 90s all the
industry pundits thought Verilog was dead...
 
Bernd Paysan

glen said:
If your test bench is good enough then it will catch all static
timing failures (eventually). With static timing analysis, there
are many things that you don't need to check with the test bench.

And then there are some corner cases where neither static timing analysis
nor digital simulation helps - like signals crossing asynchronous clock
boundaries (there *will* be a setup or hold violation, but a robust clock
boundary crossing circuit will work in practice).

Example: We had a counter running on a different clock (actually a VCO,
where the voltage was an analog input), and to sample it robustly in the
normal digital clock domain, I Gray-encoded it. There will be one bit which
is either this or that when sampling under a setup or hold violation,
but it is only that one bit, and it's either in the state before the
increment or after.
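A sketch of that kind of Gray-coded crossing (signal names are
hypothetical): since consecutive Gray codes differ in exactly one bit, a
sample caught mid-transition is off by at most one count.

```vhdl
-- Source domain: convert the binary count to Gray code.
gray_count <= bin_count xor ('0' & bin_count(bin_count'high downto 1));

-- Destination domain: double-register before decoding.  At worst one
-- bit is caught mid-transition, so the sampled value is either the old
-- count or the new count, never garbage.
process (dst_clk)
begin
    if rising_edge(dst_clk) then
        gray_meta <= gray_count;  -- first stage may go metastable
        gray_sync <= gray_meta;   -- second stage resolves it
    end if;
end process;
```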
 
rickman

I suppose so, but consider it the other way around.

If your test bench is good enough then it will catch all static
timing failures (eventually).  With static timing analysis, there
are many things that you don't need to check with the test bench.

I don't follow what you are saying. This first sentence seems to be
saying that a timing simulation *is* a good place to find timing
problems, or are you talking about real world test benches? The point
is that static timing is enough to catch all timing failures given
that your timing constraints cover the design properly... and I agree
that is a big given. Your second sentence seems to be agreeing with
my previous statement.

Also, you can't do static timing analysis on the implemented logic.
(That is, given an actual built circuit and a logic analyzer.)
So?


Now, setup and hold violations are easy to test with static
analysis, but much harder to check in actual logic.  Among others,
you would want to check all possible clock skew failures, which is
normally not possible.  With the right test bench and logic
implementation (including programmable delays on each FF clock)
it might be possible, though.

In twenty years of designing with FPGAs I have never found a clock
skew problem. I always write my code to allow the clock trees to
deliver the clocks, and I believe the tools guarantee that there will
not be a skew problem. Static timing actually does cover clock skew,
at least in the tools I use.

BTW, how do you design a "right test bench"? Static timing analysis
will at least give you the coverage level although one of my
complaints is that they don't provide any tools for analyzing if your
constraints are correct. But I have no idea how to verify that my
test bench is testing the timing adequately.

Rick
 
rickman

And then there are some corner cases where neither static timing analysis
nor digital simulation helps - like signals crossing asynchronous clock
boundaries (there *will* be a setup or hold violation, but a robust clock
boundary crossing circuit will work in practice).

Example: We had a counter running on a different clock (actually a VCO,
where the voltage was an analog input), and to sample it robustly in the
normal digital clock domain, I Gray-encoded it.  There will be one bit which
is either this or that when sampling under a setup or hold violation,
but it is only that one bit, and it's either in the state before the
increment or after.


But this has nothing to do with timing analysis. Clock domain crosses
*always* violate timing and require a logic solution, not a timing
test.

Rick
 
rickman

Well, we pretty much know that the number of errors people make in a
programming language basically depends on how much code they have to write
- a language that has less overhead and is more terse is written faster
and has fewer bugs.  And it goes non-linear, i.e. a program with 10k
lines of code will have fewer bugs per 1000 lines than a program with 100k
lines of code.  So the larger the project, the better the more terse
language fares.


That must be why we are all programming in APL, no?

Rick
 
glen herrmannsfeldt

I don't follow what you are saying. This first sentence seems to be
saying that a timing simulation *is* a good place to find timing
problems, or are you talking about real world test benches? The point
is that static timing is enough to catch all timing failures given
that your timing constraints cover the design properly... and I agree
that is a big given. Your second sentence seems to be agreeing with
my previous statement.

Yes, I was describing real world (hardware) test benches.

Depending on how close you are to a setup/hold violation,
it may take a long time for a failure to actually occur.
In twenty years of designing with FPGAs I have never found a clock
skew problem. I always write my code to allow the clock trees to
deliver the clocks, and I believe the tools guarantee that there will
not be a skew problem. Static timing actually does cover clock skew,
at least in the tools I use.

Yes, I was trying to cover the case of not using static timing
analysis but only testing actual hardware. For ASICs, it is
usually necessary to test the actual chips, though they should
have already passed static timing.
BTW, how do you design a "right test bench"? Static timing analysis
will at least give you the coverage level although one of my
complaints is that they don't provide any tools for analyzing if your
constraints are correct. But I have no idea how to verify that my
test bench is testing the timing adequately.

If you only have one clock, it isn't so hard. As you add more,
with different frequencies and/or phases, it gets much harder,
I agree. It would be nice to get as much help as possible
from the tools.

-- glen
 
Martin Thompson

Bernd Paysan said:
My practical experience is that strong typing creates another class of bugs,
simply by making things more complicated. I last saw VHDL in use more
than 10 years ago, but the typical pattern was that a designer wanted a bit
vector and created a subranged integer instead.

Surely the designer should've used a bit vector (for example
"unsigned" type) then? That's not the language's fault!

Cheers,
Martin
 
Martin Thompson

Bernd Paysan said:
Well, we pretty much know that the number of errors people make in a
programming language basically depends on how much code they have to write
- a language that has less overhead and is more terse is written faster
and has fewer bugs.

Citation needed :) (I happen to agree, but if you can point to good
studies, I'd be interested in reading ...)
And it goes non-linear, i.e. a program with 10k lines of code will
have fewer bugs per 1000 lines than a program with 100k lines of
code. So the larger the project, the better the more terse language
fares.

Is that related to the terseness of the core language, or how many
useful library functions are available, so you don't have to reinvent
the wheel (and end up with a triangular one...)?

Cheers,
Martin
 
Andy

The cost of bugs in code is not a constant, per-bug figure. The cost
is dominated by how hard it is to find the bug as early as possible.

So, in a verbose language, the number of bugs may go up, but the cost
of fixing the bugs goes down.

Case in point: would you suggest that using positional notation in
port maps and argument lists, rather than named association, is more
prone to cause errors? And which is prone to cost more to find and fix
any errors? My point is not that positional notation is an advantage of
one language over another; it is simply to debunk the "fewer lines of
code = better code" myth.
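To make the comparison concrete, here is a hypothetical instantiation
written both ways (the component and its port names are made up):

```vhdl
-- Positional association: terse, but swapping rst and en still
-- compiles if the types happen to match, and the error only surfaces
-- much later, in simulation.
u1 : counter port map (clk, rst, en, q);

-- Named association: more lines, but self-documenting,
-- order-independent, and a misspelled formal is caught immediately by
-- the compiler.
u2 : counter port map (
    clk_i => clk,
    rst_i => rst,
    en_i  => en,
    q_o   => q);
```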

Don't kid yourself that cryptic one-liners are more bug free than well
documented (e.g. compiler-enforced comments) code that is more
verbose.


Andy
 
Paul Uiterlinden

Andrew said:
Interesting in the discussion on myHdl/testbenches, no-one raised
SystemVerilog. SystemVerilog raises the level of abstraction(like
myHdl), but more importantly it introduces constrained random
verification. For writing testbenches, SV is a better choice than
MyHdl/VHDL/Verilog, assuming tool availability.

And assuming availability of financial means to buy licenses.
And assuming availability of time to rewrite existing verification
components, all of course written in VHDL.
And assuming availability of time to learn SV.
And assuming availability of time to learn OVM.
And ...

Oh man, is it because of the seasonal change that I'm feeling so tired, or
is it something else? ;-)
It would seem that SV does not bring much to the table in terms of RTL
design - it's just a catch-up to get Verilog up to the capabilities that
VHDL already has.

Indeed.

I agree that SV seems to give the most room for growth on the verification
side. VHDL is becoming too restrictive when you want to create really
reusable verification parts (reuse verification code from block level at
chip level). More often than not, the language is working against you in
that case. Most of the time because it is strongly typed. In general I
prefer strongly typed over weakly typed. But sometimes it just gets in
your way.

For design I too do not see much advantage of SV over VHDL, especially when
you already are using VHDL. So then a mix would be preferable: VHDL for
design, SV/OVM for verification.
 
Martin Thompson

Andy said:
The cost of bugs in code is not a constant, per-bug figure. The cost
is dominated by how hard it is to find the bug as early as possible.

So, in a verbose language, the number of bugs may go up, but the cost
of fixing the bugs goes down.

Case in point: would you suggest that using positional notation in
port maps and argument lists, rather than named association, is more
prone to cause errors? And which is prone to cost more to find and fix
any errors? My point is not that positional notation is an advantage of
one language over another; it is simply to debunk the "fewer lines of
code = better code" myth.

Agreed - as with all things, any extreme position is daft...

The problem (as I see it) comes with languages (or past
design-techniques enforced by synthesis tools) which are not
descriptive enough to allow you to express your intent in a
"relatively" small amount of code. Which is why assembly is not as
widely used anymore, and more behavioural descriptions win out over
instantiating LUTs/FFs by hand. It's not about the verboseness of the
language per se, more about the ability to show (clearly) your intent
relatively concisely.

And much of the "verboseness" in VHDL can be mitigated with tools like
Emacs or Sigasi's product. And much of the other perceived
verboseness can be overcome by writing "modern" code: single process,
using variables, functions, procedures (the sort of thing some of us
do all the time!)

BTW - I write a lot of VHDL and a fair bit of Python, so I see both
sides of the language fence. (I also write a fair amount of Matlab, which
annoys me in many, many ways due to the way features have been kluged
on over time, but it's sooo quick to do some things that way!) I
can't see myself moving over to Verilog either - the conciseness
doesn't seem to be the "right sort" of conciseness for me.
Don't kid yourself that cryptic one-liners are more bug free than well
documented (e.g. compiler-enforced comments) code that is more
verbose.

Indeed, I don't (kid myself that is)! You can write rubbish in any
language :)

BTW, what do you mean by "compiler-enforced comments" - is it "simply"
that code should be as self-documenting as possible? Or something else?

Cheers,
Martin
 
Andy

BTW, what do you mean by "compiler-enforced comments" - is it "simply"
that code should be as self-documenting as possible? Or something else?

Self-documenting and more.

I look at many aspects of a strongly typed language as encouraging or
requiring "verbosity" that would otherwise need comments to explain.
Hence we have different types for vectors of the same base element
type, based on how the contents are to be interpreted, and/or
limitations of the data contained. Integer subtypes allow you to
constrain (and verify) the contents more thoroughly, at least
numerically.

By choosing the appropriate data types, you are telling the tool and
subsequent reviewers/users/maintainers something about your code, and
how it works (and sometimes more importantly, how it doesn't or won't
work.)
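For instance (an illustrative fragment, not from any design discussed
here):

```vhdl
-- The subtype name and range tell a reader what the signal may hold,
-- and every write is range-checked in simulation.
subtype percent_t is integer range 0 to 100;
signal duty_cycle : percent_t := 0;

-- Distinct array types over the same base element: unsigned and signed
-- both hold std_logic bits, but cannot be mixed by accident.
signal addr : unsigned(11 downto 0);
signal diff : signed(11 downto 0);
```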

By using assertion statements, you can not only document assumptions
and limitations of your code, but also ensure that those are met.

Some of these features are enforced by the compiler, some are enforced
by standards compliant implementations. But they are enforced, which
is more than we can say about comments. How many times have you seen
comments that were clearly written for a previous version of the code,
but not completely updated in later revisions to that code?

Andy
 
Jonathan Bromley

Some of these features are enforced by the compiler, some are enforced
by standards compliant implementations. But they are enforced, which
is more than we can say about comments. How many times have you seen
comments that were clearly written for a previous version of the code,
but not completely updated in later revisions to that code?

Hear, hear!

There is, of course, another form of enforced comment - the assertion.
Because assertions are pure observers and can't affect anything else [*],
they have a similar status to comments - they make a statement about what
you THINK is happening in your code - but they are *checked at runtime*.
VHDL has permitted procedural assertions since forever; nowadays, though,
we have come to expect more sophisticated assertions allowing us to
describe (assert) behaviours that span over a period of time.
Some of the benefits of strict data types that Andy and Martin mention
can be replicated by assertions. Others, though, are not so easy;
for example, an integer subtype represents an assertion on _any_
attempt to write to _any_ variable of that subtype, checking that
the written value is within bounds. It would be painful to write
explicit assertions to do that. So the two ideas go hand in hand:
data types (and a few other constructs) can automate certain simple
assertions that would be tiresome to write manually; hand-written
assertions can be very powerful for describing specific behaviours
at points in the code where you fear that bad things might happen.
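A simple procedural assertion of the kind VHDL has always had (the FIFO
names here are hypothetical):

```vhdl
-- An enforced comment: it states the assumption in the code and stops
-- the simulation the moment the assumption no longer holds.
assert fill_level <= FIFO_DEPTH
    report "FIFO overflow: producer has outrun the consumer"
    severity failure;
```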

Choosing to use neither of these sanity-checking tools seems to me
to be a rather refined form of masochism, given how readily
available both are.

[*] Before anyone asks: yes, I know that SystemVerilog assertions
can have side effects. And yes, I have made use of that in real
production code, albeit only to build observers in testbenches.
The tools are there to be used, dammit.....
 
rickman

Yes, I was describing real world (hardware) test benches.

Depending on how close you are to a setup/hold violation,
it may take a long time for a failure to actually occur.

That is the point. Finding timing violations in a simulation is hard,
finding them in physical hardware is not possible to do with any
certainty. A timing violation depends on the actual delays on a chip
and that will vary with temperature, power supply voltage and process
variations between chips. I had to work on a problem design once
because the timing analyzer did not work or the constraints did not
cover (I firmly believe it was the tools, not the constraints since it
failed on a number of different designs). We tried finding the chip
that failed at the lowest temperature and then used that at an
elevated temperature for our "final" timing verification. Even with
that, I had little confidence that the design would never have a
problem from timing. Of course on top of that the chip was being used
at 90% capacity. This design is the reason I don't work for that
company anymore. The section head knew about all of these problems
before he assigned the task and then expected us to work 70 hour work
weeks. At least we got them to buy us $100 worth of dinner each
evening!

The point is that if you don't do static timing analysis (or have an
analyzer that is broken) timing verification is nearly impossible.

Yes, I was trying to cover the case of not using static timing
analysis but only testing actual hardware.  For ASICs, it is
usually necessary to test the actual chips, though they should
have already passed static timing.  

If you find a timing bug in the ASIC chip, isn't that a little too
late? Do you test at elevated temperature? Do you generate special
test vectors? How is this different from just testing the logic?

If you only have one clock, it isn't so hard.  As you add more,
with different frequencies and/or phases, it gets much harder,
I agree.  It would be nice to get as much help as possible
from the tools.

The number of clocks is irrelevant. I don't consider timing issues of
crossing clock domains to be "timing" problems. There you can only
solve the problem with proper logic design, so it is a logic
problem.

Rick
 
rickman

Agreed - as with all things, any extreme position is daft...

The problem (as I see it) comes with languages (or past
design-techniques enforced by synthesis tools) which are not
descriptive enough to allow you to express your intent in a
"relatively" small amount of code.  Which is why assembly is not as
widely used anymore, and more behavioural descriptions win out over
instantiating LUTs/FFs by hand.  It's not about the verboseness of the
language per se, more about the ability to show (clearly) your intent
relatively concisely.

Or do you think it has to do with the fact that the tools do a better
job, so that the efficiency gains of using assembly language and
instantiating components are greatly reduced?

And much of the "verboseness" in VHDL can be mitigated with tools like
Emacs or Sigasi's product.  And much of the other perceived
verboseness can be overcome by writing "modern" code: single process,
using variables, functions, procedures (the sort of thing some of us
do all the time!)

I am going to give Emacs a try this summer when I have more free
time. I don't see the things you mention as being a solution because
they don't address the problem. The verboseness is inherent in VHDL.
Type casting is something that makes it verbose. That can often be
mitigated by using the right types in the right places. I never use
std_logic_vector anymore.
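For what it's worth, the no-std_logic_vector style looks roughly like
this (a sketch with illustrative names):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity adder is
    port (
        a, b : in  unsigned(7 downto 0);
        sum  : out unsigned(8 downto 0));
end entity;

architecture rtl of adder is
begin
    -- No casts needed: the operands already carry their interpretation,
    -- and resize makes room for the carry explicitly.
    sum <= resize(a, 9) + resize(b, 9);
end architecture;
```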


Rick
 
Patrick Maupin

The old joke about Ada is that when you get your code to compile, it's
ready to ship.  I certainly wouldn't go that far, but testing is
something you do in cooperation with static checking, not as an alternative.

GOOD static checking tools are great (and IMHO part of a testbench).
I certainly hope you're not trying to imply that the typechecking
built into VHDL is a substitute for a good model checker!

Regards,
Pat
 
