I'd rather switch than fight!


Patrick Maupin

The benefits of a "strongly typed" language, with bounds checks, etc.
are somewhat different between the first time you write/use the code
and the Nth time you reuse and revise it. Strong typing and bounds
checking let you know quickly the possibly hidden side effects of
making changes in the code, especially when it may have been a few
days/weeks/months since the last time you worked with it.

For this usage, a good testbench will catch more bugs and make strong
typing and bounds checking redundant.
A long time ago there was a famous contest for designing a simple
circuit in Verilog vs. VHDL to see which language was better. The
requirements were provided on paper, and the contestants were given an
hour or two (I don't remember how long, but it was certainly not even a
day), and whoever got the fastest and the smallest (two winners)
correct synthesized circuit won for their chosen language. Verilog won
both, and I don't think VHDL even finished.

Contest details here:

http://www.see.ed.ac.uk/~gerard/Teach/Verilog/manual/Example/lrgeEx2/cooley..html
IMHO, they missed the point. Any design that can be completed in a
couple of hours will necessarily favor the language with the least
overhead. Unfortunately, two-hour-solvable designs are not
representative of real-life designs, and neither was the contest's
declared winner.

Well, I think the takeaway is a lot more nuanced than that, but
whatever. Believe what you will.
If you just want to hack out the code and get it working by yourself
for the day, a weakly typed language may be the better choice for you.
If you need to be able to reuse/revise/update/extend the design over
time, more strongly typed languages are preferred.

Again, IMHO, a really good testbench will more than make up for any
perceived weaknesses in Verilog in this area. But you are free to
continue to believe that the language is really helping you.

Regards,
Pat
 

Patrick Maupin

There are two issues to consider. One is the relative times of writing
the code vs. debugging it; i.e., if writing took 5 hours and debugging 10
minutes (unlikely) then C still wins. Which brings up the second issue:
it is very likely that the programming contest involved a "larger"
design to be finished. If I am remembering correctly, the RTL was an
async-reset, synchronously loadable up-down counter, which is a "smallish"
project. If the programming contest involved something more "involved", it
still points to the benefit of strong typing and other features of
Ada/VHDL etc.

But it's mostly academic and FPGA people who think that VHDL might
have any future at all. See, for example:

http://www.eetimes.com/news/design/columns/industry_gadfly/showArticle.jhtml?articleID=17408302

Regards,
Pat
 

Andrew FPGA

Interesting that in the discussion on MyHDL/testbenches, no one raised
SystemVerilog. SystemVerilog raises the level of abstraction (like
MyHDL), but more importantly it introduces constrained-random
verification. For writing testbenches, SV is a better choice than
MyHDL/VHDL/Verilog, assuming tool availability.

It would seem that SV does not bring much to the table in terms of RTL
design - it's just a catch-up to get Verilog up to the capabilities that
VHDL already has.
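For readers unfamiliar with the term, constrained-random verification means drawing stimulus at random, but only from the legal space described by declared constraints. Here is a rough Python sketch of the idea (the names and the constraints are invented for illustration; real SystemVerilog uses `rand` variables and `constraint` blocks backed by a constraint solver):

```python
import random

def constrained_random_stimulus(n, seed=0):
    """Draw n random (addr, length) burst descriptors, subject to
    constraints, roughly the way an SV 'constraint' block shapes
    its rand variables.  Uses simple rejection sampling."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        addr = rng.randrange(0, 256)     # byte address in a 256-byte page
        length = rng.randrange(1, 65)    # burst length 1..64
        # Constraint 1: a burst must not cross the page boundary.
        # Constraint 2: short bursts should dominate the distribution,
        # so long bursts (>16) are only accepted 20% of the time.
        if addr + length <= 256 and (length <= 16 or rng.random() < 0.2):
            out.append((addr, length))
    return out
```

The point, versus directed testing, is that every generated stimulus is legal by construction, while the randomness explores corners a hand-written testbench author would not think to try.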
 

Jonathan Bromley

Well, this isn't at all the same then.  The Verilog teams got working
designs, and the VHDL teams didn't.

It's really about time that this old VHDL-Verilog shootout nonsense
was shown up for the garbage it was.

It was set up by John Cooley many years ago. It took place in the
US where Verilog predominated, then as now, so there was a much
smaller pool of skilled VHDL users than Veriloggers at the host
event. At the time, VHDL did indeed lag badly behind in some
respects - the situation with numeric packages was chaotic, and
the design was a (very simple) counter of some kind so anyone
who wasn't fluent with the VHDL numeric packages **as accepted
by the tools in use** was doomed. And, as Andy pointed out, the
scale of the problem was so tiny that Verilog would always come
out on top - Verilog is definitely faster to write for tiny
designs; you don't need a contest to work that out.

All of this means that the "contest" was little more than a
good way to show how feeble were VHDL's **tools** at that
time. It wasn't very long before the tool vendors got their
acts together, but the shootout wasn't re-run; instead, it
became part of the folklore. It was unreliable populist
hokum then, it's out-of-date unreliable hokum now, and I'm
saddened to see it adduced in evidence so many years on.

There's a lot wrong with VHDL, for sure, but a lot right too.
Here's a sampler of a few things that have been in VHDL since
at least 1987:

Added to Verilog in 2001:
- generate
- multi-dimensional arrays
- signed vector arithmetic
- configurations (nasty in both languages!)

Added to SystemVerilog in 2003-2005:
- packages
- structs (a.k.a. records)
- non-trivial data types (e.g. arrays, structs) on ports
- enumeration types
- unresolved signals (compiler errors on multiple drivers)
- non-trivial data types as subprogram arguments
- reference arguments to subprograms
- default values for input arguments
- inquiry functions to determine the dimensionality and bounds of an array

May possibly be added to SystemVerilog in 2012:
- parameterized subprograms (like VHDL unconstrained arrays)
- subprogram overloading

Of course, there's also a ton of stuff that Verilog can do
but VHDL can't. And it's rarely a good idea to judge any
language by a laundry-list of its feature goodies. But
it is NEVER a good idea to trivialize the discussion.
 

Patrick Maupin

Of course, there's also a ton of stuff that Verilog can do
but VHDL can't.  And it's rarely a good idea to judge any
language by a laundry-list of its feature goodies.

Agreed completely.
 But it is NEVER a good idea to trivialize the discussion.

Well, you have to talk to others about that. Somebody else brought up
the stupid Verilog contest, and David, apparently agreeing with some
sentiment there, said:
Another famous contest involved a C and Ada comparison. It took the Ada
team more than twice as long as the C team to write their code, but it took
the C team more than ten times as long to debug their code.

To which the only sane answer was my flippant "it's not the
same" (and, of course, the very first thing that's not the same is who
the purported winner was). I don't think either contest is worth a
hoot, but I do find it interesting that you found it necessary to pen
a long response to my flippant response, yet found it acceptable to
ignore the statement about the C vs. Ada contest.

Regards,
Pat
 

Jonathan Bromley

I do find it interesting that you found it necessary to pen
a long response to my flippant response, yet found it acceptable to
ignore the statement about the C vs. Ada contest.

I try to write on things I know something about :)

I am painfully familiar with the Cooley Verilog-vs-VHDL nonsense,
but know nothing about that C-Ada contest.

In any case, I wasn't particularly responding to you. I took an
opportunity to say something I've wanted to say for a long time
about an exceedingly faulty part of the HDL mythology.
 

David Brown

There are two issues to consider. One is the relative times of writing
the code vs. debugging it; i.e., if writing took 5 hours and debugging 10
minutes (unlikely) then C still wins. Which brings up the second issue:
it is very likely that the programming contest involved a "larger"
design to be finished. If I am remembering correctly, the RTL was an
async-reset, synchronously loadable up-down counter, which is a "smallish"
project. If the programming contest involved something more "involved", it
still points to the benefit of strong typing and other features of
Ada/VHDL etc.

The contest in question was a substantial programming project over a
longer period - weeks rather than hours. I don't remember how much time
was actually spent on debugging rather than coding, but it certainly
worked out that the Ada team were finished long before the C team.
 

David Brown

Agreed completely.


Well, you have to talk to others about that. Somebody else brought up
the stupid Verilog contest, and David, apparently agreeing with some
sentiment there, said:

I wasn't agreeing with the validity of the Verilog/VHDL contest,
although I suppose by not saying that, it looked like I agreed with it.
It would have been more useful if I'd given a little more detail. The
Ada / C contest was over a much longer time scale, using a real-world
project - and thus is a much more valid contest (though obviously, like
any test or benchmark, you can't apply it thoughtlessly to other contexts).

It was an indication of a real case where stronger typing and a generally
stricter compiler and language were demonstrated to give a faster
development time. I can't say whether those results would carry over to a
comparison between VHDL and Verilog, or how much of the result is the
effect of strong typing. But since VHDL is often thought of as being a
similar style of language to Ada, and Verilog is similarly compared to
C, it may be of interest.

I couldn't find references to the study I was thinking of, but I found
one in a similar vein:

<http://www.adaic.org/whyada/ada-vs-c/cada_art.html#conclusion>

Of course, I haven't scoured the net looking for enough articles to give
a balanced view here. So if my comments here are of interest or use to
anyone, that's great - if not, I'll not complain if you ignore them!
 

David Brown

For this usage, a good testbench will catch more bugs and make strong
typing and bounds checking redundant.

A testbench does not make checks redundant, for two reasons. First, the
earlier in the process you find mistakes, the better - it's
easier to find the cause of a mistake, and it's faster to find them,
fix them, and re-compile.

Secondly, a testbench does not check everything. It is only as good as
the work put into it, and can be flawed in the same way as the code
itself. A testbench's scope for non-trivial projects is always limited
- it is not practical to test everything. If you have some code that
has a counter, your testbench may not go through the entire range of the
counter - perhaps doing so would take simulation times of years. Your
testbench will then not do bounds checking on the counter.
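The counter point can be made concrete with a small model (Python here for illustration; the names and the bug are invented): a register that can escape its 8-bit range sails through a short directed testbench, and the out-of-range state is only ever seen if the stimulus happens to drive the counter far enough.

```python
MAX = 255  # an 8-bit counter register

class Counter:
    """Counter with a missing wrap: the value can escape its range."""
    def __init__(self):
        self.value = 0

    def tick(self):
        # BUG: should be (self.value + 1) % (MAX + 1)
        self.value = self.value + 1

def run_testbench(counter, cycles):
    """Drive the counter, applying the range check a strongly typed
    language would perform automatically on every assignment."""
    for _ in range(cycles):
        counter.tick()
        assert 0 <= counter.value <= MAX, "out of range: %d" % counter.value
```

A 100-cycle run passes despite the bug; only a run long enough to reach the 256th increment trips the check. When reaching the corner would take years of simulation, the testbench simply is not doing the bounds checking for you.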

The old joke about Ada is that when you get your code to compile, it's
ready to ship. I certainly wouldn't go that far, but testing is
something you do in cooperation with static checking, not as an alternative.


mvh.,

David
 

Symon

Pat,
If your email client was less agile and performed better 'typing
checking' you wouldn't have sent this blank post.
HTH, Syms. ;-)
 

David Brown

Was it John McCormick's model railroad?

http://www.adaic.org/atwork/trains.html

Possibly not - the C students apparently never did deliver.

I suppose there have been many such studies through the ages. The one I
remember was a real project rather than a student project (I think it
was a large commercial company, but it may have been some sort of
government organisation).
 

Patrick Maupin

Pat,
If your email client was less agile and performed better 'typing
checking' you wouldn't have sent this blank post.
HTH, Syms. ;-)

Absolutely true!

But it keeps me young trying to keep up with it.

Pat
 

whygee

rickman said:
Hmmm... The date on that article is 04/07/2003 11:28 AM EDT. Seven
years later I still don't see any sign that VHDL is going away... or
did I miss something?

I had the same thought.

Furthermore, it was about only one company that wanted to push one
technology. Bold statements followed, and... 7 years later,
VHDL and Verilog are still the vi vs. Emacs of EDA.
yg
 

rickman

Seatbelts may save lives, but statistically many other safety
improvements don't. When people know that their car has air bags,
they compensate and drive less safely. (Corner a little faster, etc.)
Enough to mostly remove the life-saving effect of the air bags.

Are you making this up? I have never heard that any of the other
added safety features don't save lives overall. I have heard that
driving a sportier car does allow you to drive more aggressively, but
this is likely not actually the result of any real analysis, but just
an urban myth. Where did you hear that air bags don't save lives
when everything is considered?

It does seem likely that people will let down their guard and
code more sloppily knowing that the compiler will catch errors.

If you can show me something that shows this, fine, but otherwise this
is just speculation.

One of my least favorites is the Java check on variable initialization.
If the compiler can't be sure that a variable is initialized, then it is
a fatal compilation error. There are just too many cases that
the compiler can't get right.
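The Java rule being complained about here is "definite assignment": the compiler rejects any read of a variable it cannot prove was assigned on every path. A Python analogue (a hypothetical function, not from the thread) shows what the check guards against - except Python only catches it at run time, on the path that actually fails:

```python
def pick(flag):
    """Assigns 'result' on only one branch."""
    if flag:
        result = 1
    # If flag is false, 'result' was never assigned.  Java's definite-
    # assignment check would reject this function at compile time;
    # Python raises UnboundLocalError at run time instead.
    return result
```

The annoyance Glen describes is the flip side: when the programmer knows a branch is always taken but the compiler cannot prove it, the same check rejects perfectly correct code.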

I saw a warning the other day that my VHDL signal initialization "is
not synthesizable". I checked, and it was appropriately initialized on
async reset; it was just complaining that I also used an
initialization in the declaration to keep the simulator from giving me
warnings in library functions. You just can't please everyone!

Then again, I had to make a second trip to the customer yesterday
because of an output that got disconnected in a change, and I didn't
see the warning in the ocean of warnings and notes that the tools
generate. Then I spent half an hour going through all of it in detail
and found a second disconnected signal. Reminds me of the moon
landing, where a warning about a loss of sync kept recurring so much
that it overloaded the guidance computer and they had to land
manually. TMI!

Rick
 

rickman

You can "call BS" all you want, but the fact that you don't change the
way you code in Verilog vs. VHDL or C vs. Java indicates that your
experiences are antithetical to mine, so I have to discard your
data point.

Regards,
Pat

That is certainly a great way to prove a theory. Toss out every data
point that disagrees with your theory!

Rick
 

rickman

The benefits of a "strongly typed" language, with bounds checks, etc.
are somewhat different between the first time you write/use the code
and the Nth time you reuse and revise it. Strong typing and bounds
checking let you know quickly the possibly hidden side effects of
making changes in the code, especially when it may have been a few
days/weeks/months since the last time you worked with it.

A long time ago there was a famous contest for designing a simple
circuit in Verilog vs. VHDL to see which language was better. The
requirements were provided on paper, and the contestants were given an
hour or two (I don't remember how long, but it was certainly not even a
day), and whoever got the fastest and the smallest (two winners)
correct synthesized circuit won for their chosen language. Verilog won
both, and I don't think VHDL even finished.

Maybe this was repeated, but the first time they tried this *NO ONE*
finished in time, which is likely much more realistic compared to real
assignments in the real world. If you think it will take a couple of
hours, allocate a couple of days!

Rick
 

Bernd Paysan

rickman said:
People say that strong typing catches bugs, but I've never seen any
real proof of that. There are all sorts of anecdotal evidence, but
nothing concrete.

My practical experience is that strong typing creates another class of bugs,
simply by making things more complicated. I last saw VHDL in use more
than 10 years ago, but the typical pattern was that a designer wanted a bit
vector and created a subranged integer instead. The two seem to be identical,
but aren't. If you increment the subranged integer, it will stop simulation on
overflow; if you increment the bit vector, it will wrap around. My coworker
who did this subranged-integer stuff quite a lot ended up with code like

if foo = 15 then foo <= 0; else foo <= foo + 1; end if;

And certainly, all those lines had started out as

foo <= foo + 1;

and were only "fixed" later when the simulation crashed.

The good news is that the synthesis tool really generates the bit-vector
logic for both, so all those simulator crashes were only false alarms.
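The two behaviors described above can be modeled directly. This is a Python sketch with invented class names (not MyHDL's actual API): the subranged integer traps on overflow the way a VHDL simulator stops, while the bit vector wraps silently the way the synthesized hardware does.

```python
class SubrangedInt:
    """Models a VHDL subranged integer: overflow stops simulation."""
    def __init__(self, lo, hi, val=0):
        assert lo <= val <= hi
        self.lo, self.hi, self.val = lo, hi, val

    def incr(self):
        if self.val + 1 > self.hi:
            raise OverflowError("value out of range %d to %d" % (self.lo, self.hi))
        self.val += 1

class BitVector:
    """Models an n-bit vector: increment wraps around silently."""
    def __init__(self, width, val=0):
        self.width, self.val = width, val

    def incr(self):
        self.val = (self.val + 1) & ((1 << self.width) - 1)
```

Incrementing a `SubrangedInt(0, 15)` sixteen times raises - Bernd's simulation crash - while the same sequence on a `BitVector(4)` quietly returns to 0, which is what the synthesized logic does in both cases.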
 

glen herrmannsfeldt

Are you making this up? I have never heard that any of the other
added safety features don't save lives overall. I have heard that
driving a sportier car does allow you to drive more aggressively, but
this is likely not actually the result of any real analysis, but just
an urban myth. Where did you hear that air bags don't save lives
when everything is considered?

I believe that they still do save lives, but by a smaller
factor than one might expect. I believe the one that I saw
was not quoting air bags, but anti-lock brakes.

The case for air bags was mentioned by someone else -- that some
believe that they don't need seat belts if they have air bags.
Without seat belts, though, you can be too close to the air bag
when it deploys, and get hurt by the air bag itself. For that
reason, they now use slower air bags than they used to.

The action of anti-lock brakes has a more immediate feel while
driving, and it seems likely that many will take that into account
while driving. I believe that there is still a net gain, but
much smaller than would be expected.

-- glen
 

rickman

Secondly, a testbench does not check everything.  It is only as good as
the work put into it, and can be flawed in the same way as the code
itself.  

I was listening to a lecture by a colleague once who indicated that you
don't need to use static timing analysis since you can use a
timing-based simulation! I queried him on this a bit, and he seemed to
think that you just needed to have a "good enough" testbench. I was
incredulous about this for a long time. Now I realize he was just a
moron^H^H^H^H^H^H^H ill-informed!

Rick
 
