VITAL vs. Verilog simulation runtime

Jon

Hi all,
It's been a long while since I've done any timing simulations. I know
that back in the late 90's Verilog simulations ran significantly
faster; I had cases where a simulation running under VITAL took
three-quarters of a day and the same simulation running under Verilog
took an hour or two. Has anything changed in the simulation area?
I am debating whether to do my timing simulations in VHDL VITAL or
Verilog.
Any input would be greatly appreciated.

Thanks,

Jon
 
Charles Bailey

Jon said:
[snip]
I am debating whether to do my timing simulations in VHDL VITAL or
Verilog.
[snip]
We recently completed the design of a large chip and did timing
simulation with Verilog. We had been exclusively a VHDL shop until we
tried to do full-chip gate-level simulations on this large chip, and it
broke our simulator (ModelSim) on every system we tried. We switched to
Verilog and NC-Sim for gate-level timing simulations and were able to
complete the job. On a previous chip, which was about half the size, we
used ModelSim and VHDL VITAL models for timing simulation, and the
performance was agonizingly slow. With Verilog and NC-Sim we found the
performance for this larger chip to be acceptable. (We couldn't try
ModelSim with Verilog because we didn't have a license for Verilog.)
 
john jakson

Charles Bailey said:
[snip]
On a previous chip, which was about half the size, we
used ModelSim and VHDL VITAL models for timing simulation, and the
performance was agonizingly slow. With Verilog and NC-Sim we found the
performance for this larger chip to be acceptable.
[snip]


Great message, since it confirms what most of us Verilog users have
been saying for years. I thought, though, that ModelSim was
language-neutral on the inside.

Anyway, there is another choice I use to get far greater speed, and that
is to use C. I know, I know, that annoys the heck out of the anti-C
people, but I write code in structured Verilog first for a fairly flat
hierarchy and translate it into the equivalent RTL C line by line. Since
Verilog & C have a lot of similar syntax, this translation isn't too
difficult, but it sure looks ugly on the C side. When the CPU design
model runs in VC6 it runs at about 1 MIPS with almost all logging turned
off on a 2400xp. I can't wait to see what the Verilog performance will
be, but I expect it to be one or two orders of magnitude slower. The
Verilog does go through static timing and synthesis (WebPACK), though,
for a rough performance estimate. Of course this only really works well
with a single-cycle, pure digital flow, and I am targeting an FPGA
rather than an ASIC.
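
To give a flavour of what I mean, here is a trivial made-up example (not
my real code; the signal names are invented) of the kind of line-by-line
translation:

    /* Verilog (restricted, synthesizable subset):
         assign sum = a + b;
         always @(posedge clk) acc <= acc + sum;
       hand-translated, line by line, into RTL-style C:                 */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t a = 3, b = 4, sum, acc;  /* wires/regs as plain uints */

    static void assigns(void)       /* combinational block, flows downwards */
    {
        sum = a + b;                /* assign sum = a + b;                  */
    }

    static void always_clk(void)    /* one call == one posedge of clk       */
    {
        acc = acc + sum;            /* acc <= acc + sum;                    */
    }

    int main(void)                  /* crude "testbench": run ten cycles    */
    {
        for (int i = 0; i < 10; i++) { assigns(); always_clk(); }
        printf("acc = %u\n", (unsigned)acc);   /* prints acc = 70           */
        return 0;
    }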

Another choice you have is to use Verilog & the PLI with C-based
transaction models (see Zaic etc.), but I wouldn't dream of doing that
myself.

regards

johnjakson_usa_com
 
Jim Lewis

john said:
Great message, since it confirms what most of us Verilog users have
been saying for years.

The performance noted is for VHDL VITAL gate sims vs. Verilog
gate sims. It is not safe to generalize this to Verilog RTL
code vs. VHDL RTL code. I have heard that VHDL RTL code is
about the same performance as Verilog RTL code.

When I proposed that VHDL-200X ditch VITAL and
adopt Verilog gate-level netlists, the answer from the EDA
tool guys surprised me: there is no reason that VITAL cannot
be as fast as Verilog gates; it just has not been optimized
by the EDA vendors.

EDA vendors need a reason to make the investment to accelerate
VITAL. The incentive comes from users, particularly at license
renewal and new-license purchase time. If you think VITAL is an
important part of your design methodology, let your EDA
vendor know.


Cheers,
Jim
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jim Lewis
Director of Training mailto:[email protected]
SynthWorks Design Inc. http://www.SynthWorks.com
1-503-590-4787

Expert VHDL Training for Hardware Design and Verification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
DoesntMatter

Jim said:
The performance noted is for VHDL VITAL gate sims vs. Verilog
gate sims. It is not safe to generalize this to Verilog RTL
code vs. VHDL RTL code. I have heard that VHDL RTL code is
about the same performance as Verilog RTL code.

Absolutely correct!

I once constructed VHDL gate-level models, with timing,
extracted from synthesis libraries.
The models did basically the same as the Verilog models,
for instance:

Z <= A and B after 3 ns; was the model of an AND gate.

And guess what: it ran as fast as the Verilog gate-level models.

Why the @#§!@ would there be a _significant_ difference for modelling the
same functionality?

Jos De Laender
 
Tom Hawkins

john jakson said:
[snip]
Anyway, there is another choice I use to get far greater speed, and that
is to use C. I know, I know, that annoys the heck out of the anti-C
people, but I write code in structured Verilog first for a fairly flat
hierarchy and translate it into the equivalent RTL C line by line. Since
Verilog & C have a lot of similar syntax, this translation isn't too
difficult, but it sure looks ugly on the C side.
[snip]

C is great for simulation, but hand translation is problematic.
(Don't know about you, but I tend to make many mistakes in the
process.)

Have you considered Verilator? I've had quite a bit of success with
it -- I used it awhile back to simulate a Verilog model inside
Simulink. The speed is good, and I know Wilson recently added several
new optimizations to further improve performance.

If you're targeting an FPGA, there's no reason not to consider
Confluence. The auto-generated C models are topologically ordered, so
you don't have to maintain a flat hierarchy. Also, there are no
restrictions on single clocks or signals greater than 32/64 bits.

Typically I write my testbenches in Python to drive the CF-generated C
models. Python makes it easy to construct a transaction-level
testbench. And because the bulk of the computation is in compiled C,
sim performance is still top notch.
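
For example, the C side of such a split might look roughly like this
(illustrative only; the function names are made up, not what CF actually
generates):

    /* dut.c -- toy transaction-level C interface for a Python testbench.
       Build as a shared library, e.g.  gcc -shared -fPIC -o dut.so dut.c
       and drive it from Python with ctypes.CDLL("./dut.so").            */
    #include <stdint.h>

    static uint32_t regs[256];              /* stand-in for the DUT state */

    void dut_reset(void)                    /* one transaction per call   */
    {
        for (int i = 0; i < 256; i++)
            regs[i] = 0;
    }

    void dut_write(uint32_t addr, uint32_t data)
    {
        regs[addr & 0xffu] = data;
    }

    uint32_t dut_read(uint32_t addr)
    {
        return regs[addr & 0xffu];
    }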

-Tom
 
Steven Sharp

DoesntMatter said:
Why the @#§!@ would there be a _significant_ difference for modelling the
same functionality?

Because VHDL places constraints on evaluation order that limit the
optimizations that can be performed. Verilog allows the simulator
greater freedom. For example, multiple levels of zero-delay gates
can be collapsed into a single super-gate evaluation in Verilog.
VHDL requires preserving the original number of delta cycles of delay
in propagating through those levels.
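
A crude made-up illustration of what that collapsing looks like (not code
from any particular simulator):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t a = 1, b = 1, c = 0, e = 1;
        uint8_t n1, n2, d_delta, d_collapsed;

        /* delta-cycle style: three zero-delay gates, each output updated
           in its own evaluation pass (the ordering VHDL must preserve)   */
        n1 = a & b;                       /* delta 1 */
        n2 = n1 | c;                      /* delta 2 */
        d_delta = n2 ^ e;                 /* delta 3 */

        /* collapsed "super-gate": the whole cone evaluated at once,
           the kind of freedom a Verilog simulator is allowed to take     */
        d_collapsed = ((a & b) | c) ^ e;

        printf("%u %u\n", d_delta, d_collapsed);  /* same value either way */
        return 0;
    }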

Such optimizations may or may not have significant effects in actual
simulation, but they do have the potential in theory.
 
john jakson

Tom Hawkins said:
C is great for simulation, but hand translation is problematic.
(Don't know about you, but I tend to make many mistakes in the
process.)

It's a highly restricted Verilog assign/always subset, so it's fully
synthesizable and not too difficult to write the same in C. Of course the
assigns are written in an assign() {} function and usually flow downwards.
The always() {} reg <='s are written in reverse time order to preserve
equivalent cycle behaviour. In both functions, wires/regs (uints really)
are always single writes. Now if only C had unlimited-width ints and would
let me use {,,} <= syntax, I'd be in HDL heaven.
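
For example (a made-up two-register case, not my actual code), the
reverse-time-order trick is just:

    #include <stdio.h>
    #include <stdint.h>

    static uint32_t d, q1, q2;      /* single-writer "regs", plain uints      */

    /* Verilog:  always @(posedge clk) begin  q1 <= d;  q2 <= q1;  end
       In C the nonblocking assigns are written in reverse time order, so
       each register still reads the *old* value of its source and the
       cycle behaviour matches the Verilog.                                   */
    void always_posedge_clk(void)
    {
        q2 = q1;                    /* q2 <= q1;  (written first)             */
        q1 = d;                     /* q1 <= d;                               */
    }

    int main(void)
    {
        d = 7;
        always_posedge_clk();       /* cycle 1: q1 becomes 7, q2 still 0      */
        always_posedge_clk();       /* cycle 2: q2 gets the old q1 (7)        */
        printf("q1=%u q2=%u\n", (unsigned)q1, (unsigned)q2);  /* q1=7 q2=7    */
        return 0;
    }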

In the past I've hacked up and maintained a C => Verilog flow, but that
reduced the C HDL to a vulgar HDL. It was obvious that Verilog => C would
be a much better result, and that worked quite well, only losing 2-5x in
raw performance relative to pure C.
Tom Hawkins said:
Have you considered Verilator? I've had quite a bit of success with
it -- I used it awhile back to simulate a Verilog model inside
Simulink. The speed is good, and I know Wilson recently added several
new optimizations to further improve performance.

I am aware of it but haven't followed the few OSS EDA tools. If I were
using Linux I might be more inclined to try these out. I should pay
more attention to those that are good.

Tom Hawkins said:
If you're targeting an FPGA, there's no reason not to consider
Confluence. The auto-generated C models are topologically ordered, so
you don't have to maintain a flat hierarchy. Also, there are no
restrictions on single clocks or signals greater than 32/64 bits.

I took a look at CF, but I didn't find the syntax to my taste, even
though the other feature of generating x,y,z from it is interesting.

The other reason for using C is to directly master the UCF placement
file, which is just a fancy printf fn() {}. Of course any language will
do, but I am C/Verilog focused. Since I am doing CPU design I of course
have to do a lot of other stuff in C, like working on compilers & test
programs at the same time. When the CPU is done I will be back on my
original lcc-based compiler with added Verilog/Occam support, but that's
a way off.
Tom Hawkins said:
Typically I write my testbenches in Python to drive the CF-generated C
models. Python makes it easy to construct a transaction-level
testbench. And because the bulk of the computation is in compiled C,
sim performance is still top notch.

-Tom

Well, any EE who writes SW in any language has a leg up on the pure
HW guys, especially if you're a tool builder too. I think in the end
that's more important. If the tools you build are really useful, let
them go. I don't, because I don't want the hassle of updates, docs &
support, but perhaps I will in the future.
 
