Complex testbench design strategy

Eli Bendersky

Hello,

I'm now considering the design of a rather complex testbench, and want
to consult the dwellers of this list on the best strategy to adopt.

The testbench is for a complex multi-functional FPGA. This FPGA has
several almost unrelated functions, which it makes sense to check
separately. So I'm in doubt as to how to structure my testbench. As I
see it, there are two options:

1) One big testbench for everything, and some way to sequence the
tests of the various functions
2) Separate testbenches, one for each function

Each approach has its pros and cons. Option (2) sounds the most
logical, at least at first, but when I begin pursuing it, some
problems emerge. For example, too much duplication between testbenches
- in all of them I have to instantiate the large top-level entity of
the FPGA, write the compilation scripts for all the files, and so on.
In option (1) this would have to be done only once. On the other hand,
option (2) has the clear advantage over (1) of a clean separation of
tested functionality.

I'm sure this issue haunts every verification engineer at one time or
another. What approach do you use - 1, 2 or something else, and why ?

Thanks in advance
 
Nicolas Matringe

Eli Bendersky wrote:
Hello,

I'm now considering the design of a rather complex testbench, and want
to consult the dwellers of this list on the best strategy to adopt.

The testbench is for a complex multi-functional FPGA. This FPGA has
several almost unrelated functions, which it makes sense to check
separately. So I'm in doubt as to how to structure my testbench. As I
see it, there are two options:

1) One big testbench for everything, and some way to sequence the
tests of the various functions
2) Separate testbenches, one for each function

Hello
I once had the same kind of task. I ended up with one big testbench with
a very long list of generic parameters and conditional component
instantiations.
Compilation is done once and for all; you just have to maintain the
simulation scripts that set the generic parameters for each partial
simulation.
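
As an illustration (not from the original post), a minimal sketch of such a
testbench: top-level generics select which part of the test runs, conditional
instantiation is done with if-generate, and the simulation script sets the
generics at load time. The names and the vsim -G syntax shown are just an
example.

entity tb_top is
  generic (
    test_uart : boolean := true;    -- hypothetical per-function enables
    test_ddr  : boolean := false
  );
end entity tb_top;

architecture sim of tb_top is
begin
  -- the one-and-only DUT instantiation, clock and reset generators go here

  g_uart_test : if test_uart generate
    -- UART stimulus/checker processes or BFM instantiation
  end generate g_uart_test;

  g_ddr_test : if test_ddr generate
    -- DDR stimulus/checker processes or BFM instantiation
  end generate g_ddr_test;
end architecture sim;

-- per-function simulation scripts then do, for example:
--   vsim -Gtest_uart=true  -Gtest_ddr=false work.tb_top
--   vsim -Gtest_uart=false -Gtest_ddr=true  work.tb_top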

Nicolas
 
kennheinrich

I'm now considering the design of a rather complex testbench [...]

Nice description of a problem that, as you say, is common to
almost any decent-sized verification project.

Almost any advice I could offer would be more-or-less an
echo of the sensible, practical suggestions in Janick
Bergeron's well-known "Writing Testbenches" book.  
I think he misses a few tricks about the best way for
the upper levels of a TB to interact with its
bus-functional models (BFMs), but otherwise it's
pretty much on the money.

The first edition of that book covers both VHDL and Verilog.
The second edition adds extensive discussion of Vera and
'e'.  Note that there is yet a third version of the book
which is SystemVerilog-focused; I'm guessing you don't
want that :)

The testbench is for a complex multi-functional FPGA. This FPGA has
several almost unrelated functions, which it makes sense to check
separately.

Yes, but most of the really "interesting" bugs are likely to
be related to the interactions between those functions...

1) One big testbench for everything, and some way to sequence the
tests of the various functions

I'd certainly go for that.  Better still, build a "test harness" -
the structural guts of the TB, containing your DUT instance and
all its life-support systems such as clock and reset generators,
external component models, BFMs for testbench stimulus and
monitoring, and so on.  Take very, very great care to give that
test harness the cleanest, simplest possible interface on its
ports.  Ideally, there should be one BFM for each major
interface of the DUT, and each BFM should expose only two
record ports on the TB side - one "request" port and one
"response" port.  You don't need handshakes at that level;
BFMs and TB can both use the 'transaction attribute on those
record signals to detect each new request or response.  
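
As an illustration (not from the original post), a minimal sketch of one such
BFM with made-up record and signal names: the testbench drives the request
record, the BFM drives the response record, and both sides wait on the
'transaction attribute so a new transaction is seen even when its value
equals the previous one.

library ieee;
use ieee.std_logic_1164.all;

package uart_bfm_pkg is
  type uart_request_t is record      -- what the testbench asks the BFM to do
    data : std_logic_vector(7 downto 0);
  end record;
  type uart_response_t is record     -- what the BFM reports back
    data  : std_logic_vector(7 downto 0);
    valid : boolean;
  end record;
end package uart_bfm_pkg;

library ieee;
use ieee.std_logic_1164.all;
use work.uart_bfm_pkg.all;

entity uart_bfm is
  port (
    request  : in  uart_request_t;   -- driven by the top-level testbench
    response : out uart_response_t;  -- read by the top-level testbench
    txd      : out std_logic;        -- pin-level side, wired to the DUT
    rxd      : in  std_logic
  );
end entity uart_bfm;

architecture sim of uart_bfm is
begin
  serialise : process
  begin
    wait on request'transaction;     -- fires on every assignment, even of an equal value
    -- serialise request.data onto txd here (start bit, data bits, stop bit)
  end process serialise;
end architecture sim;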

The top level of this test harness is likely to be
horrible, with all the DUT-connection signals, DUT
instance and a pile of other big things instantiated too.
But you write it exactly once for the whole project.

You can now instantiate this test harness into a top-level
testbench containing only a few processes that sequence the
activity of the various BFMs.  With luck, this top-level
TB will have only a few hundred lines of code and will be
fairly easy to modify to create alternative test sequences.
If you want to exercise your blocks one at a time, you can
easily write a top-level TB that leaves most of the BFMs
idle and merely exercises just one of them.  Your top level
TB can easily randomize some of its behavior if you wish;
both the contents and the timing of the transactions (record
values) that you send to the BFMs can be adjusted by
creative use of ieee.math_real.uniform().
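
And a small sketch of the randomization idea (again with made-up names):
ieee.math_real.uniform supplies a pseudo-random real in (0,1) that can be
scaled to vary both the idle time between transactions and their contents.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use ieee.math_real.all;

entity tb_random_stim is
end entity tb_random_stim;

architecture sim of tb_random_stim is
  signal req_data : std_logic_vector(7 downto 0);
begin
  stimulus : process
    variable seed1 : positive := 1;   -- seeds for uniform(); any positive values
    variable seed2 : positive := 7;
    variable r     : real;
  begin
    for i in 1 to 100 loop
      uniform(seed1, seed2, r);
      wait for integer(r * 1000.0) * 1 ns;   -- random idle gap, roughly 0..1000 ns
      uniform(seed1, seed2, r);
      req_data <= std_logic_vector(to_unsigned(integer(r * 255.0), 8));  -- random payload
    end loop;
    wait;
  end process stimulus;
end architecture sim;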

The devil is in the detail, of course, but something like
this works pretty well for me and it's what we recommend in
our advanced VHDL verification classes.

I think you can find examples of similar ideas in Jim Lewis's
training materials too.
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
(e-mail address removed)  http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.

I also recommend a structure similar to what Jonathan describes above.
If you haven't already used the toggling transaction trick that
Jonathan mentions, I'd recommend getting a copy of Bergeron's book and
reading it through (at least the first parts and the VHDL parts). It's
well worth it. I can count on one hand those books that I sat down
with, read through from front to back, and found that they made the
light bulb go off, creating a lasting impression, and have guided me
in the years since. This was one of those books for me.

Another benefit of a single large test harness over a bunch of
individual testbenches for submodules is that it's also easy to plug
in the gate-level code for final verification of the whole FPGA if you
like. I'm not sure if by "separate testbenches, one for each function"
you were planning to expose the top-level FPGA interface or just the
sub-function interface. It would be a shame to lose the ability to
verify the whole FPGA for want of a decent test harness.

One potential drawback of the mother-of-a-test-harness-monster-code,
though, can be efficiency. Lighting up all sorts of PLL/DCM and high-
speed SERDES components in simulation can be pretty slow if you get to
gate level sims, especially if you have to light up every single one
in a large FPGA just to test one of the ten subfunctions. At the RTL
level, some very creative use of abstract high level models for, say
SERDES, can speed up runtime drastically while still letting you
validate the other 99% of the logic inside the guts of the chip.

Good luck!

- Kenn
 
Eli Bendersky

I once had the same kind of task. I ended up with one big testbench with
a very long list of generic parameters and conditional component
instantiations.
Compilation is done once and for all; you just have to maintain the
simulation scripts that set the generic parameters for each partial
simulation.

Can you elaborate on how you set up generics at runtime for partial
simulations? I thought generics could only be set at compile/load time
and that instantiations are static at runtime. This is my biggest
problem with the monolithic testbench approach.

Eli
 
Eli Bendersky

I'm now considering the design of a rather complex testbench [...]

Nice description of a problem that, as you say, is common to
almost any decent-sized verification project.

Almost any advice I could offer would be more-or-less an
echo of the sensible, practical suggestions in Janick
Bergeron's well-known "Writing Testbenches" book.  
I think he misses a few tricks about the best way for
the upper levels of a TB to interact with its
bus-functional models (BFMs), but otherwise it's
pretty much on the money.

Thanks - I'll definitely order it. Any other good books you can
recommend on writing testbenches ?

Yes, but most of the really "interesting" bugs are likely to
be related to the interactions between those functions...

Yes, this is why I planned each function-testbench to instantiate the
whole top-level DUT, so as not to lose those valuable internal
interfaces. When writing TBs, we always attempt to treat the DUTs as
black boxes, without getting inside. It really helps to test the whole
path from the top level of the FPGA to internal functions.

I'd certainly go for that.  Better still, build a "test harness" -
the structural guts of the TB, containing your DUT instance and
all its life-support systems such as clock and reset generators,
external component models, BFMs for testbench stimulus and
monitoring, and so on.  Take very, very great care to give that
test harness the cleanest, simplest possible interface on its
ports.  Ideally, there should be one BFM for each major
interface of the DUT, and each BFM should expose only two
record ports on the TB side - one "request" port and one
"response" port.  You don't need handshakes at that level;
BFMs and TB can both use the 'transaction attribute on those
record signals to detect each new request or response.  

The top level of this test harness is likely to be
horrible, with all the DUT-connection signals, DUT
instance and a pile of other big things instantiated too.
But you write it exactly once for the whole project.

You can now instantiate this test harness into a top-level
testbench containing only a few processes that sequence the
activity of the various BFMs.  With luck, this top-level
TB will have only a few hundred lines of code and will be
fairly easy to modify to create alternative test sequences.
If you want to exercise your blocks one at a time, you can
easily write a top-level TB that leaves most of the BFMs
idle and merely exercises just one of them.  Your top level
TB can easily randomize some of its behavior if you wish;
both the contents and the timing of the transactions (record
values) that you send to the BFMs can be adjusted by
creative use of ieee.math_real.uniform().
<snip>

A very interesting idea! While I mused about something similar, I
couldn't think it through to a full solution on my own. Let me see if
I get you straight. You propose to 'dress' my DUT in a harness
convenient for simulation, with simple transaction-like record ports
instead of complex signal interfaces. So far I understand.
But then, for each tested function, I suppose I should have a separate
TB that instantiates this harness, right ?

This makes it more similar to method (2) in my original message, with
some of the same problems. For instance, for each of the testbenches I
will have to compile all the DUT files separately, which gives some
duplication in the compile scripts. However, much of the duplication of
instantiating the DUT with its 400 pins over and over again is saved,
because the harness presents a very simple interface to the outside -
one that can be only partially instantiated.

Did I understand you correctly ?

Thanks for the advice!
Eli
 
Mike Treseler

Eli said:
I'm now considering the design of a rather complex testbench, and want
to consult the dwellers of this list on the best strategy to adopt.

Interesting question.

This FPGA has
several almost unrelated functions, which it makes sense to check
separately. So I'm in doubt as to how to structure my testbench. As I
see it, there are two options:

1) One big testbench for everything, and some way to sequence the
tests of the various functions

I certainly have to be able to wiggle all the inputs
and watch all the outputs, so I don't see any way
to get around this one. And as long as I have
to do this anyway, it is probably a good idea
to make the top testbench drill as deep as my schedule allows.

2) Separate testbenches, one for each function

I don't have a final answer for this one.
Design time often alternates between top down
and bottom up, and an entity/module is the
only synthesizable object that a testbench can test.
I design some modules, I reuse or buy others.
One module may be a simple synchronizer.
Another might be an entire front end.

I'm sure this issue haunts every verification engineer at one time or
another. What approach do you use - 1, 2 or something else, and why ?

In the "something else" category, I might add:

1. Assertion testing of packaged functions.
If I can extract a VHDL function from a complex expression,
I can verify this much out of time by testing the package
(see the sketch after the links below).
I also have the option of reusing these pretested functions.
I might even have some useful assertions to make about constants.

2. Avoid text files, custom languages, BFLs etc.
VHDL has functions and procedures that cover this ground.
VHDL does run out of gas at the higher level verification
abstractions, but I prefer to make do with vanilla VHDL.

3. I like the discussion in these papers.
http://enpub.fulton.asu.edu/cse517/bfm.pdf
http://www.syncad.com/paper_coding_techniques_icu_sep_2003.htm
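
To illustrate point 1 above with a sketch (the package and the parity_of
function are made up, not from the post): a pure function extracted into a
package can be checked with plain assertions in zero simulation time.

-- Hypothetical package holding a function extracted from the RTL.
library ieee;
use ieee.std_logic_1164.all;

package util_pkg is
  function parity_of (v : std_logic_vector) return std_logic;
end package util_pkg;

package body util_pkg is
  function parity_of (v : std_logic_vector) return std_logic is
    variable p : std_logic := '0';
  begin
    for i in v'range loop
      p := p xor v(i);
    end loop;
    return p;
  end function;
end package body util_pkg;

-- Package-level testbench: pure assertions, no clocks, runs in zero time.
library ieee;
use ieee.std_logic_1164.all;
use work.util_pkg.all;

entity test_util_pkg is
end entity test_util_pkg;

architecture sim of test_util_pkg is
begin
  process
  begin
    assert parity_of("0000") = '0' report "parity_of(0000) wrong" severity error;
    assert parity_of("0001") = '1' report "parity_of(0001) wrong" severity error;
    assert parity_of("1111") = '0' report "parity_of(1111) wrong" severity error;
    report "util_pkg tests done" severity note;
    wait;
  end process;
end architecture sim;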

-- Mike Treseler
 
Mark McDougall

Eli said:
1) One big testbench for everything, and some way to sequence the
tests of the various functions

In my experience, that's the way to go. You always end up with less
duplication, and it's easier to maintain changes which might otherwise
require synchronisation between the various separate testbenches.

Another mechanism I employed - for both synthesis and simulation - is to
define a set of constants in a global package, assign them in the
package body, and use them to enable/disable various (unrelated)
sections of the design (using if-generate, for example). You can then
selectively build various combinations of the design to simplify/speed
up simulation and/or synthesis.

In this way you could, for example, use the exact same testbench framework
but have several simulation project files with unique "global package
bodies" that build different parts of the design for each
functionally-unrelated testbench script.
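
A minimal sketch of that mechanism (names made up, not from the post): the
constants are deferred, i.e. declared without a value in the package and
given values only in the package body, so swapping or recompiling just the
body selects what gets built.

-- Global configuration package: constants are deferred (no value here).
package build_cfg is
  constant include_serdes : boolean;
  constant include_ddr    : boolean;
end package build_cfg;

-- One package body per simulation/synthesis project; recompiling only this
-- body changes which parts of the design are built.
package body build_cfg is
  constant include_serdes : boolean := false;
  constant include_ddr    : boolean := true;
end package body build_cfg;

-- Usage inside the design (assumes a hypothetical serdes_wrapper entity):
--   use work.build_cfg.all;
--   ...
--   g_serdes : if include_serdes generate
--     u_serdes : entity work.serdes_wrapper port map ( ... );
--   end generate g_serdes;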

Regards,
 
Eli Bendersky

In my experience, that's the way to go. You always end up with less
duplication, and it's easier to maintain changes which might otherwise
require synchronisation between the various separate testbenches.

Another mechanism I employed - for both synthesis and simulation - is to
define a set of constants in a global package, assign them in the
package body, and use them to enable/disable various (unrelated)
sections of the design (using if-generate, for example). You can then
selectively build various combinations of the design to simplify/speed
up simulation and/or synthesis.

In this way you could, for example, use the exact same testbench framework
but have several simulation project files with unique "global package
bodies" that build different parts of the design for each
functionally-unrelated testbench script.

Sounds like an interesting approach that can save simulation/synthesis
time. One problem I see with it is that the DUT is not always under
our control. As far as I'm concerned, it's sometimes just a black box
I can assume nothing about except its spec.

Eli
 
Martin Thompson

One potential drawback of the mother-of-a-test-harness-monster-code,
though, can be efficiency. Lighting up all sorts of PLL/DCM and high-
speed SERDES components in simulation can be pretty slow if you get to
gate level sims, especially if you have to light up every single one
in a large FPGA just to test one of the ten subfunctions. At the RTL
level, some very creative use of abstract high level models for, say
SERDES, can speed up runtime drastically while still letting you
validate the other 99% of the logic inside the guts of the chip.

Indeed.

I have several "layers" of TB, so I can run a restricted set of
regression tests on (say) the algorithmic part of my design, which
presents a record for its configuration to the encompassing element.
I can then sidestep the tedious setup of all the registers (which is
through SPI) and just concentrate on validating the algorithm bit.

The "right-answers" for the algorithm testbench are generated from the
Matlab environment in which the algorithm is developed. So new
algorithmic regression tests are very easily added as new situations
are discovered in "algorithm-land".

The SPI regs have their own TB to check their operation in isolation.

Large lumps of the algorithm TB are then instantiated in the higher-level
testbenches, which check that I've wired them up right, etc. This can
be recycled for the post-synth and post-PAR sims, which I run occasionally
(very occasionally in the case of post-PAR, for power analysis data!).

At the top I have a python script which finds all my regression tests
and feeds them to vsim to make sure they all pass. I run this
periodically, as well as before release, but most of the time I am
working down the hierarchy, so my TBs concentrate on testing just the
lower functionality.

Cheers,
Martin
 
KJ

Hello,

The testbench is for a complex multi-functional FPGA. This FPGA has
several almost unrelated functions, which it makes sense to check
separately. So I'm in doubt as to how to structure my testbench. As I
see it, there are two options:

1) One big testbench for everything, and some way to sequence the
tests of the various functions
2) Separate testbenches, one for each function

Each approach has its pros and cons. Option (2) sounds the most
logical, at least at first, but when I begin pursuing it, some
problems emerge. For example, too much duplication between testbenches
- in all of them I have to instantiate the large top-level entity of
the FPGA, write the compilation scripts for all the files, and so on.

I would have a separate testbench for each of these individual
functions; however, they would not be instantiating the top-level FPGA
at all. Each function presumably has its own 'top' level that
implements that function; the testbench for that function should be
putting that entity through the wringer, making sure that it works
properly. Having a single instantiation of the top level of the
entire design to test some individual sub-function tends to
dramatically slow down simulation, which then produces the following
rather severe consequences:

- Given an arbitrary period of wall clock time, less testing of a
particular function can be performed.
- Less testing implies less coverage of oddball conditions.
- When you have to make changes to the function to fix a problem that
didn't happen to show up until the higher level testing was performed,
regression testing will also be hampered by the above two points when
trying to verify that in fixing one problem you didn't create another.

As a basic rule, a testbench should be testing new design content that
occurs roughly at that level, not design content that is buried way
down in the design. At the top level of the FPGA design, the only new
design content is the interconnect to the top level functions and the
instantiation of all of the proper components.

Also, I wouldn't necessarily stop at the FPGA top level either. The
FPGA exists on a PCBA with interconnect to other parts (all of which
can be modelled), and the PCBA exists in some final system that simply
needs power and some user input/output (which again can be modelled).
All of this may seem to be somewhat of a side issue but it's not.
Take, for example, the design of a DDR controller. That controller
should have its own testbench that rather extensively tests all of the
operational modes. At the top level of the FPGA, though, you would
simply instantiate it. Exhaustively testing again at that level would
be pretty much wasted time. A better test would be to model the PCBA
(which would instantiate a memory model from the vendor) and walk a
one across the address and data buses, since that effectively verifies
the interconnect, which at the FPGA top level and the PCBA level is
all the new design content that exists with regard to the DDR
controller.
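
As a sketch of that walking-one check (signal names made up; in the real
PCBA-level testbench the observed side would be the address seen by the
vendor memory model, reached through the FPGA top level and the board
netlist rather than the direct wire used here to keep the sketch
self-contained):

library ieee;
use ieee.std_logic_1164.all;

entity tb_ddr_interconnect is
end entity tb_ddr_interconnect;

architecture sim of tb_ddr_interconnect is
  signal fpga_ddr_addr : std_logic_vector(12 downto 0) := (others => '0');
  signal mem_addr      : std_logic_vector(12 downto 0);
begin
  mem_addr <= fpga_ddr_addr;  -- stands in for FPGA top level + PCB interconnect

  walk_ones : process
    variable expected : std_logic_vector(fpga_ddr_addr'range);
  begin
    for i in fpga_ddr_addr'range loop
      expected      := (others => '0');
      expected(i)   := '1';
      fpga_ddr_addr <= expected;
      wait for 10 ns;  -- let the interconnect settle
      assert mem_addr = expected
        report "address interconnect broken at bit " & integer'image(i)
        severity error;
    end loop;
    report "walking-one address check done" severity note;
    wait;
  end process walk_ones;
end architecture sim;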

As another example, let's say your system processes images and
produces JPEG output and you're writing all the code yourself. You
would probably want some extensive low level testing of some of the
basic low level sub-functions like...
- 1d DCT transform
- 2d DCT transform
- Huffman encoding
- Quantizer

Another level of the design would tie these pieces together (along with
all of the other pieces necessary to make a full JPEG encoder) and test
that the whole thing compresses images properly. At that level, you'd
probably also be extensively varying the image input, the Q tables, the
Huffman tables, and the flow control in and out, and running lots of
images through to convince yourself that everything is working properly.
But you would be wasting time (and wouldn't really be able) to vary all
of the parameters to those lower-level DCT functions, since they would
likely be parameterized for things like input/output data width and
size; in the JPEG encoder it doesn't matter if there is a bug that
could only affect an 11x11-element 2d DCT, since you're using it in an
8x8 environment. That lower-level testbench is the only thing that
could uncover the bug in the 11x11 case, but if you limit yourself to
testbenches that can only operate at some higher level, you will be
completely blind and think that your DCT is just fine until you try to
use it with some customer who wants an 11x11 DCT for whatever reason.

Similarly, re-testing those same things that you do vary at the 'image
compression' level at the next higher level would then be mostly
wasted time that could've been better spent somewhere else. Oftentimes
integration does produce conditions that were not considered at the
lower functional testbench level, but to catch those, what you need is
something that predicts the correct response given the testbench input
and asserts on any differences.

Testing needs to occur at various levels, not just one (even if that one
level is able to effectively disable all the other functions but the
one you are currently interested in). Ideally this testing occurs at every
level where significant new design content is being created. At the
level where 'insignificant' (but still important) new content is
created (like the FPGA top level which simply instantiates and
interconnects), you can generally get more bang for the buck by going
up yet another level (to the PCBA).

Kevin Jennings
 
Eli Bendersky

I would have a separate testbench for each of these individual
functions; however, they would not be instantiating the top-level FPGA
at all. Each function presumably has its own 'top' level that
implements that function; the testbench for that function should be
putting that entity through the wringer, making sure that it works
properly. Having a single instantiation of the top level of the
entire design to test some individual sub-function tends to
dramatically slow down simulation, which then produces the following
rather severe consequences:

- Given an arbitrary period of wall clock time, less testing of a
particular function can be performed.
- Less testing implies less coverage of oddball conditions.
- When you have to make changes to the function to fix a problem that
didn't happen to show up until the higher level testing was performed,
regression testing will also be hampered by the above two points when
trying to verify that in fixing one problem you didn't create another.

As a basic rule, a testbench should be testing new design content that
occurs roughly at that level, not design content that is buried way
down in the design.  At the top level of the FPGA design, the only new
design content is the interconnect to the top level functions and the
instantiation of all of the proper components.

While I agree that this approach is the most suitable, and your JPEG
example is right to the point, it must be stressed that there are some
problems with it.

First of all, the "details devil" is sometimes in the interface
between modules and not inside the modules themselves. Besides, you
can't always know where a module's boundary ends, especially if it's
buried deep in the design.

Second, the guys doing the verification are often not the ones who
wrote the synthesizable code, and hence have little idea of the
modules inside and their interfaces.

Third, the modules inside can change, or their interfaces can change
due to redesign or refactoring, so the testbench is likely to change
more often. On the top level, system requirements are usually more
static.

Eli
 
KJ

First of all, the "details devil" is sometimes in the interface
between modules and not inside the modules themselves.

I disagree; the only 'between modules' construct is the port map to
the modules. The only types of errors that can occur here are:
- Forgot to connect something
- Connected multiple outputs together
- Connected signal 'xyz' to port 'abc' when it should've been connected
to port 'def'

While all of the above three do occur, they are easily found and
fixed early on. Almost any testbench and/or synthesis result will
flag them.

If instead by "details devils" between modules you mean connecting
incompatible interfaces that you thought were compatible (for example,
port 'abc' is an active-high output but it is used by some other
module as an active-low input), then I would suggest that you're not
viewing the interface connection logic in the proper light. That
connection logic is itself its own module and logic set. While the
logic-inversion example is of course trivial and likely not a good
candidate for its own entity/architecture, a more complicated example
requiring more logic (say, an 8-bit to 32-bit conversion) is further
evidence that if any interface conversion logic is required, it should
be seen as its own design and tested as such before integration into
the bigger design.

While I wouldn't bother with a testbench (or even a stand-alone
design) for an interface conversion that consisted of simply a logic
inversion, I would for something like a bus-size converter.

Besides, you
can't always know where a module's boundary ends, especially if it's
buried deep in the design.

Nor does it matter where a module boundary ends. Either a design
performs the required function given a particular stimulus or it does
not. When it doesn't, the most likely design change required is
something that changes the logic, but finding just what line of code
needs changing requires debug regardless of methodology. Except for
the port-mapping errors mentioned above, which are almost always fixed
really early on, any other changes would be in 'some' module's RTL
code.

Second, the guys doing the verification are often not the ones who
wrote the synthesizable code, and hence have little idea of the
modules inside and their interfaces.

The line between modules that are to be tested by some separate
verification guys and modules that are not, is an arbitrary line that
you yourself define. Part of what goes into the decision about where
you draw the line between which interfaces get rigorous testing and
which do not is time (=money). The higher you go up before testing,
the less control you get in testing and verification.

Also, it is not unreasonable for the verification guys to be adding
non-synthesizable test/verification code right into the design itself.

Third, the modules inside can change, or their interfaces can change
due to redesign or refactoring, so the testbench is likely to change
more often.

The testbenches would be similarly refactored. Assuming a working
design/testbench pair before refactoring, there should also be a
working set of design/testbench pairs after. A designer may very well
create a small module that can be tested at some higher level without
creating undue risk to the overall design, but that should tend to be
more of the exception than the rule.

I understand the likely concern regarding the cost/effort of maintaining
and migrating testbenches along with the designs, but moving all
testing and verification up to the top level, because the interfaces up
there don't change much, is as a methodology likely to severely limit
the quality of the design verification. It will also tend to inhibit
your ability to adequately regression-test the effects of design
changes that address problems that have been found, in order to verify
that something else didn't break.

Those lower level testbenches are somewhat analogous to incoming
receiving/inspection at a manufacturing plant. No manufacturer would
start building product without performing their own inspection and
verification on incoming parts to see that they are within spec before
building them into their product. Similarly, an adequate testbench(s)
should exist for modules before they are integrated into a top level
design. If those testbenches are not adequate, beef them up. If they
are then there is not too much point in going hog wild testing the
same things at a higher level.

Kevin Jennings
 
Mike Treseler

KJ said:
The line between modules that are to be tested by some separate
verification guys and modules that are not, is an arbitrary line that
you yourself define.

Indeed. There are significant FPGA based projects
where the total number of design and verification
engineers is one. Collecting and maintaining all module tests for
automated regression can be an efficient and effective strategy.

Also, it is not unreasonable for the verification guys to be adding
non-synthesizable test/verification code right into the design itself.

I agree. If source code is untouchable, there is a problem
with the version control process.

Those lower level testbenches are somewhat analogous to incoming
receiving/inspection at a manufacturing plant. No manufacturer would
start building product without performing their own inspection and
verification on incoming parts to see that they are within spec before
building them into their product. Similarly, an adequate testbench(s)
should exist for modules before they are integrated into a top level
design. If those testbenches are not adequate, beef them up. If they
are then there is not too much point in going hog wild testing the
same things at a higher level.

Good point.
I wouldn't use anyone's IP core without
a testbench, including my own.

-- Mike Treseler
 
Andy

Indeed.  There are significant FPGA based projects
where the total number of design and verification
engineers is one. Collecting and maintaining all module tests for
automated regression can be an efficient and effective strategy.


I agree. If source code is untouchable, there is a problem
with the version control process.


Good point.
I wouldn't use anyone's IP core without
a testbench, including my own.

       -- Mike Treseler

I agree that it is a good thing to be able to test submodules
individually. However, in some designs the "system" can be greatly
simplified, and the same system level interface (re)used to test the
individual modules, which can reduce the effort required to create the
module level test benches in the first place.

If the RTL includes generics that can "disable" individual modules,
and those generics are passed up to the top of the DUT (defaulted to
enable everything), then a testbench can instantiate the DUT with
generics that disable all but the module under test, yet re-use the
same system level interface routines to do so, while avoiding the wall-
clock time penalties of simulating a far more complex system than what
you are interested in.
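
A minimal sketch of that arrangement (names made up, not from Andy's post):
the enable generics default to true at the DUT top level, so a normal build
includes everything, while a per-function testbench overrides them in its
generic map.

library ieee;
use ieee.std_logic_1164.all;

entity fpga_top is
  generic (
    enable_uart : boolean := true;   -- defaults build the full design
    enable_ddr  : boolean := true
  );
  port (
    clk : in std_logic
    -- ... real pin list goes here ...
  );
end entity fpga_top;

-- In the UART-only testbench, the DUT instantiation would look like:
--   dut : entity work.fpga_top
--     generic map (enable_uart => true, enable_ddr => false)
--     port map (clk => clk, ...);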

Of course, every project is different, and some include too much
overhead to incorporate the system level interface to make this
practical.

Andy
 
Mike Treseler

Andy said:
I agree that it is a good thing to be able to test submodules
individually. However, in some designs the "system" can be greatly
simplified, and the same system level interface (re)used to test the
individual modules, which can reduce the effort required to create the
module level test benches in the first place.

I wasn't going to rock the BFM boat, but I also use generics
along with shell scripts to make a regression.
I list command options at the top of the testbench
for interactive use, something like this:

-- force an error 0:
-- vsim -Gwire_g=stuck_hi -c test_uart -do "run -all; exit"
-- change template:
-- vsim -Gtemplate_g=s_rst -c test_uart -do "run -all; exit"
-- slow baud:
-- vsim -Gtb_tics_g=42 -c test_uart -do "run -all; exit"
-- verify strobe calibration:
-- vsim -Gtb_tics_g=42 test_uart
-- then "do uart.do"

... and later code up a script to play with coverage.

-- Mike Treseler
 
