timing verification

alb

Hi everyone,

I'm at the point where I might need to verify timing relationships
between two or more signals/variables in my code.

I've read PSL assertions can serve the need, but what if I need this
task to be done by yesterday and I do not know PSL at all? Any
suggestion on investing some of my time in learning PSL?

Since my signals/variables are nowhere near the top entity port
definition, how can I verify the timing through a testbench? I may
understand how to verify external interfaces, but I guess it is going
to be hard to do that on internal objects.

As of today I'm listing the signals on a piece of paper, writing down
timing relationships and then checking them off with a waveform
viewer...extremely painful and inefficient!

Any pointer is appreciated.

Al
 
HT-Lab

[]
Hi Al,

PSL is a great language to have in your EDA toolset, but as you don't
have time to learn it I would suggest you check out OVL, which is a set
of monitors/checkers written in VHDL/Verilog. OVL is supplied with most
(all?) simulators and some of the checkers can even be synthesised. If
you have access to Modelsim then try the OVL assertion manager, which
allows you to configure them using a GUI.

As a final suggestion, once you have some more time I would suggest you
do get onto a PSL/SVA course as it opens up the wonderful world of
Functional and Formal Verification.


Good luck,
Hans
www.ht-lab.com
 
alb

Hi Hans,

On 15/10/2013 09:45, HT-Lab wrote:
[]
>> I'm at the point where I might need to verify timing relationships
>> between two or more signals/variables in my code.
[]
> PSL is a great language to have in your EDA toolset, but as you don't
> have time to learn it I would suggest you check out OVL, which is a set
> of monitors/checkers written in VHDL/Verilog. OVL is supplied with most
> (all?) simulators and some of the checkers can even be synthesised. If
> you have access to Modelsim then try the OVL assertion manager, which
> allows you to configure them using a GUI.

Uhm, it looks to me like ModelSim SE (the one I have) does not support
PSL, but isn't PSL formally part of VHDL-2008? I've read here
(http://osvvm.org/forums/topic/psl-or-sva) about even further possible
integration of PSL into VHDL.

Thanks for the OVL hint! I'll start to play with it and see where it
leads to.
> As a final suggestion, once you have some more time I would suggest you
> do get onto a PSL/SVA course as it opens up the wonderful world of
> Functional and Formal Verification.

I do functional verification through TLM in VHDL and I'm about to dig into
the usage of OSVVM for constrained coverage. Unfortunately this approach
does not give me the level of observability I need, especially on
internal bus protocols; that's why I wanted to explore other means.
 
rickman

[]

Sounds like I am a bit behind in this. I have always verified timing by
setting timing constraints for the P&R tool, which verifies the design
meets the timing constraints. Is there some other aspect of timing that
constraints don't work for?

PSL, OVL, SVA, OSVVM, TLM...??? I need to check this out I expect.

For timing at higher levels, like "does this happen three clock cycles
after that happens", I have always just verified that by inspection or
functional simulation. Are these tools ways of adding formal
verification of this sort of timing?
 
HT-Lab

Hi Alb,

> Hi Hans,
>
> On 15/10/2013 09:45, HT-Lab wrote:
> []
>>> I'm at the point where I might need to verify timing relationships
>>> between two or more signals/variables in my code.
> []
>> PSL is a great language to have in your EDA toolset, but as you don't
>> have time to learn it I would suggest you check out OVL, which is a set
>> of monitors/checkers written in VHDL/Verilog. OVL is supplied with most
>> (all?) simulators and some of the checkers can even be synthesised. If
>> you have access to Modelsim then try the OVL assertion manager, which
>> allows you to configure them using a GUI.
>
> Uhm, it looks to me like ModelSim SE (the one I have) does not support PSL,

Modelsim SE is an obsolete product (for many years); speak to Mentor to
see if you can exchange it for a (hopefully free) Questa Core license
(which does support PSL/SVA).
> but isn't PSL formally part of VHDL-2008? I've read here

It is, but that doesn't mean you get it for free ;-). It is the same with
SystemVerilog: the language is split into 2 parts and you only get
access to the Verification part if you pay a lot of money.
> (http://osvvm.org/forums/topic/psl-or-sva) about even further possible
> integration of PSL into VHDL.

Interesting read. One point I would make is that you can steer your
testbench from PSL: apart from using Tcl you can use PSL endpoints, which
are supported by Modelsim and, I assume, other simulators.
> Thanks for the OVL hint! I'll start to play with it and see where it
> leads to.


> I do functional verification through TLM in VHDL and I'm about to dig into

Not sure I understand, as TLM is nothing but an abstraction layer; it
doesn't allow you to verify functionality in your design unless you
write your own checkers.
> the usage of OSVVM for constrained coverage. Unfortunately this approach

The OS-VVM is a great framework; unfortunately most engineers
(managers?) still think they have to move to SystemVerilog to add a
re-usable constrained-random verification framework to their environment.

Regards,
Hans.
 
HT-Lab

Hi Rickman,

On 15/10/2013 23:42, rickman wrote:
...
> Sounds like I am a bit behind in this. I have always verified timing by
> setting timing constraints for the P&R tool, which verifies the design
> meets the timing constraints. Is there some other aspect of timing that
> constraints don't work for?

You use assertions to make sure that your timing constraints themselves
are correct. For example, there are occasions where it is not easy to
determine whether you always have a multi-cycle path (e.g. a design with
multiple clock domains). In this case an assertion can help (or prove it
exhaustively if you use a formal tool). For false paths an assertion even
becomes a must-have, as checking them manually (i.e. looking at a gate
level schematic!) is an error prone and very time consuming activity.
> PSL, OVL, SVA, OSVVM, TLM...??? I need to check this out I expect.

> For timing at higher levels, like "does this happen three clock cycles
> after that happens", I have always just verified that by inspection or
> functional simulation. Are these tools ways of adding formal
> verification of this sort of timing?

You can quite easily add an embedded assertion (supplied with the design
itself) that always checks that A happens 3 clock cycles after B. The
problem with Verilog/VHDL is that they are very verbose and hence more
susceptible to bugs; languages like PSL/SVA are designed for this task
and hence are very concise, easy to use for sequences and unambiguous.
You can write a page of VHDL/Verilog or 1 line of PSL/SVA.
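
For the "A happens 3 clock cycles after B" example the PSL would look
roughly like this (a sketch; clk, a and b stand in for your own signals):

-- psl default clock is rising_edge(clk);
-- psl assert always (b -> next[3] (a)) report "a did not arrive 3 clocks after b";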

Regards,
Hans.
 
alb

Hi Hans,

On 16/10/2013 09:49, HT-Lab wrote:
[]
> Modelsim SE is an obsolete product (for many years); speak to Mentor to
> see if you can exchange it for a (hopefully free) Questa Core license
> (which does support PSL/SVA).

The Actel design flow uses ModelSim SE and the whole team (ok, three
people!) uses it. I'm not sure it would be an easy switch.
> It is, but that doesn't mean you get it for free ;-). It is the same with
> SystemVerilog: the language is split into 2 parts and you only get
> access to the Verification part if you pay a lot of money.

Crap! Ok, I guess I'm going to have to live with this fact unless I want
to start a revolution, which does not quite fit my current schedule ;-)
> Interesting read. One point I would make is that you can steer your
> testbench from PSL: apart from using Tcl you can use PSL endpoints, which
> are supported by Modelsim and, I assume, other simulators.

Ok, I just had a look at 'endpoints' in PSL, but how do you pass that
information to your running VHDL testbench???
> Not sure I understand, as TLM is nothing but an abstraction layer; it
> doesn't allow you to verify functionality in your design unless you
> write your own checkers.

Through the bus functional model I get access to all interfaces of my
top level entity and verify all specs on those interfaces. The problem
comes when I add IPs to my logic and I need to verify that the internal
interface to an IP conforms to the IP specs. That's why I wanted to be
able to verify timing of internal interfaces.

True, I may still get a failure in the IP interfaces which is observable
on my external interfaces, but then it would be hard to know what is not
working.

In principle I could perform unit testing, but this requires a lot of
overhead and very little of the verification code would be reusable. My
approach was inspired by this paper:

http://www.synthworks.com/papers/VHDL_Subblock_Verification_DesignCon_2003_P.pdf

I did not, though, go through the hassle of instantiating (or
configuring) incremental pieces of the DUT, for other reasons (starting
from a very bad structural choice in the DUT components).
> The OS-VVM is a great framework; unfortunately most engineers
> (managers?) still think they have to move to SystemVerilog to add a
> re-usable constrained-random verification framework to their environment.

I'm about to start a hot discussion about the use of OSVVM vs.
SystemVerilog for verification in my working environment. I guess SV has
the lead in system verification just because OSVVM was not there at the
beginning. The main problem I see is the overall support of SV vs. OSVVM,
in terms of verification IPs, packages and know-how.
 
alb

Hi Rick,

On 16/10/2013 00:42, rickman wrote:
[]
> Sounds like I am a bit behind in this. I have always verified timing by
> setting timing constraints for the P&R tool, which verifies the design
> meets the timing constraints. Is there some other aspect of timing that
> constraints don't work for?
>
> PSL, OVL, SVA, OSVVM, TLM...??? I need to check this out I expect.
>
> For timing at higher levels, like "does this happen three clock cycles
> after that happens", I have always just verified that by inspection or
> functional simulation. Are these tools ways of adding formal
> verification of this sort of timing?

Let me add just two additional comments to what Hans has already said.

IMO the first problem comes with 'state observability': it is not enough
to have stimulated a logic path (code coverage), you must *also* be able
to observe the effect.

In my case I have to deal with a very simple processor data bus with
address and data, WR and RD and a few other control signals. The result
of a transaction on the bus will be some data ending up in my output
register. Externally I have access only to the output register,
therefore I can only see whether the data I expect is there or not, and
if not I have to look at the waveforms and/or the code. Moreover, if I
make changes that affect the interface I need to perform this analysis
again. With the tools discussed here you can add observability to your
design without the need to add extra logic or bring a lot of test
points out, so your simulation can even check that those changes have
not affected the bus access requirements.

Secondly there's another, and maybe more important, aspect: constrained
random stimuli. Directed testing might not be sufficient for full
functional coverage (or at least might be a very lengthy process),
therefore you may need to add constrained random stimuli to reach
coverage goals. At this point it is very impractical to 'inspect' the
waveforms and guarantee that for each transaction the requirements are met.
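
For the record, this is roughly the kind of coverage collection the
OSVVM API should allow (a sketch only; the addr/wr names and the 0-255
bin range are made up):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
library osvvm;
use osvvm.CoveragePkg.all;

-- in the architecture declarative part:
shared variable AddrCov : CovPType;

-- set up the bins once at start-up:
init : process
begin
  AddrCov.AddBins(GenBin(0, 255));  -- one bin per address
  wait;
end process;

-- sample every bus write:
cover_writes : process (clk)
begin
  if rising_edge(clk) then
    if wr = '1' then
      AddrCov.ICover(to_integer(unsigned(addr)));
    end if;
  end if;
end process;

Calling AddrCov.WriteBin at the end of the test then reports what was
actually exercised.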

HTH,

Al
 
Andy

Al,

Excellent point about state observability on internal interfaces.

I have used "wrappers" around internal components/entities that include VHDL assertions (or could use OVL, PSL, etc.) to verify certain aspects of the interface.

I am planning on adding OSVVM coverage model(s) to the wrapper so that I can capture/monitor how well the interface is exercised from the system level interfaces. Wrappers could even be nested.

You can also "modify" the interface via the wrapper for easier system verification. For example, your entity may have a data output with a corresponding valid output that signifies when the data output is valid and safe to use. The wrapper can drive 'X's on the data port when the valid port is not asserted, ensuring that the consumer does not use the data except when it is valid.
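
In code the trick is a one-liner (port names made up; data_int/valid_int
come from the wrapped RTL instance):

-- pass the data through, but poison it when it is not marked valid
data <= data_int when valid_int = '1' else (others => 'X');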

I have seen so many cases where the data consumer violated the protocol, but still worked only because the data producer kept the data valid for longer than indicated. One small change to the producer (maybe an optimization), and suddenly the consumer will quit working, even though the producer is still following the protocol.

My wrappers take the form of an additional architecture for the same entity they wrap (the wrapper re-instantiates the component/entity within itself). These can be inserted either using configurations (if you use components), or by making sure that your simulation compile scripts compile the wrapper architecture AFTER the RTL architecture, if you use entity instantiations. In the latter case, the system RTL should not specify an architecture for those entities, but the wrapper's instantiation of the entity (itself) should specify the RTL architecture. That way, if no wrapper architecture is compiled (or it is compiled before the RTL architecture), no wrapper is used (as might be the case for other test cases or for synthesis).
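
A minimal sketch of such a wrapper (the entity my_core, its ports and
the checked property are all invented for the example):

architecture wrapper of my_core is
  signal ack_int : std_logic;
begin
  -- re-instantiate the entity itself, explicitly binding the RTL architecture
  u_rtl : entity work.my_core(rtl)
    port map (clk => clk, req => req, ack => ack_int, data => data);

  ack <= ack_int;

  -- embedded interface check: no ack without a prior req
  check : process (clk)
    variable req_seen : boolean := false;
  begin
    if rising_edge(clk) then
      if req = '1' then
        req_seen := true;
      end if;
      assert (ack_int /= '1') or req_seen
        report "ack asserted without a prior req" severity error;
      if ack_int = '1' then
        req_seen := false;
      end if;
    end if;
  end process;
end architecture wrapper;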

I keep the wrapper architectures in separate files in the testbench source folder(s).

Andy
 
HT-Lab

Hi Alb,

> Hi Hans,
>
> On 16/10/2013 09:49, HT-Lab wrote:
> []
>> Modelsim SE is an obsolete product (for many years); speak to Mentor to
>> see if you can exchange it for a (hopefully free) Questa Core license
>> (which does support PSL/SVA).
>
> The Actel design flow uses ModelSim SE and the whole team (ok, three
> people!) uses it. I'm not sure it would be an easy switch.

It is the same product with some extra features and a different name ;-)

...
> Ok, I just had a look at 'endpoints' in PSL, but how do you pass that
> information to your running VHDL testbench???

Very simple, as soon as you define an endpoint you get an extra boolean
signal (endp below) in your design. Example:

architecture rtl of test_tb is

  -- psl default clock is rising_edge(clk);
  -- psl sequence seq is {a;b};
  -- psl endpoint endp is {seq};

begin
  process
  begin
    wait until rising_edge(clk);
    if endp = TRUE then
      -- change testbench behaviour
    end if;
  end process;
end architecture rtl;

As soon as "a" is asserted followed by "b" in the next clock cycle, the
endpoint endp becomes true. You can treat endp like any other signal,
drag it into the waveform window etc.

...
> I'm about to start a hot discussion about the use of OSVVM vs.
> SystemVerilog for verification in my working environment. I guess SV has
> the lead in system verification just because OSVVM was not there at the
> beginning. The main problem I see is the overall support of SV vs. OSVVM,
> in terms of verification IPs, packages and know-how.

Yes, you are right: if you are planning to use third party verification
IP then VMM/OVM/UVM (i.e. all SystemVerilog) is the only way forward.

Regards,
Hans.
www.ht-lab.com
 
alb

Hi Andy,

> I have used "wrappers" around internal components/entities that
> include VHDL assertions (or could use OVL, PSL, etc.) to verify
> certain aspects of the interface.
>
> I am planning on adding OSVVM coverage model(s) to the wrapper so
> that I can capture/monitor how well the interface is exercised from
> the system level interfaces. Wrappers could even be nested.
>
> You can also "modify" the interface via the wrapper for easier system
> verification. For example, your entity may have a data output with a
> corresponding valid output that signifies when the data output is
> valid and safe to use. The wrapper can drive 'X's on the data port
> when the valid port is not asserted, ensuring that the consumer does
> not use the data except when it is valid.

So that's the place where you can actually perform protocol checks! But
then how do you collect results or steer the testbench based on these
checks? Other than 'severity' levels to either break or continue the
simulation, is there a way to access this information from the TB?

AFAIK you cannot access objects through hierarchies of entities unless
your wrapper has out ports which provide a connection to the upper
levels. This would mean that the top entity wrapper would be populated
with a bunch of ports related to internal interfaces' wrappers... Am I
missing something?

But I guess I got your point: since it is for verification purposes only,
you can have multiple drivers and enforce the requirement, forcing the
consumer to break 'early' in the verification process.
> I have seen so many cases where the data consumer violated the
> protocol, but still worked only because the data producer kept the
> data valid for longer than indicated. One small change to the
> producer (maybe an optimization), and suddenly the consumer will quit
> working, even though the producer is still following the protocol.

Let alone when the producer is out of spec (say a too-early integration)
and you need to tighten those requirements to make it work. While the
producer gets fixed according to new requirements you may still proceed
with the verification process of the consumer.
> My wrappers take the form of an additional architecture for the
> same entity they wrap (the wrapper re-instantiates the
> component/entity within itself). These can be inserted either using
> configurations (if you use components), or by making sure that your
> simulation compile scripts compile the wrapper architecture AFTER the
> RTL architecture, if you use entity instantiations.

I use vmk to generate Makefiles for compilation. It resolves the
dependencies automatically by analyzing the code structure, and since the
wrapper 'depends' on the RTL architecture the condition is met.

I should say that I'd like to start using configurations more regularly,
but I have never had a strong motivation for it yet.
> In the latter
> case, the system RTL should not specify an architecture for those
> entities, but the wrapper's instantiation of the entity (itself)
> should specify the RTL architecture. That way, if no wrapper
> architecture is compiled (or it is compiled before the RTL
> architecture), no wrapper is used (as might be the case for other
> test cases or for synthesis).

> I keep the wrapper architectures in separate files in the testbench
> source folder(s).

The only limitation I see is that you can only use this approach during
pre-synthesis simulation. For post-synthesis simulation I get a flat
netlist where the wrappers no longer exist. Do you need to change/adapt
your testbenches to accommodate this? Am I missing something else?
 
alb

Hi Hans,

On 16/10/2013 16:41, HT-Lab wrote:
[]
> Very simple, as soon as you define an endpoint you get an extra boolean
> signal (endp below) in your design. Example:
>
> architecture rtl of test_tb is
>
>   -- psl default clock is rising_edge(clk);
>   -- psl sequence seq is {a;b};
>   -- psl endpoint endp is {seq};
>
> begin
>   process
>   begin
>     wait until rising_edge(clk);
>     if endp = TRUE then
>       -- change testbench behaviour
>     end if;
>   end process;
> end architecture rtl;

Ok, does it mean that 'a' and 'b' assertions are visible at the top
level even if they are defined in the inner interfaces and/or elements
of your logic?
> As soon as "a" is asserted followed by "b" in the next clock cycle, the
> endpoint endp becomes true. You can treat endp like any other signal,
> drag it into the waveform window etc.

Uhm, I might then have misunderstood what Jim Lewis referred to in the
link I provided earlier in this thread. As I understand it, there's
currently no way to access information gathered by PSL (I presume unless
it's on the same hierarchical level); is this correct?
 
Andy

Al,

For interaction with the rest of the testbench from within the wrapper, you have a few different options that don't require adding signals to the DUT hierarchy and port maps.

You can use global signals or shared variables (including method calls on those shared variables) declared in a simulation package. These are often good for maintaining a global pass/fail status and/or error count.
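
A sketch of the shared-variable flavor, using a protected type (package
and names invented for the example):

package sim_status_pkg is
  type error_count_t is protected
    procedure increment;
    impure function count return natural;
  end protected error_count_t;
end package sim_status_pkg;

package body sim_status_pkg is
  type error_count_t is protected body
    variable errors : natural := 0;
    procedure increment is
    begin
      errors := errors + 1;
    end procedure increment;
    impure function count return natural is
    begin
      return errors;
    end function count;
  end protected body error_count_t;
end package body sim_status_pkg;

-- in a package visible to both the wrappers and the testbench:
-- shared variable status : work.sim_status_pkg.error_count_t;

Wrappers anywhere in the hierarchy call status.increment on a failed
check; the testbench reads status.count at the end of the test.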

In VHDL-2008, you can also hierarchically access signals without going through formal ports to get them.
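
The VHDL-2008 external name syntax looks roughly like this (the
hierarchical path is invented for the example):

-- in the testbench, alias a signal buried in the DUT hierarchy
alias core_req is << signal .tb.dut.u_core.req : std_logic >>;

The alias can then be read (or monitored) like a locally declared signal.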

I have also simply used assert/report statements to put messages in the simulation log that can be post-processed.

I have steadily moved away from using configurations. The cost of maintaining them and the component declarations they rely upon is too high. The wrapper architectures are one way I have avoided using configurations. From the wrapper, you can also modify or set the RTL architecture's generics to alter behavior for some tests. The wrapper architecture can also access the 'instance_name or 'path_name attributes to find out "where it is" and alter its behavior based on its location. So there isn't much left for which you really need configurations.

I don't use vmkr, so I don't know how it might work with wrapper architectures. I used to use make to reduce compile time for incremental changes, but that does not seem to be as big an issue as it used to be. There is something to be said for a script that compiles the simulation (or the DUT) from scratch, the same way every time, regardless of what has or has not changed.

Wrapper architectures are not compatible with gate level simulations (at least not wrappers for entities within the DUT).

After synthesis optimizations, retiming, etc. specific internal interface features may not even exist at the gate level.

However, a wrapper can instantiate the gate level model in place of the RTL.

Wrappers can also create or instantiate a different model of an RTL entity for improving the simulation performance, or providing internal stimulus, etc.

Andy
 
alb

Hi Andy,

> Al,
>
> For interaction with the rest of the testbench from within the
> wrapper, you have a few different options that don't require adding
> signals to the DUT hierarchy and port maps.
>
> You can use global signals or shared variables (including method
> calls on those shared variables) declared in a simulation package.
> These are often good for maintaining a global pass/fail status and/or
> error count.

My experience with global signals is quite bad in general; even when
writing software, every time I had to deal with a large amount of global
variables I ended up regretting that choice...

I'm not familiar with protected types, but I guess that at least they
provide some sort of encapsulation with their methods and their private
data structures. In this case the wrappers might update coverage (write
access to the data structures) while the TB can steer its course
accordingly (read access to the data structures). Keeping the data
structures separate for each interface (or wrapper) might facilitate the
effort.
> In VHDL-2008, you can also hierarchically access signals without
> going through formal ports to get them.

Ok, this is something I did not know; I should keep reading about the
main differences between 2008 and previous versions of the standard.
> I have also simply used assert/report statements to put messages in
> the simulation log that can be post-processed.

Yep, that's something that already gives you more observability.
> I don't use vmkr, so I don't know how it might work with wrapper
> architectures.

FYI I'm using vmk, not vmkr. I tried to use the latter but I had
problems compiling it.
> I used to use make to reduce compile time for
> incremental changes, but that does not seem to be as big an issue as
> it used to be.

I agree, but I'm kind of used to incremental compilation and do not see
any pitfall in it; it is possible, though, that my understanding of the
process is somewhat limited.
> There is something to be said for a script that
> compiles the simulation (or the DUT) from scratch, the same way every
> time, regardless of what has or has not changed.

The compilation order has to be taken care of anyhow, and this is
something that, so far, tools have left to the user (AFAIK). If I have
to insert a new entity in my code I simply run vmk first:

vmk -o Makefile *.vhd

and then 'make'. The dependencies are found automatically and I do not
need to know where to put my new file in the list of files to be
compiled. Moreover, if I need to add a component that has several other
components in its hierarchy, the hassle grows if everything has to be
handled manually, but not if there's a tool handy.

What is the benefit of running the simulation from scratch the same way
every time?
> Wrapper architectures are not compatible with gate level simulations
> (at least not wrappers for entities within the DUT).
>
> After synthesis optimizations, retiming, etc. specific internal
> interface features may not even exist at the gate level.
>
> However, a wrapper can instantiate the gate level model in place of
> the RTL.

Uhm, that's interesting indeed. Meaning that for integration purposes of
several IPs you may still use wrappers and benefit from their
advantages. The simulation would still be a functional one, but some of
the elements might be gate level models.

You could in principle have behavioral models (even non-synthesizable
ones), just to proceed with the functional verification and get the
simulation framework in place before the RTL model is ready. Using RTL
libraries instead of behavioral ones would then be sufficient to switch.
> Wrappers can also create or instantiate a different model of an RTL
> entity for improving the simulation performance, or providing
> internal stimulus, etc.

I guess at this point I have no more excuses for not using wrappers! ;-)
 
HT-Lab

Hi Alb,

> Hi Hans,
>
> On 16/10/2013 16:41, HT-Lab wrote:
> []
>> Very simple, as soon as you define an endpoint you get an extra boolean
>> signal (endp below) in your design. Example:
>>
>> architecture rtl of test_tb is
>>
>>   -- psl default clock is rising_edge(clk);
>>   -- psl sequence seq is {a;b};
>>   -- psl endpoint endp is {seq};
>>
>> begin
>>   process
>>   begin
>>     wait until rising_edge(clk);
>>     if endp = TRUE then
>>       -- change testbench behaviour
>>     end if;
>>   end process;
>> end architecture rtl;
>
> Ok, does it mean that 'a' and 'b' assertions are visible at the top
> level even if they are defined in the inner interfaces and/or elements
> of your logic?

Sorry, I should have mentioned the above was just a bit of pseudo code;
a and b are std_logic signals. Also (before anybody corrects me ;-) the
curly braces on {seq} are redundant as seq is already defined as a
sequence.
> Uhm, I might then have misunderstood what Jim Lewis referred to in the
> link I provided earlier in this thread. As I understand it, there's
> currently no way to access information gathered by PSL (I presume unless
> it's on the same hierarchical level); is this correct?

You are correct that endpoints are only valid at the current level;
however, as an endpoint is a signal, you should be able to reference it
at a different level using VHDL-2008 hierarchical references or a
vendor's own solution like SignalSpy in Modelsim.
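
With SignalSpy the idea is roughly this (a sketch; the hierarchical
paths are invented):

library modelsim_lib;
use modelsim_lib.util.all;
...
signal endp_local : boolean;
...
spy : process
begin
  -- mirror the endpoint buried in the design onto a local TB signal
  init_signal_spy("/tb/dut/u_core/endp", "/tb/endp_local", 1);
  wait;
end process;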

Regards,
Hans.
www.ht-lab.com
 
Andy

Al,

For simulation or synthesis, you want to ensure your tool is running from only your files (from the repository), not from some version already sitting in a library inside the tool somewhere.

Especially for synthesis, I have seen compilation order (or skipping re-compilation of previously compiled unchanged files) affect optimizations. Now I always synthesize from scratch ("Run->Resynthesize All" or "project -run synthesis -clean").

Andy
 
Mike Treseler

> I'm at the point where I might need to verify timing relationships
> between two or more signals/variables in my code.
>
> I've read PSL assertions can serve the need, but what if I need this
> task to be done by yesterday and I do not know PSL at all? Any
> suggestion on investing some of my time in learning PSL?

Timing relationships are best covered by static timing analysis.

-- Mike Treseler
 
alb

Hi Mike,

>> I'm at the point where I might need to verify timing relationships
>> between two or more signals/variables in my code.
[]
> Timing relationships are best covered by static timing analysis.

How would you use static timing analysis to verify that a read operation
has happened 'n clocks' after a write operation? Or that a sequence of
events has happened?

I've always used static timing analysis to verify that propagation
delays were smaller than my clock period (corrected for setup/hold time),
but nothing more than that.
 
Mike Treseler

> How would you use static timing analysis to verify that a read operation
> has happened 'n clocks' after a write operation? Or that a sequence of
> events has happened?

Static timing covers setup and hold timing relationships for all
registers in the programmable device and for external registers driving
or reading the programmable device ports.

'n clocks' type verifications for a synchronous design that meets Fmax
static timing can be covered by a synchronous testbench that meets the
setup and hold pin requirements of the programmable device design.
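
Something like this (a sketch; din, dout, next_din and expected are
placeholders, and the 2 ns stands in for an external register's tCO):

stim : process
begin
  wait until rising_edge(clk);
  -- external registers update their outputs a tCO after the edge
  din <= next_din after 2 ns;
end process;

check : process
begin
  wait until rising_edge(clk);
  -- sample what the DUT presented during the previous cycle
  assert dout = expected
    report "dout mismatch" severity error;
end process;
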
> I've always used static timing analysis to verify that propagation
> delays were smaller than my clock period (corrected for setup/hold time),
> but nothing more than that.

Yes, device Fmax is the basic constraint and the easiest to use,
but IO constraints are most critical for the testbench and on the real bench.

-- Mike Treseler
 
alb

Hi Mike,

On 28/10/2013 20:36, Mike Treseler wrote:
[]
> Static timing covers setup and hold timing relationships for all
> registers in the programmable device and for external registers driving
> or reading the programmable device ports.
>
> 'n clocks' type verifications for a synchronous design that meets
> Fmax static timing can be covered by a synchronous testbench that
> meets the setup and hold pin requirements of the programmable device
> design.

This is what I typically do myself as well, but that wouldn't be
possible if instead of 'pin requirements' we were talking about
'interface requirements' of an internal interface of your design.

I guess the suggestion of using assertions through 'wrappers' may solve
the issue.
> Yes, device Fmax is the basic constraint and the easiest to use, but
> IO constraints are most critical for the testbench and on the real
> bench.

I often deal with asynchronous interfaces, therefore I typically
synchronize them before use and do not constrain them at all. Actually,
is there any good guide on how to constrain I/O?
 
