timing verification

Discussion in 'VHDL' started by alb, Oct 14, 2013.

  1. alb

    alb Guest

    Hi everyone,

    I'm at the point where I might need to verify timing relationships
    between two or more signals/variables in my code.

    I've read that PSL assertions can serve this need, but what if I need
    the task done by yesterday and I don't know PSL at all? Any suggestions
    on whether it's worth investing some of my time in learning PSL?

    Since my signals/variables are nowhere near the top entity port
    definition, how can I verify the timing through a testbench? I
    understand how to verify external interfaces, but I guess it's going to
    be hard to do that on internal objects.

    As of today I'm listing the signals on a piece of paper, writing down
    timing relationships and then checking them off with a waveform
    viewer...extremely painful and inefficient!

    Any pointer is appreciated.

    alb, Oct 14, 2013

  2. HT-Lab

    HT-Lab Guest

    Hi Al,

    PSL is a great language to have in your EDA toolset, but as you don't
    have time to learn it I would suggest you check out OVL, a library of
    monitors/checkers written in VHDL/Verilog. OVL is supplied with most
    (all?) simulators, and some of the checkers can even be synthesised. If
    you have access to Modelsim, then try the OVL assertion manager, which
    allows you to configure the checkers using a GUI.

    As a final suggestion: once you have some more time, do get onto a
    PSL/SVA course, as it opens up the wonderful world of functional and
    formal verification.

    Good luck,
    HT-Lab, Oct 15, 2013

  3. alb

    alb Guest

    Hi Hans,

    On 15/10/2013 09:45, HT-Lab wrote:
    Hmm, it looks to me like ModelSim SE (the one I have) does not support
    PSL, but isn't PSL formally part of VHDL-2008? I've read here
    (http://osvvm.org/forums/topic/psl-or-sva) about even further possible
    integration of PSL into VHDL.

    Thanks for the OVL hint! I'll start to play with it and see where it
    leads.
    I do functional verification through TLM in VHDL, and I'm about to dig
    into using OSVVM for constrained coverage. Unfortunately this approach
    does not give me the level of observability I need, especially on
    internal bus protocols; that's why I wanted to explore other means.
    alb, Oct 15, 2013
  4. rickman

    rickman Guest

    Sounds like I am a bit behind on this. I have always verified timing by
    setting timing constraints for the P&R tool, which checks that the
    design meets them. Is there some other aspect of timing that
    constraints don't cover?

    PSL, OVL, SVA, OSVVM, TLM...??? I expect I need to check these out.

    For timing at higher levels, like "does this happen three clock cycles
    after that happens", I have always just verified it by inspection or
    functional simulation. Do these tools add formal verification of this
    sort of timing?
    rickman, Oct 15, 2013
  5. HT-Lab

    HT-Lab Guest

    Hi Alb,

    Modelsim SE has been an obsolete product for many years; speak to
    Mentor to see if you can exchange it for a (hopefully free) Questa Core
    license (which does support PSL/SVA).
    It is, but that doesn't mean you get it for free ;-). It is the same
    with SystemVerilog: the language is split into two parts, and you only
    get access to the verification part if you pay a lot of money.
    Interesting read. One point I would make is that you can steer your
    testbench from PSL: apart from using Tcl, you can use PSL endpoints,
    which are supported by Modelsim and, I assume, by other simulators.
    Not sure I understand, as TLM is nothing but an abstraction layer; it
    doesn't allow you to verify functionality in your design unless you
    write your own checkers.
    OS-VVM is a great framework; unfortunately most engineers (managers?)
    still think they have to move to SystemVerilog to add a re-usable
    constrained-random verification framework to their environment.

    HT-Lab, Oct 16, 2013
  6. HT-Lab

    HT-Lab Guest

    Hi Rickman,

    On 15/10/2013 23:42, rickman wrote:
    You use assertions to make sure that your timing constraints themselves
    are correct. For example, there are occasions where it is not easy to
    determine whether you really have a multi-cycle path (e.g. a design
    with multiple clock domains). In such cases an assertion can help (or a
    formal tool can prove it exhaustively). For false paths an assertion
    becomes a must-have, as checking them manually (i.e. looking at a
    gate-level schematic!) is an error-prone and very time-consuming
    activity.
    You can quite easily add an embedded assertion (supplied with the
    design itself) that checks that A always happens 3 clock cycles after
    B. The problem with Verilog/VHDL is that they are very verbose and
    hence more susceptible to bugs; languages like PSL/SVA are designed for
    this task and hence are concise, easy to use for sequences, and
    unambiguous. You can write a page of VHDL/Verilog or one line of
    PSL/SVA.
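    As a rough illustration of that size difference (a sketch only: the
    signal names clk, req and ack are illustrative, and the exact PSL
    flavour accepted depends on the simulator), the one-liner and its
    plain-VHDL monitor equivalent might look like:

```vhdl
-- One line of PSL, embedded as a VHDL comment:
-- psl default clock is rising_edge(clk);
-- psl assert always (req = '1' -> next[3] (ack = '1'))
--   report "ack did not follow req after 3 clocks";

-- Roughly the same check hand-written as a VHDL monitor process:
monitor : process (clk)
  variable cnt : integer := -1;
begin
  if rising_edge(clk) then
    if req = '1' then
      cnt := 0;                       -- start counting at req
    elsif cnt >= 0 then
      cnt := cnt + 1;
      if cnt = 3 then
        assert ack = '1'
          report "ack did not follow req after 3 clocks"
          severity error;
        cnt := -1;                    -- re-arm for the next req
      end if;
    end if;
  end if;
end process;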

    HT-Lab, Oct 16, 2013
  7. alb

    alb Guest

    Hi Hans,

    On 16/10/2013 09:49, HT-Lab wrote:
    The Actel design flow uses ModelSim SE and the whole team (ok, three
    people!) uses it. I'm not sure it would be an easy switch.
    Crap! Ok, I guess I'm going to live with this fact unless I want to
    start a revolution, which does not quite fit my current schedule ;-)
    Ok, I just had a look at 'endpoints' in PSL, but how do you pass that
    information to your running VHDL testbench?
    Through the bus functional model I get access to all the interfaces of
    my top-level entity and verify all the specs on those interfaces. The
    problem comes when I add IPs to my logic and need to verify that the
    internal interface to an IP conforms to that IP's specs. That's why I
    wanted to be able to verify the timing of internal interfaces.

    True, I may still get a failure in the IP interfaces which is
    observable on my external interfaces, but then it would be hard to know
    what is not working.

    In principle I could perform unit testing, but this requires a lot of
    overhead, and very little of the verification code would be reusable.
    My approach was inspired by this paper:


    Even though I did not go through the hassle of instantiating (or
    configuring) incremental pieces of the DUT, for other reasons (starting
    from a very bad structural choice in the DUT components).
    I'm about to start a heated discussion about the use of OSVVM vs.
    SystemVerilog for verification in my working environment. I guess SV
    has the lead in system verification just because OSVVM was not there at
    the beginning. The main problem I see is the overall support for SV vs.
    OSVVM, in terms of verification IPs, packages and know-how.
    alb, Oct 16, 2013
  8. alb

    alb Guest

    Hi Rick,

    On 16/10/2013 00:42, rickman wrote:
    Let me add just two comments to what Hans has already said.

    IMO the first problem comes with 'state observability': that is, you
    have been able to stimulate a logic path (code coverage) *and* you are
    able to observe the effect.

    In my case I have to deal with a very simple processor data bus with
    address and data, WR and RD, and a few other control signals. The
    result of a transaction on the bus will be some data ending up in my
    output register. Externally I have access only to the output register,
    therefore I can only see whether the data I expect is there or not; if
    not, I have to look at the waveforms and/or the code. Moreover, if I
    make changes that affect the interface, I need to perform this analysis
    all over again. With the tools discussed here you can add observability
    to your design without the need to add extra logic or bring a lot of
    test points out, so your simulation can even check that those changes
    have not affected the bus access requirements.

    Secondly, there's another, maybe more important, aspect: constrained
    random stimuli. Directed testing might not be sufficient for full
    functional coverage (or at least might be a very lengthy process),
    therefore you may need to add constrained random stimuli to reach
    coverage goals. At that point it is very impractical to 'inspect' the
    waveforms and guarantee that the requirements are met for each
    transaction.
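    For reference, constrained random stimulus in VHDL does not require
    SystemVerilog; a minimal sketch with OSVVM's RandomPkg (the address
    range, the clk signal, and the surrounding testbench are illustrative):

```vhdl
library osvvm;
use osvvm.RandomPkg.all;

-- inside a testbench architecture that declares clk etc.:
stim : process
  variable rnd  : RandomPType;
  variable addr : integer;
begin
  rnd.InitSeed(42);                       -- reproducible runs
  for i in 1 to 1000 loop
    -- constrained: addresses drawn only from the mapped region
    addr := rnd.RandInt(16#00#, 16#3F#);
    -- drive a bus write transaction here (BFM call omitted)
    wait until rising_edge(clk);
  end loop;
  wait;
end process;
```

    Coverage-driven loops can then close the loop by drawing stimuli until
    an OSVVM coverage model reports its goals met.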


    alb, Oct 16, 2013
  9. Andy

    Andy Guest


    Excellent point about state observability on internal interfaces.

    I have used "wrappers" around internal components/entities that include VHDL assertions (or could use OVL, PSL, etc.) to verify certain aspects of the interface.

    I am planning on adding OSVVM coverage model(s) to the wrapper so that I can capture/monitor how well the interface is exercised from the system-level interfaces. Wrappers could even be nested.

    You can also "modify" the interface via the wrapper for easier system verification. For example, your entity may have a data output with a corresponding valid output that signifies when the data output is valid and safe to use. The wrapper can drive 'X's on the data port when the valid port is not asserted, ensuring that the consumer does not use the data except when it is valid.

    I have seen so many cases where the data consumer violated the protocol but still worked, only because the data producer kept the data valid for longer than indicated. One small change to the producer (maybe an optimization), and suddenly the consumer quits working, even though the producer is still following the protocol.

    My wrappers take the form of an additional architecture for the same entity they wrap (the wrapper re-instantiates the component/entity within itself). These can be inserted either by using configurations (if you use components), or by making sure that your simulation compile scripts compile the wrapper architecture AFTER the RTL architecture, if you use entity instantiations. In the latter case, the system RTL should not specify an architecture for those entities, but the wrapper's instantiation of the entity (itself) should specify the RTL architecture. That way, if no wrapper architecture is compiled (or it is compiled before the RTL architecture), no wrapper is used (as might be the case for other test cases or for synthesis).

    I keep the wrapper architectures in separate files in the testbench source folder(s).
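    A minimal sketch of this wrapper scheme, assuming a hypothetical fifo
    entity with clk, wr, rd, din, dout and valid ports (the protocol check
    and the 'X' poisoning are the ones described above):

```vhdl
architecture wrapper of fifo is
  signal dout_i  : std_logic_vector(7 downto 0);
  signal valid_i : std_logic;
begin
  -- Re-instantiate the same entity, explicitly binding the RTL
  -- architecture (so the wrapper itself gets picked up by default
  -- binding everywhere else when compiled after the RTL)
  dut : entity work.fifo(rtl)
    port map (clk => clk, wr => wr, rd => rd,
              din => din, dout => dout_i, valid => valid_i);

  -- Poison the data with 'X' while valid is low, so a consumer that
  -- samples it outside the protocol fails visibly in simulation
  dout  <= dout_i when valid_i = '1' else (others => 'X');
  valid <= valid_i;

  -- Embedded protocol assertion on the internal interface
  check : process (clk)
  begin
    if rising_edge(clk) then
      assert not (wr = '1' and rd = '1')
        report "simultaneous WR and RD on fifo interface"
        severity warning;
    end if;
  end process;
end architecture wrapper;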

    Andy, Oct 16, 2013
  10. HT-Lab

    HT-Lab Guest

    Hi Alb,

    It is the same product with some extra features and a different name ;-)

    Very simple, as soon as you define an endpoint you get an extra boolean
    signal (endp below) in your design. Example:

    architecture rtl of test_tb is

       signal clk, a, b : std_logic := '0';

       -- psl default clock is rising_edge(clk);
       -- psl sequence seq is {a;b};
       -- psl endpoint endp is {seq};

    begin
       process (clk)
       begin
          if rising_edge(clk) then
             if endp then
                -- change testbench behaviour
                null;
             end if;
          end if;
       end process;
    end architecture rtl;

    As soon as "a" is asserted followed by "b" in the next clock cycle, the
    endpoint endp becomes true. You can treat endp like any other signal:
    drag it into the waveform window, etc.

    Yes, you are right: if you are planning to use third-party verification
    IP, then VMM/OVM/UVM (i.e. all SystemVerilog) is the only way forward.

    HT-Lab, Oct 16, 2013
  11. alb

    alb Guest

    Hi Andy,

    So that's the place where you can actually perform protocol checks! But
    then how do you collect the results or steer the testbench based on
    these checks? Other than 'severity' levels to either break or continue
    the simulation, is there a way to access this information from the TB?

    AFAIK you cannot access objects through hierarchies of entities unless
    your wrapper has out ports which provide a connection to the upper
    levels. This would mean that the top entity wrapper would be populated
    with a bunch of ports related to the internal interfaces' wrappers...
    Am I missing something?

    But I guess I got your point: since it is for verification purposes
    only, you can have multiple drivers and enforce the requirement,
    forcing the consumer to break 'early' in the verification process.
    Let alone when the producer is out of spec (say, a too-early
    integration) and you need to tighten those requirements to make things
    work. While the producer gets fixed according to the new requirements,
    you may still proceed with the verification of the consumer.
    I use vmk to generate Makefiles for compilation. It resolves the
    dependencies automatically by analyzing the code structure, and since
    the wrapper 'depends' on the RTL architecture, the condition is met.

    I should say that I'd like to start using configurations more regularly,
    but never had a deep motivation for it yet.
    The only limitation I see is that you can only use this approach during
    pre-synthesis simulation. For post-synthesis simulation I get a flat
    netlist where the wrappers no longer exist. Do you need to change/adapt
    your testbenches to accommodate this? Am I missing something else?
    alb, Oct 17, 2013
  12. alb

    alb Guest

    Hi Hans,

    On 16/10/2013 16:41, HT-Lab wrote:
    Ok, does that mean that 'a' and 'b' are visible at the top level even
    if they are defined in the inner interfaces and/or elements of your
    logic?
    Uhm, I might then have misunderstood what Jim Lewis referred to in the
    link I provided earlier in this thread. As I understand it, there's
    currently no way to access information gathered by PSL (unless, I
    presume, it's at the same hierarchical level). Is this correct?
    alb, Oct 17, 2013
  13. Andy

    Andy Guest


    For interaction with the rest of the testbench from within the wrapper, you have a few different options that don't require adding signals to the DUT hierarchy and port maps.

    You can use global signals or shared variables (including method calls on those shared variables) declared in a simulation package. These are often good for maintaining a global pass/fail status and/or error count.
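    A minimal sketch of such a simulation package, using a VHDL protected
    type for a global error count (all names are illustrative):

```vhdl
package sim_status_pkg is
  type status_t is protected
    procedure log_error;
    impure function error_count return natural;
  end protected status_t;

  -- one global status object shared by wrappers and testbench
  shared variable sim_status : status_t;
end package sim_status_pkg;

package body sim_status_pkg is
  type status_t is protected body
    variable errors : natural := 0;

    procedure log_error is
    begin
      errors := errors + 1;
    end procedure;

    impure function error_count return natural is
    begin
      return errors;
    end function;
  end protected body status_t;
end package body sim_status_pkg;
```

    A wrapper's assertion handler can call sim_status.log_error, and the
    top-level testbench can read sim_status.error_count at the end of the
    run to decide pass/fail.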

    In VHDL-2008, you can also hierarchically access signals without going through formal ports to get them.
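    For instance, a VHDL-2008 external name can pull an internal signal up
    into the testbench without any ports; the hierarchical path below is
    purely illustrative:

```vhdl
-- Alias an internal signal via its absolute path from the top of
-- the simulation (requires VHDL-2008 and an elaborated hierarchy):
alias irq_mon is << signal .tb.dut.u_core.irq_pending : std_logic >>;

-- irq_mon can then be used like any locally visible signal:
watch : process (irq_mon)
begin
  report "irq_pending is now " & std_logic'image(irq_mon);
end process;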

    I have also simply used assert/report statements to put messages in the simulation log that can be post-processed.

    I have steadily moved away from using configurations. The cost of maintaining them and the component declarations they rely upon is too high. The wrapper architectures are one way I have avoided using configurations. From the wrapper, you can also modify or set the RTL architecture's generics to alter behavior for some tests. The wrapper architecture can also access the 'instance_name or 'path_name attributes to find out "where it is" and alter its behavior based on its location. So there isn't much left for which you really need configurations.

    I don't use vmkr, so I don't know how it might work with wrapper architectures. I used to use make to reduce compile time for incremental changes, but that does not seem to be as big an issue as it used to be. There is something to be said for a script that compiles the simulation (or the DUT) from scratch, the same way every time, regardless of what has or has not changed.

    Wrapper architectures are not compatible with gate level simulations (at least not wrappers for entities within the DUT).

    After synthesis optimizations, retiming, etc. specific internal interface features may not even exist at the gate level.

    However, a wrapper can instantiate the gate level model in place of the RTL.

    Wrappers can also create or instantiate a different model of an RTL entity for improving the simulation performance, or providing internal stimulus, etc.

    Andy, Oct 17, 2013
  14. alb

    alb Guest

    Hi Andy,

    My experience with global signals is quite bad in general; even when
    writing software, every time I had to deal with large amounts of global
    variables I ended up regretting the choice...

    I'm not familiar with protected types, but I guess that at least they
    provide some sort of encapsulation with their methods and private data
    structures. In this case the wrappers might update the coverage (write
    access to the data structures) while the TB steers its course
    accordingly (read access to the data structures). Keeping the data
    structures separate for each interface (or wrapper) might facilitate
    the effort.
    Ok, this is something I did not know; I should keep reading about the
    main differences between 2008 and the previous versions of the standard.
    Yep, that's something that already gives you more observability.
    FYI I'm using vmk, not vmkr. I tried to use the latter but had problems
    compiling it.
    I agree, but I'm used to incremental compilation and do not see any
    pitfalls in it, though it is possible that my understanding of the
    process is somewhat limited.
    The compilation order has to be taken care of anyhow, and this is
    something that so far the tools have asked the users to handle (AFAIK).
    If I have to insert a new entity in my code, I simply run vmk first and
    then 'make'. The dependencies are found automatically and I do not need
    to know where to put my new file in the list of files to be compiled.
    Moreover, if I need to add a component that has several other
    components in its hierarchy, the hassle grows if everything has to be
    handled manually, but not if there's a tool handy.

    What is the benefit of running the simulation from scratch the same way
    every time?
    Uhm, that's interesting indeed. Meaning that for the integration of
    several IPs you may still use wrappers and benefit from their
    advantages. The simulation would still be a functional one, but some of
    the elements might be gate-level models.

    You could in principle have behavioral models (not even synthesizable
    ones), just to proceed with the functional verification and get the
    simulation framework in place before the RTL model is ready. Using RTL
    libraries instead of behavioral ones would be sufficient to switch.
    I guess at this point I have no more excuses for not using wrappers! ;-)
    alb, Oct 18, 2013
  15. HT-Lab

    HT-Lab Guest

    Hi Alb,

    Sorry, I should have mentioned that the above was just a bit of
    pseudo-code; a and b are std_logic signals. Also (before anybody
    corrects me ;-), the curly braces on {seq} are redundant, as seq is
    already defined as a sequence.
    You are correct that endpoints are only valid at the current level;
    however, as an endpoint is a signal, you should be able to reference it
    at a different level using VHDL-2008 hierarchical references or a
    vendor's own solution like SignalSpy in Modelsim.

    HT-Lab, Oct 18, 2013
  16. Andy

    Andy Guest


    For simulation or synthesis, you want to ensure your tool is running from only your files (from the repository), not from some version already sitting in a library inside the tool somewhere.

    Especially for synthesis, I have seen compilation order (or skipping
    re-compilation of previously compiled, unchanged files) affect
    optimizations. Now I always synthesize from scratch
    ("Run->Resynthesize All" or "project -run synthesis -clean").
    Andy, Oct 18, 2013
  17. Timing relationships are best covered by static timing analysis.

    -- Mike Treseler
    Mike Treseler, Oct 20, 2013
  18. alb

    alb Guest

    Hi Mike,

    How would you use static timing analysis to verify that a read
    operation has happened 'n clocks' after a write operation? Or that a
    sequence of events has happened?

    I've always used static timing analysis to verify that propagation
    delays fit within my clock period (corrected for setup/hold times), but
    nothing more than that.
    alb, Oct 21, 2013
    Static timing covers setup and hold timing relationships for all
    registers in the programmable device, and for external registers
    driving or reading the programmable device's ports.

    'n clocks'-type verifications for a synchronous design that meets Fmax
    static timing can be covered by a synchronous testbench that meets the
    setup and hold pin requirements of the programmable device design.
    Yes, device Fmax is the basic constraint and the easiest to use, but
    the I/O constraints are the most critical for the testbench and on the
    real bench.
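    A skeleton of such a synchronous testbench, where stimulus is applied a
    fixed margin after the active edge so that the DUT's pin setup/hold
    requirements (taken from static timing or the datasheet) are met; the
    tb entity, the DUT instance, and all names and numbers are illustrative
    and omitted or made up here:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

architecture sync_tb of tb is
  constant clk_period : time := 10 ns;  -- meets the design's Fmax
  constant tco_margin : time := 2 ns;   -- mimics external register tCO
  signal clk   : std_logic := '0';
  signal wr_en : std_logic := '0';
  signal din   : std_logic_vector(7 downto 0);
begin
  clk <= not clk after clk_period / 2;

  -- DUT instantiation omitted

  stim : process
  begin
    wait until rising_edge(clk);
    wait for tco_margin;    -- change inputs only after the edge
    din   <= x"A5";
    wr_en <= '1';
    wait until rising_edge(clk);
    wait for tco_margin;
    wr_en <= '0';
    wait;                   -- end of stimulus
  end process;
end architecture sync_tb;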

    -- Mike Treseler
    Mike Treseler, Oct 28, 2013
  20. alb

    alb Guest

    Hi Mike,

    On 28/10/2013 20:36, Mike Treseler wrote:
    This is what I typically do myself as well, but that wouldn't be
    possible if, instead of 'pin requirements', we were talking about the
    'interface requirements' of an internal interface of the design.

    I guess the suggestion of using assertions through 'wrappers' may solve
    the issue.
    I often deal with asynchronous interfaces, therefore I typically
    synchronize them before use and do not constrain them at all. Actually,
    is there any good guide on how to constrain I/O?
    alb, Oct 28, 2013
