Structured Verification Request for Information


M. Norton

Hello,

In the past, the companies where I worked had less stringent
verification needs. Quite often, being able to use the product on the
bench was sufficient, or at most slightly more formal simulation with
test vectors and manual inspection of testbench results in the wave
viewer.

However, my current employer deals with a number of certification
authorities (e.g. the FAA) that impose a rigid audit of processes and
procedures for complex electronic hardware. These strictures have
been tightening over the past several years as the authorities
discovered just how complex these devices can get! It seems to me
that this company could benefit from dedicated verification engineers
working hand in hand with the DUT design engineers, rather than
leaving verification until the end to be banged out quickly.

So, I'm trying to find a place to get started with this topic. I've
read a white paper by Ben Cohen on transactional verification
performed in HDL, which seems interesting. I like the idea of being
able to stay within VHDL for the verification without having to add a
lot of third-party tools. I know there are languages out there like
SystemVerilog, but I don't know much about them. I'm not really sure
how all of these fit into the broader picture of verification, or how
that might apply to our specific devices.

Does anyone have some pointers on where to get started with
structured, formalized verification, without any marketing hype from
tool and language vendors? I need to find a way to get a grounding so
I can make my own assessments of the value of the tools and languages.

Thanks for any help folks can provide.

Best regards,
Mark Norton
 

HT-Lab

M. Norton said:
Does anyone have some pointers on where to get started with
structured, formalized verification, without any marketing hype from
tool and language vendors? [...]

I would suggest you start by reading the free Verification Cookbook from
Mentor (http://www.mentor.com/products/fv/_3b715c/). This cookbook describes
a comprehensive structured test environment. Although they use
SystemVerilog (OVM/AVM) / SystemC (AVM only), you can apply a lot of these
techniques in VHDL, albeit with a "bit more" programming effort.

Another suggestion is to look at training companies like Doulos and
SynthWorks; they have dedicated VHDL verification courses where they
discuss techniques like constrained random (CR), transaction-level
modelling (TLM), some functional verification, bus functional models, etc.
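For a flavour of what constrained random can look like in plain VHDL
(no extra language needed), here is a minimal sketch using the uniform
procedure from ieee.math_real; the burst-length variable and the
1-to-16 constraint are invented purely for illustration:

```vhdl
library ieee;
use ieee.math_real.all;  -- provides the uniform pseudo-random generator

-- Minimal constrained-random stimulus sketch (inside a testbench
-- architecture): draw random burst lengths, constrained to 1 .. 16.
stimulus : process
  variable seed1, seed2 : positive := 42;  -- fixed seeds keep runs repeatable
  variable rand         : real;
  variable burst_len    : integer;
begin
  for i in 1 to 100 loop
    uniform(seed1, seed2, rand);            -- rand in the open range (0.0, 1.0)
    burst_len := 1 + integer(rand * 15.0);  -- constrain to the legal 1 .. 16
    -- drive_burst(burst_len);  -- hypothetical BFM call driving the DUT
    wait for 10 ns;
  end loop;
  wait;
end process;
```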

I would also suggest you look into assertions. Unfortunately, getting a PSL
license for most VHDL simulators is not going to be cheap (to put it
mildly); however, the verification benefits you get from assertions are
huge. The next step would be to use a proper formal tool ($$$) like
Solidify/0-in and many others and forget about creating stimuli altogether
:). If you can't afford a PSL license then look into the free OVL
assertion/monitor library.
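Even without PSL, a plain VHDL assertion can encode a simple protocol
check. A minimal sketch, where the req/ack names and the 8-cycle limit
are invented for illustration:

```vhdl
-- Watchdog-style check inside a testbench or monitor process:
-- once req is raised, ack must follow within 8 clock cycles.
check_ack : process (clk)
  variable wait_count : natural := 0;
begin
  if rising_edge(clk) then
    if req = '1' and ack = '0' then
      wait_count := wait_count + 1;
      assert wait_count <= 8
        report "ack did not follow req within 8 clock cycles"
        severity error;
    else
      wait_count := 0;  -- handshake complete or idle: restart the count
    end if;
  end if;
end process;
```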

You can also have a look at SystemC which luckily is not so expensive (at
least not in the case of Modelsim PE). SystemC allows you to easily add
C/C++ models to your environment, gives you access to TLM2, Object
Orientation, transaction recording and Constrained Random. Mixing VHDL and
SystemC is (at least in Modelsim) very simple.

Good luck,

Hans
www.ht-lab.com
 
Marcus Harnisch

Hi Mark

I know that the "M"-word (not Mark!) has been used quite a lot in
other follow-ups, but one of the better-known EDA vendors is currently
soliciting customers with seminars covering mission/safety-critical
hardware design (e.g. DO-254). I attended one of these, and the
audience was mostly made up of people in the situation you describe.

Regards
Marcus
 
NigelE

Hi Mark


The Mentor web site has a section dedicated to DO-254 design and
verification.

http://www.mentor.com/products/fpga_pld/req_tracking/do-254/index.cfm

You definitely WILL find some EDA marketing material here but there's
also a lot of generic information on the subject.

Regards

- Nigel
Mentor Graphics
 
M. Norton

Thanks to everyone for the suggestions. Hopefully I can omnibus the
answers here without cluttering the group with a lot of individual
replies.

From Marcus:
I know that the "M"-word (not Mark!) has been used quite a lot in
other follow-ups, but one of the better-known EDA vendors is currently
soliciting customers with seminars covering mission/safety-critical
hardware design (e.g. DO-254). I attended one of these, and the
audience was mostly made up of people in the situation you describe.

Well, DO-254 is precisely the reason some of this is being looked at
more closely. That being said, DO-254 is mainly concerned with
process, and if we have a good process for verification, I don't
believe they care much about the details of how the verification
proceeds. From the consultants I've talked to, it seems they want to
see a plan, a process, a procedure, and artifacts resulting from all
of these things. None of this says "thou must adopt a more rigorous
verification strategy" if the current one is producing the desired
output.

However, I think another reason is that we seem to be moving more and
more functional behavior into the devices, and they do get
increasingly complex. That complexity is coupled with increased
oversight into all phases of the design process. So it seems to me
that something more rigorous WOULD be a benefit, as long as they're
willing to put the time and money in up front, rather than trying to
slosh it around at the back end and get through certification by any
method possible.

I think I have heard of the Mentor DO-254 seminar, but they circulated
just as we were getting our feet wet in the ramifications of the
process changes so it might be worth seeing if they're an ongoing
thing.

From Nigel:
The Mentor web site has a section dedicated to DO-254 design and
verification.

http://www.mentor.com/products/fpga_pld/req_tracking/do-254/index.cfm


You definitely WILL find some EDA marketing material here but there's
also a lot of generic information on the subject.

I will check that out. Generic is definitely what I need at the
moment. I am not ready to say any method is our way forward without
some deeper understanding of the options, divorced from marketing
solutions.

And finally from HT-Lab:
I would suggest you start by reading the free Verification Cookbook from
Mentor (http://www.mentor.com/products/fv/_3b715c/). This cookbook describes
a comprehensive structured test environment. Although they use
SystemVerilog(OVM/AVM)/SystemC(AVM only) you can do a lot of these
techniques in VHDL albeit with a "bit more" programming effort.

Another suggestion is to look at training companies like Doulos and
SynthWorks; they have dedicated VHDL verification courses where they
discuss techniques like constrained random (CR), transaction-level
modelling (TLM), some functional verification, bus functional models, etc.

I will look into your suggestions. I haven't been convinced of how a
sequential coding language is going to help for our particular
products. Invariably, the examples are things like "verify a
microprocessor". Well that's great and all if you have a
microprocessor, where things like instructions, writing, and reading
are all sequential things you can do to the device. We have things
like "When enabled, this external chip provides a continuous feed of
image data to the DUT which then gets processed in some fashion and
comes out the other side continuously in a different format." I can
think of a few ways to do it, but they don't seem to lend themselves
readily to that sort of model. I'd love to see fewer microprocessor
examples and a few more "weird" item examples in most literature :).
I would also suggest you look into assertions. Unfortunately, getting a PSL
license for most VHDL simulators is not going to be cheap (to put it
mildly); however, the verification benefits you get from assertions are
huge. The next step would be to use a proper formal tool ($$$) like
Solidify/0-in and many others and forget about creating stimuli altogether
:). If you can't afford a PSL license then look into the free OVL
assertion/monitor library.

I believe we do have access to SystemVerilog and I understand those
use assertions. The company keeps their license bin stocked with
QuestaSim which (if I remember correctly) is ModelSim plus
SystemVerilog and other verification utilities. However, it's
meaningless if one doesn't know how to use it.

Also, that being said, we do use VHDL assertions pretty liberally
already. I'm not sure what another language's assertions are going to
do on top of that. Perhaps there's a gap I'm not aware of yet due to
my verification naivete ;-).
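For what it's worth, the gap usually cited is a temporal one: a plain
VHDL assert checks a condition at a single instant, while PSL (or SVA)
properties describe behaviour across clock cycles. A hedged sketch of
the contrast, with invented signal names:

```vhdl
-- Plain VHDL: an immediate check, true or false right now.
assert not (read_en = '1' and fifo_empty = '1')
  report "read from an empty FIFO"
  severity error;

-- PSL (shown as comment pragmas, which many simulators accept):
-- a temporal property spanning several cycles -- "every req must be
-- followed by ack within 1 to 4 clock cycles".
-- psl default clock is rising_edge(clk);
-- psl assert always (req -> next_e[1 to 4] (ack));
```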

Anyhow, it does seem like most paths lead to Mentor's documentation.
If there's a good verification book though, please let me know. I
would like to get as broad a grounding in it as I can so I can be well
armed against management reticence ;-).

Best regards,
Mark Norton
 
M. Norton

Mark,
> Anyhow, it does seem like most paths lead to Mentor's documentation.

I don't know of any great book on VHDL verification (yet).  Ben Cohen has
written some books that address verification to some degree.  There are
a number of classic verification books, like Janick Bergeron's, that focus
on techniques and other languages.

Well, I might need to dig out that book. From reading some of the
other material (listed earlier), I need to jump way back. There seem
to be a number of methods (the constrained random stimulus is
baffling, at least trying to see what sense it makes for the
particular example in my head) and I don't fully understand them all,
let alone the languages or tools to implement said methods.
I am currently working on a book titled, Verification for VHDL, with Elsevier,
however, my part will not be done until late this year and then they have to
review it and do their part.  For the time being, we (SynthWorks) offer training
that includes essential verification techniques such as self-checking,
transaction-based testing, data structures (linked-lists, scoreboards, memories),
and randomization.  See http://www.synthworks.com/vhdl_testbench_verification.htm
For some of the techniques covered in the class, also see my DesignCon 2003 paper
titled, "Accelerating Verification Through Pre-Use of System-Level Testbench
Components" at http://www.synthworks.com/papers/index.htm
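(As an aside, for a flavour of what "transaction-based testing" in
plain VHDL tends to look like: the pin wiggling is wrapped in a
procedure, so the test reads as a list of transactions. All signal
names and timing below are invented for illustration.)

```vhdl
-- A minimal bus-functional write procedure (one "transaction"):
procedure bus_write (
  signal   clk   : in  std_logic;
  signal   addr  : out std_logic_vector(7 downto 0);
  signal   wdata : out std_logic_vector(31 downto 0);
  signal   wr_en : out std_logic;
  constant a     : in  std_logic_vector(7 downto 0);
  constant d     : in  std_logic_vector(31 downto 0)) is
begin
  wait until rising_edge(clk);
  addr  <= a;
  wdata <= d;
  wr_en <= '1';
  wait until rising_edge(clk);
  wr_en <= '0';
end procedure bus_write;

-- The test itself then reads as a sequence of transactions:
--   bus_write(clk, addr, wdata, wr_en, x"10", x"DEADBEEF");
--   bus_write(clk, addr, wdata, wr_en, x"14", x"00000001");
```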

Interestingly enough, it seems you are local to us (for what it's
worth, I work for Rockwell Collins, Portland, OR), but I don't know
what the possibility is of getting money for a training seminar.
We do not currently cover DO-254 topics though, so you will have to go
to one of the other vendors (there are at least two that I know of doing
this).

To be perfectly honest, I wouldn't bother. As I said, as long as
you're already doing the right sort of things (developing good
requirements, holding design reviews, verifying the design,
controlling changes) I can summarize DO-254 in a single sentence.
WRITE EVERYTHING DOWN. If you aren't already doing the right sorts of
things, then DO-254 basically says: DO THEM. Which mainly boils down
to, develop good requirements, hold design reviews, verify your
design, control changes, and write everything down. (A seminar in a
paragraph.)

Now, all the topics that Mentor and Telelogic seem to cover are
automated ways of writing everything down. However, that's less of a
DO-254 topic and more of a marketing topic.

Personally I'm back at the level of "I don't know if our methodology
for creating verification is particularly robust." There was a white
paper at Mentor on the use of advanced verification methods to address
DO-254 design assurance. In that paper, there's a line that says
"Traditionally, verification engineers applied test patterns to the
device that contained predetermined, expected results. In this
methodology, each test targeted a specific function of the device and
completed as soon as the function was confirmed. This process leads
to numerous largely redundant tests with long regression cycles."
THAT's where I'm at. We are, even now, creating a test bench for a
medium-to-large design that basically is switching modules on and off
for testing specific things and will have expected results per test.
Is this a good method? I don't know, but I'd like to know if there
are options.

So perhaps the Janick book is a more basic place to start on advanced
verification, rather than the toolsets or the seminars.

(Interestingly enough, the paper I mentioned was about a project AT
Rockwell Collins... just goes to show you how much communication we
have with the mothership way out here in the forest! ;-)).

Best regards,
Mark Norton
 

Marcus Harnisch

Hi Mark

The main reason I mentioned this seminar wasn't the tool itself, but
the Mentor engineer giving a brief introduction into current
verification approaches, which most people in the audience were not
familiar with. Not sure if I will ever board a plane again after this
frightening experience... ;)

M. Norton said:
To be perfectly honest, I wouldn't bother. As I said, as long as
you're already doing the right sort of things (developing good
requirements, holding design reviews, verifying the design,
controlling changes) I can summarize DO-254 in a single sentence.
WRITE EVERYTHING DOWN. If you aren't already doing the right sorts of
things, then DO-254 basically says: DO THEM. Which mainly boils down
to, develop good requirements, hold design reviews, verify your
design, control changes, and write everything down. (A seminar in a
paragraph.)

Now, all the topics that Mentor and Telelogic seem to cover are
automated ways of writing everything down. However, that's less of a
DO-254 topic and more of a marketing topic.

I do think that tools like these are helpful indeed. But if a company
could spare an engineer, they could write their own database -- this
is no rocket science (pun intended).

The tool that's been introduced parses text documents (in the broadest
sense -- source code, .doc, Framemaker?, etc.) for pragma-like strings
(hidden in source code comments) and fills a database with whatever it
could find. Requirements, tests, functional coverage setup. Everything
will be cross-referenced. Dependencies will be analyzed and
visualized. I believe the Questa UCDB API will be used to access
coverage results.

Example: I can select a requirement of my project and see cross
references to all parts of my source code implementing it. I will be
pointed to the corresponding section in my verification plan and all
assertions/checkers relating to this. Missing links will be reported.

I believe that most of the functionality that had been presented in
the seminar could be implemented with basic knowledge of databases,
some scripting language, Graphviz, and perhaps a web-browser as
poor-man's GUI.

As usual, a decision has to be made whether it is more economic to buy
a canned solution or roll your own.
Personally I'm back at the level of "I don't know if our methodology
for creating verification is particularly robust." There was a white
paper at Mentor on the use of advanced verification methods to address
DO-254 design assurance. In that paper, there's a line that says
"Traditionally, verification engineers applied test patterns to the
device that contained predetermined, expected results. In this
methodology, each test targeted a specific function of the device and
completed as soon as the function was confirmed. This process leads
to numerous largely redundant tests with long regression cycles."
THAT's where I'm at. We are, even now, creating a test bench for a
medium-to-large design that basically is switching modules on and off
for testing specific things and will have expected results per test.
Is this a good method? I don't know, but I'd like to know if there
are options.

The (simplified) difference between directed and
pseudo/constrained-random verification is that the former finds bugs
you expect, while the latter finds bugs that you don't expect. Those
are usually the ones that bite you months later in the field.

In practice you will have a mixture of both approaches. It should be
noted, however, that a directed test could be considered a
constrained-random test with extremely tight constraints.

Best regards
Marcus
 
M. Norton

Marcus Harnisch said:
The main reason I mentioned this seminar wasn't the tool itself, but
the Mentor engineer giving a brief introduction into current
verification approaches, which most people in the audience were not
familiar with. Not sure if I will ever board a plane again after this
frightening experience... ;)

Sheesh, everyone says that ;-). We do a good job. I just think we
could do a better job with more sophisticated techniques (which is
what I'm trying to find out.) Some of the impetus to improve does
come from the certification agency that will be (and should be) asking
tougher questions than they have in the past.

However, I think I would benefit more from material that is more aimed
at the modern verification approaches in detail. And I'm getting
there. There have been some good resources pointed out.
I do think that tools like these are helpful indeed. But if a company
could spare an engineer, they could write their own database -- this
is no rocket science (pun intended).

Certainly they have value, but at that point we have left the realm of
pure verification and entered process management. Essentially all the
database routines are writing everything down. Now we have entered
the realm of "everyone has a widget to sell" and that's fine; it makes
the world go around. I just believe in cutting through the
rhetoric.
I believe that most of the functionality that had been presented in
the seminar could be implemented with basic knowledge of databases,
some scripting language, Graphviz, and perhaps a web-browser as
poor-man's GUI.

But... it's not verification. It's process management. So, it
extends past my purview and interest at the moment.
The (simplified) difference between directed and
pseudo/constrained-random verification is that the former finds bugs
you expect, while the latter finds bugs that you don't expect. Those
are usually the ones that bite you months later in the field.

In practice you will have a mixture of both approaches. It should be
noted, however, that a directed test could be considered a
constrained-random test with extremely tight constraints.

Yes, I think the issue is that I think of these methods and try to
immediately wrap them around our current devices and the fit is not
always perfect.

For instance, we have one FPGA that receives, upon request, lines of a
video frame. It combines the lines in a predetermined method,
modifies them in a predetermined method, and spits them out upon
request. Those in and out interfaces are not random and cannot be
random; they are tightly tied to a frame rate and data rate from their
video sources. At this point, it seems to me that applying a
constrained random input loses a lot of meaning. However, there are
many tools in the toolbox and perhaps the transactional model makes
more sense. ("This frame, use image A as an input, next image A+1,
now use image B...")
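(To make that idea concrete: in this style the "transaction" is a
whole frame rather than a bus cycle, with a hypothetical send_frame
procedure hiding the line-by-line handshaking. Every name below is
invented for illustration.)

```vhdl
-- Frame-level transactions: the test body is just the frame sequence.
test_sequence : process
begin
  send_frame(clk, video_in, image_a);       -- "this frame, use image A"
  send_frame(clk, video_in, image_a_next);  -- "next, image A+1"
  send_frame(clk, video_in, image_b);       -- "now use image B"
  wait;
end process;
```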

However, the computer interface might be random, and one might see
what effect random control and status reads have while all of this
image processing is going on.

I just need to identify all the tools and learn how to use them. Not
everything is a nail.

Anyhow I do appreciate all the help. I think I have some pointers on
which to continue.

Best regards,
Mark Norton
 
Marcus Harnisch

M. Norton said:
For instance, we have one FPGA that receives, upon request, lines of a
video frame. It combines the lines in a predetermined method,
modifies them in a predetermined method, and spits them out upon
request.

How do you ensure that your sequence of frames is exhaustively testing
the internal logic? Can you prove it? You'd have to come up with
specific sequences consisting of artificial frames to catch corner
cases in your logic. And that would be only those corner cases that
you are aware of already! What about all the unknown corner cases?

After some significant (expensive!) bench time, racking your brain
over an exhaustive test sequence to verify a particular portion of
the logic deep in the design, what if a designer applies a "minor"
change, rendering your entire effort useless?

Randomly generated images have a much higher chance of catching those
issues "by chance".

Think of this real-world analogy: a software engineer comes up with a
really cool application(tm). He tests it thoroughly, activating every
function and verifying that it works. Proudly he puts it on display at
the next trade show. Joe Blow, hoping for some gadgets to collect (USB
sticks, flyers, everything he can get), sees the computer with the
really cool application and starts wiggling the mouse and pushing
buttons, causing the application to crash.

It may take a few of these Joe Blows to exercise a specific sequence
of mouse clicks that would trigger a known issue you didn't have time
to fix before the show, but rest assured they *will* break the
application one way or another.
Those in and out interfaces are not random and cannot be random;
they are tightly tied to a frame rate and data rate from their video
sources. At this point, it seems to me that applying a constrained
random input loses a lot of meaning.

Could very well be the case -- I don't know your application after
all. But remember what I wrote above.
However, there are many tools in the toolbox and perhaps the
transactional model makes more sense. ("This frame, use image A as
an input, next image A+1, now use image B...")

Not sure if this makes sense in your application, but what if a
different image sequence triggered a bug?

Besides, transactional models and random don't contradict. Just two
different buzzwords.
However, the computer interface might be random, and one might see
what effect random control and status reads have while all of this
image processing is going on.

That's why I said that normally a mixed approach is used. Although,
rather than directed tests, I would still choose random tests with
the constraints pulled tighter:

"Generate image sequence consisting of two images from the A series,
followed by one image from B series"

This will very likely create the sequence you were looking for in
reasonable time and also create other interesting combinations.

Since you were apparently expecting a corner case with A, A+1, B,
collect coverage on that specific sequence, or on all combinations of
two As followed by one B.
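(One way to collect that coverage in plain VHDL, which has no
covergroup construct, is a small monitor that watches the generated
sequence and counts hits. The frame_done/frame_kind/end_of_test
signals and the 'A'/'B' encoding are invented for illustration.)

```vhdl
-- Poor-man's functional coverage: count how often the interesting
-- A, A, B sequence actually occurred during the random runs.
cover_aab : process (clk)
  variable history  : string(1 to 3) := "   ";
  variable aab_hits : natural := 0;
begin
  if rising_edge(clk) then
    if frame_done = '1' then
      history := history(2 to 3) & frame_kind;  -- shift in 'A' or 'B'
      if history = "AAB" then
        aab_hits := aab_hits + 1;
      end if;
    end if;
    if end_of_test = '1' then
      assert aab_hits > 0
        report "coverage hole: A,A,B sequence was never generated"
        severity warning;
    end if;
  end if;
end process;
```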

Regards
Marcus
 
M. Norton

How do you ensure that your sequence of frames is exhaustively testing
the internal logic? Can you prove it? You'd have to come up with
specific sequences consisting of artificial frames to catch corner
cases in your logic. And that would be only those corner cases that
you are aware of already! What about all the unknown corner cases?

Points taken. Obviously impossible to know from what I've stated, but
the content of the frames does not matter, by-and-large, for the
processing. We are taking the images and then performing an algorithm
on the image to distort it such that it appears flat when viewed
reflected off of a curved surface. It could be a dog, or a cat, or
(more likely in our case) a runway.

Still, your main point, I believe, is not to underestimate the
potential of randomness, and that I understand. In this particular
instance, I suppose there might be something to gain from random
pixels, then performing a reverse distortion (if possible... I'm not
entirely certain the distortion function is fully reversible in a
one-to-one function-mapping sort of way) and verifying that the
distortion algorithm executes precisely on the dog, the cat, or the
runway. I believe in the past many of these visual quality issues
have been managed at a systems level, and I'm not entirely certain
how they test for it there.
Not sure if this makes sense in your application, but what if a
different image sequence triggered a bug?

Besides, transactional models and random don't contradict. Just two
different buzzwords.

Well, I would hesitate to call them buzzwords. They have semantic
content distinguishing two differing methods of applying stimulus to a
design. ;-) I could call it flying-pink-dragon-method-alpha and
carrot-mode, but that wouldn't be terribly productive.

But, I've been reading Anathem (by Neal Stephenson) lately and thus
such things have been on my mind quite a lot.

Best regards,
Mark Norton
 

Marcus Harnisch

M. Norton said:
Well, I would hesitate to call them buzzwords. They have semantic
content distinguishing two differing methods of applying stimulus to a
design. ;-)

No. These are not two different methods to apply stimulus. Both can be
used together. Most modern verification methodologies rely on
transactions that have been generated using a constrained random
approach.

Regards
Marcus
 