OS-VVM cross coverage vs directed testing

alb

Hi everyone,

I'm not sure this is the best place to post this thread since it's more
related to verification than VHDL itself, but since OS-VVM is a VHDL
package I figured people here might also be interested in the topic
(please redirect me elsewhere if needed).

I've followed the Aldec webinar on OS-VVM (see:
http://www.aldec.com/en/support/resources/multimedia/webinars) and found
it quite interesting, to the point that I'll definitely give OS-VVM a try.

I nevertheless have some doubts/comments about the cross coverage part.
The webinar presents an ALU design as a DUT to introduce the concept of
cross coverage of the two input registers in order to test all the
combinations, but why not use two nested 'for' loops to generate all
the cases:

<code>

-- ACov is a variable of CovPType
for i in reg1'range loop
  for j in reg2'range loop
    DoAluOp(i, j);        -- do transaction
    ACov.ICover((i, j));  -- collect coverage for the pair just applied
  end loop;
end loop;

</code>

The webinar goes on to show how you can optimize the coverage using
what they call 'intelligent coverage', which focuses on coverage holes.

Would the for loop above be less efficient than the proposed
intelligent cross coverage (in terms of simulation cycles)?

Moreover, the 'intelligent coverage' is set up with the call:

ACov.AddCross(2, GenBin(0,7), GenBin(0,7) );

where the first parameter is an 'AtLeast' parameter. Shouldn't it be
'AtMost'??? If you want each cross case to be hit no more than a
certain number of times (to reduce the 'logN' factor of a randomly
generated pair of values) then I do not see why you have to specify an
'AtLeast' parameter... Am I missing something?
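
For reference, the intelligent coverage loop shown in the webinar looks
roughly like this (reconstructed from memory, so the exact names may be
off; RegIn1/RegIn2 are integer variables):

<code>

-- keep asking the coverage model for a hole until everything is covered
while not ACov.IsCovered loop
  (RegIn1, RegIn2) := ACov.RandCovPoint; -- random point from an uncovered bin
  DoAluOp(RegIn1, RegIn2);               -- do transaction
  ACov.ICover((RegIn1, RegIn2));         -- collect coverage
end loop;

</code>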

As a general comment, wouldn't it be more appropriate to find a better
suited example to show the real power of cross coverage [1]?

Any pointer to freely available code that illustrates the usage of
OS-VVM is extremely helpful.

Cheers,

Al

[1] I do not have one to provide, and I'm keen to see what the real
advantage is of having the bins randomly filled instead of sequentially
filled with directed cases.
 
goouse99

On Thursday, July 25, 2013 at 11:12:25 AM UTC+2, alb wrote:
[]


Hi Al,
doing verification with Constrained Random Stimuli and Coverage methods sometimes needs a second thought to fully understand it.

So let's discuss the things you pointed out.

In your nested loop example you iterate over all possible input combinations.
For some small ALU this might be acceptable. But how about some wide input device (e.g. >64 bits)?
It takes just too much time to create all the stimuli, not to mention the simulation time to get the results.
While in most cases only a certain (hopefully small) amount of stimuli would be sufficient for full coverage, the problem is always to find these.

Constrained Random Stimuli generation and Coverage are one approach to reducing the amount of stimuli that a simulation has to use.
Especially Cross-Coverage can be a mighty tool, since the binning can be multidimensional.
According to the number of hits in selected bins the testbench can adapt the randomization of the stimuli patterns. So if it becomes obvious that some part of the circuit has been excessively stimulated, the randomization constraints can be modified to reduce the probability of triggering that circuit. Therefore the chance that other parts of the circuit are triggered by some stimuli pattern rises.
Still, it cannot be excluded that the already tested circuit part becomes triggered again once in a while.
That also explains why there is just a definition for a minimum number of hits in the AddCross function. You simply can't avoid additional hits in that bin (except by explicitly disabling it, but that's a badly chosen constraint).

Here we are currently working with SystemVerilog and using some OpenCores FPU model as DUT.
This FPU has an Operator Selection and a Rounding Mode Selection.
The Operations do not use the full possible set of binary combinations the input offers. So there are several unused opcodes.
If you do a cross-coverage of the stimuli there might appear hits for the bins defined for the unused opcodes.
While these have to be tested for full coverage (actually, the original FPU stalls when that happens), you won't want to waste too many stimuli on these cases.
So you can reduce the probability of generating unused opcodes after a defined number of hits has been recognized.
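
In OSVVM terms the idea would look roughly like this (we do this in SystemVerilog, so the sketch below is from memory; UnusedOpCov, RV, Opcode and DoFpuOp are made-up names for a coverage model of the unused opcodes, a RandomPType variable, an integer variable and the transaction call):

<code>

-- once the unused opcodes have met their coverage goal, stop
-- generating them and spend the remaining stimuli on the real ones
if UnusedOpCov.IsCovered then
  Opcode := RV.RandInt(0, 7);   -- used opcodes only
else
  Opcode := RV.RandInt(0, 15);  -- any opcode, including the unused ones
end if;
DoFpuOp(Opcode);
UnusedOpCov.ICover(Opcode);     -- values outside the defined bins are ignored

</code>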

One might ask, why not set the constraints that way from the beginning.
A possible answer could be: to achieve as many simple test cases in the test plan as early as possible.
And like the ALU, even the FPU is a very straightforward device.
More complex systems might switch into different states during runtime at a hardly predictable rate. So the testbench might need to be able to adapt to this. But these would be quite difficult examples, hardly suited for grasping the concepts.

Back to the loop example from the beginning:
You asked about the efficiency compared to CR and coverage.
Well, due to the random properties of the stimuli generation this cannot be answered in a deterministic way.

Just some dumb example (because every case is different):
If i-max and j-max are needed to achieve full coverage, the loop has to run to its very end, which can take a looooong time.
With some luck the random stimuli create this combination very early and you achieve full coverage after a ridiculously short simulation time.
But with Murphy as your teammate it may also happen that this stimulus never ever appears (e.g. due to bad constraints).

So Constrained Random Testing gives you a chance (but not a guarantee) to achieve full coverage faster.

One way to improve your luck could be to start the testbench on multiple computers using different randomization seeds.
You might observe that the achieved goals of the test plan appear in a different sequence and at different times on the multiple machines, and you can actually merge the coverage results to achieve full coverage and thus fulfill your test plan.
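
If you are doing this in OSVVM, its CoveragePkg has WriteCovDb/ReadCovDb for exactly that; roughly like the sketch below (from memory, so check the exact signatures in your version; the file names are made up):

<code>

ACov.WriteCovDb("cov_seed1.txt");  -- at the end of each run

-- later, in a merge step:
ACov.ReadCovDb("cov_seed1.txt");
ACov.ReadCovDb("cov_seed2.txt", Merge => TRUE);
if ACov.IsCovered then
  report "test plan coverage met across runs";
end if;

</code>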

I hope you find some useful ideas in my explanations.

Have a nice simulation (or should I say verification here?)
Eilert
 
Andy

Al,

To fit in the time allowed, examples and demonstrations are often simplified, which in this case also means that a simple directed test would have been just as easy and effective to exhaustively test the same features of the DUT (register selection).

In a real verification effort, the test would also need to test different values in the registers, along with different operations performed by the ALU. Exhaustively testing every possible combination of these conditions might take too long to be practical.

When exhaustively covering every possible condition is not practical, CRS allows you to cover more or fewer conditions by just running the simulation for longer or shorter times (trying more or fewer random stimuli along the way), rather than having to modify the stimulus generation code to generate more or fewer different stimuli.

The advantage of Constrained Random Stimulus is not usually a reduced number of stimuli generated and simulated.

Rather, especially for complex DUTs, the advantage of CRS is the relative ease of defining LOTS of stimuli in different orders, with different values and combinations of values, etc., without defining (and running) ALL possible stimuli.

I look at CRS as being kind of like when your spouse sends you to the store to buy some milk: You could get in your car and drive to the store, get the milk, and bring it home quickly and efficiently: mission accomplished. Or you could walk to the store, not necessarily by the shortest path, and you might find some nice flowers along the way, and take them home too: mission also accomplished, but with a bonus!*

CRS is about exposing unexpected conditions on your way to testing the expected conditions.

Where intelligent coverage helps is in reducing the total number of stimuli (and therefore runtime) from N*log(N) to just N to reach the desired coverage, by shaping the random generation of future stimuli to avoid generating additional stimuli that exceed already-covered goals.

Just running the same directed tests over and over on the same DUT does not tell you anything. Running CRS tests over and over (with new randomization seeds) on the same DUT keeps testing new conditions, and therefore provides new and valuable information. And those multiple tests can be run in parallel on multiple machines, or in series over nights and weekends on one or more machines.

*Unless your spouse is allergic to the flowers, or REALLY wanted the milk NOW!

Andy
 
Jim Lewis

Hi Al,
I'm not sure this is the best place to post this thread since it's more ...
Here is fine. Two other good resources:
http://osvvm.org/ -- has a bulletin board and such for Q & A
http://www.synthworks.com/blog/osvvm/ -- blog focused on OSVVM by me (the principal OSVVM developer).

I've followed the Aldec webinar on OS-VVM (see:
http://www.aldec.com/en/support/resources/multimedia/webinars) and found
it quite interesting, to the point that I'll definitely give OS-VVM a try.
Note, there are several OSVVM webinars there, and the recent one from July 18, 2013 is on page 2.

The webinar presents an ALU design as a DUT to introduce the concept of
cross coverage ...
The ALU example is intended to be simple and quick to understand. Your code with loops is just as efficient as the Intelligent Coverage. One difference is that if you change the coverage goal to 10, then your test repeats the same pattern; however, Intelligent Coverage does not. For designs that potentially have order dependencies, this is important. For testing an ALU, this is not so important.

An ALU is just one example, and there each bin had only one value in each range. Since OSVVM allows functional coverage to be defined incrementally with sequential code, creating a non-symmetric functional coverage model is simple. Consider the weighted coverage example; I can further extend it to allow additional parameters, such as a ranged data value. The intent of this parameter is solely for test generation.
<code>

Bin1.AddBins( 70, GenBin(0), GenBin(1, 255, 1) ) ; -- Normal Handling, 70%
Bin1.AddBins( 20, GenBin(1), GenBin(10, 15, 1) ) ; -- 20%
Bin1.AddBins( 10, GenBin(2), GenBin(21, 32, 1) ) ; -- 10%

</code>
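
Generation then draws directly from that model; a minimal sketch (Selector and Data are integer variables, and DoTransaction stands in for whatever transaction call you have):

<code>

while not Bin1.IsCovered loop
  (Selector, Data) := Bin1.RandCovPoint; -- picks only from uncovered bins
  DoTransaction(Selector, Data);
  Bin1.ICover((Selector, Data));
end loop;

</code>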


See Matthias Alles' blog post http://osvvm.org/archives/550 for how he is using it and why the randomization provides a more realistic test than looping.

Intelligent Coverage is a step up from SystemVerilog's Constrained Random. For the same problem, Constrained Random will take on average N*LogN iterations. Even in this little problem, my constrained random example took 315 iterations vs the 64 iterations of Intelligent Coverage (or of your looping).

Moreover, the 'intelligent coverage' is set up with the call:
ACov.AddCross(2, GenBin(0,7), GenBin(0,7) );
where the first parameter is an 'AtLeast' parameter. Shouldn't it be
'AtMost'???

AtLeast means a coverage model needs to see a value in the bin at least that number of times to consider the model covered. The term is borrowed from SystemVerilog and 'e'. From a constrained random perspective, to consider a coverage model "covered", all bins must have >= their AtLeast value. Since the constrained random approach generates extra values (approximately LogN of them), some bins will have more than their AtLeast value.

From an Intelligent Coverage perspective, the bins get exactly the AtLeast value. So perhaps it would have been more appropriate to name it CoverageGoal. Hard to have that hindsight. The OSVVM functional coverage has evolved over several years now and it did not originally have the Intelligent Coverage feature. Instead, it used constrained random based on sequential code and randomization (similar to the constrained random pattern shown in the presentation).

Note that @Eilert's response is based on a SystemVerilog perspective that only has Constrained Random. In OSVVM, we don't do a random trial of different seeds to try to improve our coverage.
Any pointer to freely available code that illustrates the usage of
OS-VVM is extremely helpful.
What you saw in the presentation is just the tip of the iceberg.

Here is the "sell" part.
OSVVM is based on methodology and packages SynthWorks developed for our VHDL Testbenches and Verification class. In this class, we provide a superset of the OSVVM packages that facilitates transaction level modeling (TLM), self-checking, scoreboards, memory modeling, synchronization methods, functional coverage, and randomization. Our modeling approach is accessible to both verification and RTL designers. Class details are here:
http://www.synthworks.com/vhdl_testbench_verification.htm

We offer the class online, at a public venue, or on-site. Our class schedule is below. You can also find links to more information about our instructor-led online classes.
http://www.synthworks.com/public_vhdl_courses.htm#VHDL_Test_Bench_Training

Best Regards,
Jim
 
alb

Hi Eilert,

On 25/07/2013 12:53, (e-mail address removed) wrote:
[]
In your nested loop example you iterate over all possible input
combinations. For some small ALU this might be acceptable. But how
about some wide input device (e.g. >64 bits)? It takes just too much
time to create all the stimuli, not to mention the simulation time to
get the results. While in most cases only a certain (hopefully small)
amount of stimuli would be sufficient for full coverage, the
problem is always to find these.

I agree with you that the number of combinations does explode as the
length of the registers increases. But you can still do binning without
any extra package just by manipulating the ranges of the
aforementioned for loop.
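
Something like the following is what I have in mind, for 8-bit
registers split into 8 bins of 32 values each (bin size and ranges are
of course arbitrary):

<code>

-- stride over one representative value per bin instead of every value
for i in 0 to 7 loop
  for j in 0 to 7 loop
    DoAluOp(i*32, j*32);  -- hits one corner of each (i,j) bin
  end loop;
end loop;

</code>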

Certainly if you do not randomly select an input you end up simulating
only one 'edge' of the bin (when the bin spans more than one value),
while with multiple simulation runs and a randomly generated input you
can cover the bin more uniformly. But in the end this does not matter
that much, since the choice of binning should be driven by the test
plan, and if you have covered a bin (even once) you should be happy.
Constrained Random Stimuli generation and Coverage are one approach to
reducing the amount of stimuli that a simulation has to use.

Do we agree that binning can be done with directed cases as well as CR?
If this is so, then as long as you hit each bin once (or whatever number
of times) you are done.

I do not see how randomly selecting a number in a bin (that I
define) reduces the amount of stimuli... I am clearly missing
something.
Especially Cross-Coverage can be a mighty tool, since the binning can
be multidimensional. According to the number of hits in selected bins
the testbench can adapt the randomization of the stimuli patterns. So
if it becomes obvious that some part of the circuit has been
excessively stimulated, the randomization constraints can be modified
to reduce the probability of triggering that circuit. Therefore the
chance that other parts of the circuit are triggered by some stimuli
pattern rises. Still, it cannot be excluded that the already tested
circuit part becomes triggered again once in a while. That also
explains why there is just a definition for a minimum number of hits
in the AddCross function. You simply can't avoid additional hits
in that bin (except by explicitly disabling it, but that's a badly
chosen constraint).

I guess I was wrong in calling the 'AtLeast' parameter 'AtMost', since
0 is a value included in an 'AtMost' condition but excluded from an
'AtLeast' condition.

But I should add that since 'intelligent' coverage constrains values to
be randomly selected across _holes_ in the coverage matrix, you are
actually saying: "once bin (k,l) has been hit N times, exclude it
from the set of available bins". There's no 'AtLeast' since you cannot
hit the fulfilled bin again.

Can you trace which input registers have 'triggered' a certain portion
of a circuit/code? If this is the case I see your point, but if you need
to do it manually by varying the weights (first with a uniform weighting
and then changing bin weights one by one) then it is still a real PITA.

I was not aware that code coverage could be somehow 'linked' to
functional coverage, since the two metrics are (in principle) completely
independent.
Here we are currently working with SystemVerilog and using some
OpenCores FPU model as DUT. This FPU has an Operator Selection and a
Rounding Mode Selection. The Operations do not use the full possible
set of binary combinations the input offers. So there are several
unused opcodes. If you do a cross-coverage of the stimuli there might
appear hits for the bins defined for the unused opcodes. While these
have to be tested for full coverage (actually, the original FPU
stalls when that happens), you won't want to waste too many stimuli on
these cases. So you can reduce the probability of generating unused
opcodes after a defined number of hits has been recognized.

Isn't the weighting process as tedious as writing 'directed' test cases?
And if you *have* to run a simulation to verify the unused opcode is
hit, then the number of simulations you need is the same.

The main difference I see is in parallel/multiple simulation runs,
since a directed test case will always test the same case, while CR
will likely test different cases every time, adding coverage in the
long run.
One might ask, why not set the constraints that way from the
beginning. A possible answer could be: to achieve as many simple test
cases in the test plan as early as possible.

In this respect binning is as crucial as looking for the minimum set of
test cases. What I do believe though is that with CR you may
inadvertently hit a corner case faster than in directed testing, but I
can argue that if this is the case then your test plan was missing
corner cases.

[]
Back to the loop example from the beginning: You asked about the
efficiency compared to CR and coverage. Well, due to the random
properties of the stimuli generation this cannot be answered in a
deterministic way.

Not determinable, but predictable with some level of confidence. The
webinar refers to an N * log(N) number of iterations to cover every bin
(where N is the number of bins), but I need to mention that this is
valid for a certain level of confidence, meaning you still may have
statistical fluctuations. (If I remember my statistics correctly, the
chance that some bin is still empty after N*ln(N) + c*N draws is at
most e^-c, so the tail falls off quickly but is never zero.)
Just some dumb example (because every case is different): If i-max
and j-max are needed to achieve full coverage, the loop has to run to
its very end, which can take a looooong time.

If they are needed, why don't you have a directed test case for that
instead of running till the very end?

Again, what I said about parallel simulations may apply here. In
directed testing you'll always test the same cases, without adding any
new coverage if you do not change/add test cases. This is not true for CR.
With some luck the random stimuli create this combination very early
and you achieve full coverage after a ridiculously short simulation
time.

I may say ridiculously long and I bet our chances are exactly the same,
unless you cheat and use weighting ;-)
But with Murphy as your teammate it may also happen that this stimulus
never ever appears (e.g. due to bad constraints).

Uhm... I don't seem to see the point. You need to have i-max and j-max
in your test plan and you decide to play dice to hit that case. Why not
simply have a directed case instead?

[]
One way to improve your luck could be to start the testbench on
multiple computers using different randomization seeds. You might
observe that the achieved goals of the test plan appear in a different
sequence and at different times on the multiple machines, and you can
actually merge the coverage results to achieve full coverage and thus
fulfill your test plan.

In this case I fully agree with you. And as of now it is the only
convincing argument to go for CR. Running on multiple computers (or
running on the same computer multiple times) will certainly shorten
verification, because your simulations are incrementally covering more
without the need to add test cases.
I hope you find some useful ideas in my explanations.

I hope you find my counter arguments as interesting as I found yours.

Al
 
alb

Hi Jim,

On 26/07/2013 01:35, Jim Lewis wrote:
[]
Note, there are several OSVVM webinars there, and the recent one from July 18, 2013 is on page 2.

Thanks for the hint! To be honest, I superficially assumed the ordering
was such that new stuff would appear on the first page...
The ALU example is intended to be simple and quick to understand.
Your code with loops is just as efficient as the Intelligent
Coverage.

Ok, now we are on the same page!
One difference is that if you change the coverage goal to 10, then
your test repeats the same pattern; however, Intelligent Coverage
does not. For designs that potentially have order dependencies, this
is important. For testing an ALU, this is not so important.

Shouldn't designs that have a potential order dependency have a test
plan that explicitly covers the case? Your example actually fits my
mental model perfectly, i.e. CR may find 'bugs' in the verification
plan since it broadens the scenarios w.r.t. directed testing.

[]
See Matthias Alles' blog post http://osvvm.org/archives/550 for how he
is using it and why the randomization provides a more realistic test
than looping.

The post is extremely useful indeed. When your device needs to operate
in randomly changing conditions then I agree that directed cases are
limiting the scenarios, effectively missing possibly critical ones
(again, a hole in your verification plan though).
Intelligent Coverage is a step up from SystemVerilog's Constrained
Random. For the same problem, Constrained Random will take on
average N*LogN iterations. Even in this little problem, my
constrained random example took 315 iterations vs the 64 iterations
of Intelligent Coverage (or of your looping).

I'm trying to understand from a statistical standpoint where the N *
logN comes from, but I have a storm of physicists around me and I'm sure
I'll get a reasonable answer on that ;-)
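
(In the meantime: this looks like the classic 'coupon collector'
problem. Filling all N bins with uniform random draws takes on average
N*H_N draws, where H_N = 1 + 1/2 + ... + 1/N is roughly ln(N); for
N = 64 that gives about 64 * 4.7, i.e. some 300 draws, which is
consistent with your 315.)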
[]
From an Intelligent Coverage perspective, the bins get exactly the
AtLeast value. So perhaps it would have been more appropriate to
name it CoverageGoal. Hard to have that hindsight. The OSVVM
functional coverage has evolved over several years now and it did not
originally have the Intelligent Coverage feature. Instead, it used
constrained random based on sequential code and randomization
(similar to the constrained random pattern shown in the
presentation).

Getting the correct naming is not always straightforward. Now I see the
heritage behind the name, and changing the name would break backward
compatibility.
Note that @Eilert's response is based on a SystemVerilog perspective
that only has Constrained Random. In OSVVM, we don't do a random
trial of different seeds to try to improve our coverage.

I'm sorry, but I believe that is a counterargument to the case you
previously suggested. When randomly changing conditions matter,
then I believe that full coverage changes meaning (and this is also
valid for Matthias Alles's example): covering a set of bins is not
sufficient anymore. You need to include sequencing.

Increasing the number of trials certainly increases the 'list of
sequences' that your DUT experiences, and therefore does increase
coverage; it is simply that the test plan was not detailed enough to
specify how critical a certain set of sequences might have been.

Bear in mind though that, if the state of your DUT depends on the M
previous states and you have N possible (unique) bins, the number of
possible 'sequences' your device might experience is N!/(N-M)! (e.g.
with N = 64 bins and M = 3 that is already 64*63*62, i.e. roughly
250,000 ordered sequences)... I guess you'll need more than just
'intelligent' coverage to handle that ;-)
 
Andy

Al,

Yes, as long as you meet your coverage goals, you can generate the stimulus any way you want, and achieve the same results.

Now, let's look at ways to generate such stimulus. You could use a directed loop to generate your stimulus. It is simple enough to do, assuming you want/need (and have the time to simulate) complete coverage (all possible combinations).

If you only have time to simulate partial coverage, how do you decide on, and then code a stimulus generator to produce, the coverage you seek? How do you verify that it is giving you the coverage that you want? You might want a coverage model for that...

You could use constrained random methods to generate the stimulus, independently of the coverage model. It may take a little more time to code (especially if you are seeking 100% coverage), and maybe a bit more time to sim (generating random numbers is fast, but not as fast as a loop counter). But at least it would run in different orders (in case there is some dependency on order). And then you really do need a coverage model to verify that your random stimulus is covering what you wanted.
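
Using OSVVM names, a minimal sketch of that pattern (ACov is the coverage model, RV a RandomPType variable, and DoAluOp as in the original post):

<code>

-- constrained random generation, with the coverage model
-- used only as a passive observer
while not ACov.IsCovered loop
  i := RV.RandInt(0, 7);  -- randomize independently of the model
  j := RV.RandInt(0, 7);
  DoAluOp(i, j);
  ACov.ICover((i, j));    -- the model just records what happened
end loop;

</code>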

Why can't we just define the desired coverage in the first place, and then somehow use the coverage model to generate stimulus to meet the coverage?

We can!

To me, this is the HUGE benefit of OSVVM-style intelligent coverage: OSVVM provides the ability to use the coverage model itself to generate the stimulus, efficiently and randomly.

Andy
 
alb

Hi Andy,

On 26/07/2013 17:30, Andy wrote:
[]
If you only have time to simulate partial coverage, how do you
decide on, and then code a stimulus generator to produce, the coverage
you seek? How do you verify that it is giving you the coverage that
you want? You might want a coverage model for that...

Now things get a little bit more interesting... It's not just a matter
of implementation, rather a matter of defining a model to meet the
ultimate goal: is the DUT performing per the specifications?

Going from specs to a coverage model is yet another piece of the
verification process that is fundamentally critical. If the coverage
model has no item/point for 'out of order' packets/configurations/states,
then we might mistakenly reach full coverage without having tested the
device fully.
You could use constrained random methods to generate the stimulus,
independently of the coverage model. It may take a little more time to
code (especially if you are seeking 100% coverage), and maybe a bit
more time to sim (generating random numbers is fast, but not as fast
as a loop counter). But at least it would run in different orders (in
case there is some dependency on order). And then you really do need
a coverage model to verify that your random stimulus is covering what
you wanted.

Are you saying that no matter what the coverage model is I could forget
about it and just throw random stimuli at my device? Uhm, but as you
said, I would need a coverage model to understand what I covered,
therefore I'd better have a coverage model in the first place.
Why can't we just define the desired coverage in the first place, and
then somehow use the coverage model to generate stimulus to meet the
coverage?

We can!

To me, this is the HUGE benefit of OSVVM-style intelligent coverage:
OSVVM provides the ability to use the coverage model itself to
generate the stimulus, efficiently and randomly.

Please don't take me wrong, I'm not advocating against OSVVM's
intelligent coverage, I'm just trying to understand what the advantage
of random stimuli is w.r.t. directed cases if you have _fully_
specified the desired coverage.
 
alb

Hi everyone,

sorry for top-posting over my own post, but during the weekend I
figured out something on this topic that might be worth sharing. You
may ignore this message if you're not interested in metaphors or
analogies with real-life experience or parables... :)

Last Saturday I bought a nice set of teak chairs for my little terrace,
and the clerk at the shop advised me to treat them with protective oil
to make them more resistant to moisture, rain, stains...

I grabbed a bucket of teak oil and stared at my chair wondering where
to start (I was making my 'plan'); in the end I wanted to have my
chair fully painted ('covered') with oil, and that was my 'goal'.

I was thinking that maybe I could have sketched out the chair elements
(seat, armrests, legs, ...) and painted them one by one, but I figured
that it would have been a little too much for such a task, therefore I
started 'randomly' painting here and there.

I probably ended up passing multiple times over the same spots, and
after a visual inspection I realized I had missed a couple of spots,
which were then carefully painted. I was done: my set of chairs fully
painted, my back a little bit sore and my wife happily smiling.

In the end I realized that if I had sketched out the elements and had
painted them one by one, I would have used a lot less paint and maybe
even less time, but certainly the sketch itself would have taken some
extra time w.r.t. what I did. I also figured that if I had had the
chance to 'randomly select' only the non-painted spots, I would have
used the same amount of paint as for the sketched solution, with the
huge benefit of not having to spend time on silly planning...

In conclusion, I guess that a random approach had some benefits over
a 'directed' approach since there was no need to list all the parts and
check them off, but a more 'intelligent' random selection would
certainly have been as efficient (in terms of amount of paint used) as
the directed case and as easy as the random one...

I'm not sure if I'm close to a paradigm shift in my verification
methodology (if I ever had one...), but certainly this discussion is
helping me a lot.

[]
 
