Millions of combinations of test vectors for an ALU

Lily

Hello,

I need to verify my ALU. I have generated an input text file in which each
vector consists of 20 bits, as follows:
4 bits - opcode
8 bits - operand A
8 bits - operand B

What is the best way to read an input text file that contains 2 to the power
of 20 combinational test cases? Please give an example in VHDL. Can I just
read the test vectors line by line, as in the code below?

-- context clauses needed at the top of the testbench file:
library ieee;
use ieee.std_logic_1164.all;
use std.textio.all;
use ieee.std_logic_textio.all;  -- read() for std_logic_vector before VHDL-2008

-- inside the testbench architecture:
read_vectors : process (clock, rst)
  file infile       : text open read_mode is "input.txt";  -- VHDL-93 file syntax
  variable aluinput : std_logic_vector(19 downto 0);
  variable buff     : line;
begin
  if rst = '0' then
    A <= "00000000";
    B <= "00000000";
  elsif clock'event and clock = '1' then
    if not endfile(infile) then
      readline(infile, buff);
      read(buff, aluinput);
      -- bit layout assumed: opcode in 19..16, operands in the lower 16 bits
      A <= aluinput(7 downto 0);
      B <= aluinput(15 downto 8);
      S <= aluinput(19 downto 16);
    end if;
  end if;
end process read_vectors;

Is this an efficient way to do it, or are there better alternatives?
Does anybody out there know how to read the input text file in a C model?

Thanks in advance, Lily.
 

VhdlCohen

Lily said:
I need to verify my ALU. I have generated an input text file in which each
vector consists of 20 bits, as follows:
4 bits - opcode
8 bits - operand A
8 bits - operand B

What is the best way to read an input text file that contains 2 to the power
of 20 combinational test cases? Please give an example in VHDL. Can I just
read the test vectors line by line, as in the code below?

You obviously did not generate those million test vectors manually; otherwise
that would be a lot of typing!!! In addition, reading and textio processing of 1
million vectors takes extra time.

You used an algorithm. I suggest instead that you use the same algorithm, or
an LFSR (available at my site), to generate those vectors, instead of using a
file.
Example:

process (clk, reset)
  -- lfsr() is the LFSR step function (available at my site)
  variable lfsrv : std_logic_vector(19 downto 0) := (others => '0');
begin
  if reset = '1' then
    ...
  elsif rising_edge(clk) then
    A <= lfsrv(19 downto 16);   -- 4-bit opcode field
    B <= lfsrv(15 downto 8);
    C <= lfsrv(7 downto 0);
    lfsrv := lfsr(lfsrv);       -- next pseudo-random value
  end if;
end process;
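
A minimal sketch of what such an lfsr step function could look like; the tap
positions are taken from common maximal-length tables and should be verified
for your width, and the XNOR feedback is chosen here only so that the
all-zeros starting value above is a legal state:

function lfsr (v : std_logic_vector(19 downto 0)) return std_logic_vector is
  variable fb : std_logic;
begin
  -- assumed taps at stages 20 and 17; an XNOR-feedback LFSR locks up at
  -- all-ones rather than all-zeros, so the (others => '0') start value works
  fb := v(19) xnor v(16);
  return v(18 downto 0) & fb;   -- shift left and insert the feedback bit
end function lfsr;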
-----------------------------------------------------------------------------
Ben Cohen, Trainer, Consultant, Publisher  (310) 721-4830
http://www.vhdlcohen.com/  (e-mail address removed)
Author of the following textbooks:
* Using PSL/SUGAR for Formal and Dynamic Verification, 2nd Edition, 2004, ISBN 0-9705394-6-0
* Real Chip Design and Verification Using Verilog and VHDL, 2002, ISBN 0-9705394-2-8
* Component Design by Example, 2001, ISBN 0-9705394-0-1
* VHDL Coding Styles and Methodologies, 2nd Edition, 1999, ISBN 0-7923-8474-1
* VHDL Answers to Frequently Asked Questions, 2nd Edition, ISBN 0-7923-8115
-----------------------------------------------------------------------------
 

Ajeetha Kumari

Hi,

VhdlCohen said:
You used an algorithm. I suggest instead that you use the same algorithm, or
an LFSR (available at my site) to generate those vectors, instead of using a
file.

I would second the idea of pseudo-random vectors, and would add: use
functional coverage monitors to make sure you cover the interesting
values/corner cases such as 0, max_value, max/2 and all-bits-toggled, plus
some "buckets" to cover the rest. Depending on the implementation, one may
think of more corner cases.

And just to add on, one could use PSL to specify those coverage points;
nowadays many simulators support PSL, such as NCSim, ModelSim, etc.
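
For instance, a rough sketch of such cover points, written as PSL in VHDL
comments (the signal names A, B and clock are taken from the original post,
the labels are made up, and exact PSL support varies between simulators):

-- psl default clock is rising_edge(clock);
-- psl cov_both_zero : cover {A = x"00" and B = x"00"};  -- both operands zero
-- psl cov_both_max  : cover {A = x"FF" and B = x"FF"};  -- both operands at max value
-- psl cov_half_max  : cover {A = x"7F"};                -- roughly max/2 on operand A
-- psl cov_toggle_a  : cover {(not A) = prev(A)};        -- every bit of A toggled since the previous vector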

HTH,
Ajeetha
http://www.noveldv.com
Co-Author: Using PSL/SUGAR for Formal and Dynamic Verification 2nd
Edition, 2004
 

VhdlCohen

One more thing!
Millions of vectors may not finish!
Formal verification with an assertion would validate the functional
intent, but not the gate-level implementation.
Simulation with an Assertion-Based Verification methodology
(e.g., PSL, SVA) will provide functional coverage, as Ajeetha mentioned.
Equivalence checking (RTL/gate) after Assertion-Based Verification can verify
that the gate-level implementation is equivalent to the RTL.
 

Jim Lewis

First, take care, as millions of operations may not finish,
but 2**20 should be doable.

As Ben mentioned, you may wish to code your algorithm in VHDL
and not read from a file.
The problem with reading files is that, unless the simulator's
I/O buffering is good, the simulator has to stop and wait for
the OS to provide the data.
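
For example, a sketch of generating the whole 2**20 space directly in the
testbench (the signal names S, A and B follow the original post, and
ieee.numeric_std is assumed to be visible for the conversions):

stim : process
begin
  for op in 0 to 15 loop
    for ia in 0 to 255 loop
      for ib in 0 to 255 loop
        S <= std_logic_vector(to_unsigned(op, 4));
        A <= std_logic_vector(to_unsigned(ia, 8));
        B <= std_logic_vector(to_unsigned(ib, 8));
        wait until rising_edge(clock);   -- one vector per clock cycle
      end loop;
    end loop;
  end loop;
  report "all 2**20 vectors applied";
  wait;
end process stim;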

I am not sure I buy into going random initially. I would
probably apply a deterministic algorithm first; however,
what you do depends on what you are testing. Tests for
an adder are different from those for a multiplier. Also,
what you can do varies with the word width: as the word width
gets bigger, the importance of an algorithm increases.

Are there any dependencies between consecutive opcodes?
This is where random might help cover cases that you did
not think of. Otherwise, I would only run random vectors
on an ALU if time permitted.

Cheers,
Jim
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jim Lewis
Director of Training mailto:[email protected]
SynthWorks Design Inc. http://www.SynthWorks.com
1-503-590-4787

Expert VHDL Training for Hardware Design and Verification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 

Marcus Harnisch

Hi Ben and others,

You used an algorithm. I suggest instead that you use the same algorithm, or
an LFSR (available at my site) to generate those vectors, instead of using a
file.

Can anyone point out the advantage of using pseudo random vectors in
*this particular case*?

As I see it, the only benefit would be a better chance of all corner
cases being hit earlier than by just walking through the vectors. But
what are the corner cases anyway? If the ALU is purely combinational,
all input vectors should always generate a corresponding output vector
that is either right or wrong.

Besides, if the ALU is stateless, the input vector sequence doesn't
have any influence on the output. Applying a random sequence of
vectors is in no way different from applying any other sequence.

Third, what is PSL (or any other assertion language) supposed to help
with?

Did I miss anything? Yes, Jim's message ;-)

-- Marcus
 

Kai Harrekilde-Petersen

Marcus Harnisch said:
Hi Ben and others,



Can anyone point out the advantage of using pseudo random vectors in
*this particular case*?

As I see it, the only benefit would be a better chance of all corner
cases being hit earlier than by just walking through the vectors. But
what are the corner cases anyway? If the ALU is purely combinational,
all input vectors should always generate a corresponding output vector
that is either right or wrong.

Of course. All lossless computations are deterministic (lossy
compression isn't necessarily, and I can't think of a lossless computation
that is nondeterministic unless it includes a true random input).

In your case, I'd use the knowledge of the highly regular ALU
structure, i.e. there are known pathological cases, such as ~0
+ 1 producing a full carry, etc.

Also, if the ALU is designed as a smaller fundamental structure which
is replicated (not uncommon), why not test the fundamental block
first, and then add a few test vectors to ensure that the
interconnection is done correctly?

If you have 8 operations (AND, OR, XOR, NOT, NOR, NAND, add,
subtract), an A and a B input, plus a carry-in, you'd be covered with
8 x 2 x 2 x 2 = 64 vectors. And even here, some of the vectors will be redundant.
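
For illustration, a sketch of sweeping such a 1-bit slice exhaustively and
checking one of the opcodes against its expected function (the port names,
the opcode encoding and the settle time are all made up here, and
ieee.numeric_std is assumed for the conversion):

slice_check : process
begin
  for op in 0 to 7 loop
    for a in std_logic range '0' to '1' loop
      for b in std_logic range '0' to '1' loop
        for cin in std_logic range '0' to '1' loop
          op_s  <= std_logic_vector(to_unsigned(op, 3));
          a_s   <= a;
          b_s   <= b;
          cin_s <= cin;
          wait for 10 ns;        -- settle time for the combinational slice
          if op = 6 then         -- hypothetical "add" opcode
            assert result_s = (a xor b xor cin)
              report "1-bit slice: add sum mismatch" severity error;
          end if;
        end loop;
      end loop;
    end loop;
  end loop;
  wait;
end process slice_check;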

Besides, if the ALU is stateless, the input vector sequence doesn't
have any influence on the output. Applying a random sequence of
vectors is in no way different from applying any other sequence.

No, but using an LFSR is one hell of an effective way of generating
those vectors automatically and efficiently.
Third, what is PSL (or any other assertion language) supposed to help
with?

Using assertions is a very powerful way of testing things. Assertions
can be used (together with minimal FSM logic) to monitor a bus
protocol, or to guarantee that some property is always valid. Using
assertions will usually result in errors being detected sooner
rather than later in a simulation, thereby reducing the bug-tracking
time.

Regards,


Kai
 

VhdlCohen

Third, what is PSL (or any other assertion language) supposed to help with?

Using assertions is a very powerful way of testing things. Assertions
can be used (together with minimal FSM logic) to monitor a bus
protocol, or to guarantee that some property is always valid. Using
assertions will usually result in errors being detected sooner
rather than later in a simulation, thereby reducing the bug-tracking
time.

With PSL, you can write "end-to-end" assertions about what you expect the
functional operation of the design to be. For example, if IR =
ADD(mem(ir_addr), regfile(regid)) and the result is stored back to memory,
one could write:

property ADD_MEM is
  always {IR = ADD_MEM} |=>
    {[*3]; mem(prev(ir_addr, 3)) = prev(mem(prev(ir_addr, 3))) +
           regfile(prev(regid, 3))};

This states that if the current instruction register (IR) holds the
add-to-memory instruction, then 3 cycles later the following is expected:
memory @ the address set 3 cycles earlier =
the value of that memory location 3 cycles earlier + regfile @ the regid from
3 cycles earlier.
I computed the cycles as follows:
1. data fetch from memory
2. add of the fetched memory data + regfile data
3. store of the add result back to memory.
Note that the PSL does not show the FSM, or the individual cycles between
the instruction and the result; the statement shows what is expected as a
result of the instruction.
An error in the RTL implementation will be detected, provided the data is not
all zeros all the time (some randomness is needed).
Of course, PSL is explained in our book.
 

Marcus Harnisch

Kai,

Kai Harrekilde-Petersen said:
Of course. All lossless computations are deterministic (lossy
compression isn't necessarily, and I can't think of a lossless computation
that is nondeterministic unless it includes a true random input).

How does determinism come into play here? Every algorithm is
deterministic by definition, no?
In your case, I'd use the knowledge of the highly regular ALU
structure, i.e. there are known pathological cases, such as ~0
+ 1 producing a full carry, etc.

Why is a full carry more important than others? I don't see a way to
avoid an exhaustive test. Consider the pathological case that the ALU is
implemented as a lookup table. There is no propagation there, just input
and output vectors.
Also, if the ALU is designed as a smaller fundamental structure which
is replicated (not uncommon), why not test the fundamental block
first, and then add a few test vectors to ensure that the
interconnection is done correctly?

Yes -- if you can make that assumption.
No, but using an LFSR is one hell of an effective way of generating
those vectors automatically and efficiently.

You mean as opposed to walking through the numbers by incrementing
them? What is the benefit of using an LFSR here?
Using assertions is a very powerful way of testing things. Assertions
can be used (together with minimal FSM logic) to monitor a bus
protocol, or to guarantee that some property is always valid. Using
assertions will usually result in errors being detected sooner
rather than later in a simulation, thereby reducing the bug-tracking
time.

I think we are in violent agreement here. Being a frequent user of
assertions myself, I am not arguing against the benefits of assertion
languages in general.

The only thing I am wondering about is the added value of an assertion
over "traditional" methods to verify an ALU:

case S is
  -- Well, this is an assertion in some sense...
  -- (assuming ieee.numeric_std for the arithmetic)
  when ADD => assert unsigned(A) + unsigned(B) = unsigned(aluoutput)
                report "Mismatch!" severity error;
  when ...
end case;

If I could afford a tool that uses PSL assertions to do some sort of
formal checking, I'd probably shell out the money for pre-verified ALU
IP ;-)

Best regards,
Marcus
 

VhdlCohen

The only thing I am wondering about is the added value of an assertion
over "traditional" methods to verify an ALU

See my previous post, but the simple answer is that assertion languages, like
PSL or SVA, allow you to write an end-to-end assertion in an abstract,
readable language.
You can also use RTL assertions, like OVL. In any case, the concept of
Assertion-Based Verification is a good one.
PSL is also good for documenting requirements, architectural plans and
verification plans, and for verification through simulation and, in many
cases, formal verification.
 

Kai Harrekilde-Petersen

Marcus Harnisch said:
Kai,



How does determinism come into play here? Every algorithm is
deterministic by definition, no?

No. Lossy compression algorithms (e.g. JPEG, MPEG, MP3) are *NOT*
necessarily deterministic in what they throw away. Two compliant
implementations may choose to discard different information.
Why is a full carry more important than others?

Because it tests that all the full adders can correctly do a carry at
the same time. Hence, you don't need to test the carry chain
again. The moral is that _Not All Test Vectors Are Created Equal_.

Some test vectors add very little coverage; others add a lot of coverage.
I don't see a way to
avoid an exhaustive test. Consider the pathological case that the ALU is
implemented as a lookup table.

That makes no difference. The basic functionality is the same: a 3-bit
opcode, an A input, a B input, and a carry-in input.

But if you change to, say, an 8-bit-wide implementation (e.g. using
CLA logic), things change.

Look, why don't you look up some textbooks about testing and test
vector generation? It seems to me that you aren't very familiar with
the basics here.
Yes -- if you can make that assumption.

Only you can decide that - you're the one implementing it, right?
You mean as opposed to walking through the numbers by incrementing
them? What is the benefit of using an LFSR here?

An LFSR costs much less logic than an incrementer. For some HW, that's
important.

Regards,


Kai
 

Marcus Harnisch

Kai,

Kai Harrekilde-Petersen said:
No. Lossy compression algorithms (e.g. JPEG, MPEG, MP3) are *NOT*
necessarily deterministic in what they throw away. Two compliant
implementations may choose to discard different information.

I think we are not on the same page here. These standards allow some
variation between different implementations. Each implemented
algorithm however is deterministic. It is indeed non-deterministic
which implementation was chosen in a particular case.

But don't let this carry us away from the OP's ALU issue.
That makes no difference. The basic functionality is the same: a 3-bit
opcode, an A input, a B input, and a carry-in input.

The functionality is the same. But in a LUT each entry has an equal
probability of being wrong. There is no way to find a single test
vector that will verify more than a single LUT entry. Thus you need
the full set of vectors.
Only you can decide that - you're the one implementing it, right?

Wrong! I am the one *verifying* it. Once I put my verification hat on,
I couldn't care less about the implementation. It all depends on the
granularity at which you want to consider something a black box.

You are making assumptions about the internal structure of the
ALU. That may be completely valid in a given scenario. But if someone
approaches me asking me to verify an ALU, I don't necessarily know
these internals.
An LFSR costs much less logic than an incrementer. For some HW, that's
important.

Verification is typically done in software, though. I bet an addition
costs fewer processor cycles than all the logical operations in an
LFSR. But that's not even my point. My point is that it is (currently)
more effort to use an LFSR from within VHDL than to use an incrementer.
I don't see any technical advantage in using random vectors to verify
a stateless circuit.

The effort of using random vectors approaches zero if you have the
chance to use one of the popular HVLs with built-in PRNGs (in which
case you might also consider switching to some sort of assertion
language, which often comes with an HVL).

Best regards,
Marcus
 

Bob Jenkins

Marcus Harnisch said:
Verification is typically done in software, though. I bet an addition
costs fewer processor cycles than all the logical operations in an
LFSR. But that's not even my point. My point is that it is (currently)
more effort to use an LFSR from within VHDL than to use an incrementer.
I don't see any technical advantage in using random vectors to verify
a stateless circuit.

The effort of using random vectors approaches zero if you have the
chance to use one of the popular HVLs with built-in PRNGs (in which
case you might also consider switching to some sort of assertion
language, which often comes with an HVL).

Best regards,
Marcus

One possible advantage of an LFSR over incrementing a counter is that
if there is a bug affecting m out of n states, but those states are
clumped together, an LFSR is likely to hit one of them within the first
n/m states that it tries, whereas a counter is likely to go through n/2 of
the states before it hits any bad ones. A PRNG shares this advantage.
This only matters if it takes a long time to try all states.

One possible advantage of an LFSR over a PRNG is that an LFSR won't
repeat any states. If you want to cover all n states, an LFSR will do
it in n tries, but a PRNG has to try about n*log(n) times before it has
probably hit every state, and even then it's only "probably". A counter
shares this advantage.
 

Marcus Harnisch

Hi Bob,

One possible advantage of an LFSR over incrementing a counter is that
if there is a bug affecting m out of n states, but those states are
clumped together, an LFSR is likely to hit one of them within the first
n/m states that it tries, whereas a counter is likely to go through n/2 of
the states before it hits any bad ones. A PRNG shares this advantage.
This only matters if it takes a long time to try all states.

But until you have actually tried all vectors, there is no way to
tell you are done. What if those error cases happened to be clumped
together so that they'd be caught early by the incrementer, or by a
different LFSR polynomial? Equal chances, I'd say.

The sequence generated by an LFSR shares some important
characteristics with a counter:

1. The sequence is deterministic

2. It has a period of 2^n cycles

3. Each generated value is unique

You could probably say that the sequence generated by a counter is a
special case of an LFSR sequence.
One possible advantage of an LFSR over a PRNG is that an LFSR won't
repeat any states. If you want to cover all n states, an LFSR will do
it in n tries, but a PRNG has to try nlog(n) times before it probably
hit every state, and even then it's only "probably". A counter shares
this advantage.

Good point! That is my standard argument if certain vendors (the usual
suspects) try telling me that there is nothing but random simulation
(plus coverage).

AFAIK, Testbuilder (and likely SystemC) provides a PRNG that guarantees
that each generated number is unique. Other HVLs could be prevented
from running forever, e.g. by using functional coverage data as
feedback for the stimulus. In Specman/e you could also generate a list
containing all numbers from 0 to 2^20-1 (after you change the
max. list size). Then you would use the PRNG to generate a second list
with the constraint that both lists must have the same set of values
(there is always more than one way to do things). Which brings us back
to the argument of whether this complexity is really necessary for our
ALU case.

2^20 input vectors generated by either a counter or an LFSR would have
long since finished while we are still babbling :)

Best regards,
Marcus
 

Kai Harrekilde-Petersen

Marcus Harnisch said:
The sequence generated by an LFSR shares some important
characteristics with a counter:

1. The sequence is deterministic

2. It has a period of 2^n cycles

Erh, only if you go out of your way to make it that. A "standard"
maximum-length LFSR has a period of 2^n - 1 cycles (the all-zeros vector
is normally not included).
Good point! That is my standard argument if certain vendors (the usual
suspects) try telling me that there is nothing but random simulation
(plus coverage).

At work, we use a combination of directed, non-random tests that
target known corner cases, and randomized seeds to data
generators etc. for almost everything. The motto goes: when in doubt,
make it random.


--Kai
 

Marcus Harnisch

Kai Harrekilde-Petersen said:
Erh, only if you go out of your way to make it that. A "standard"
maximum-length LFSR has a period of 2^n - 1 cycles (the all-zeros vector
is normally not included).

That is true. So we'd need to make sure that special case is covered
separately.
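
For example, something along these lines (the signal names follow the
original post; lfsr() is assumed to be an XOR-feedback step function, for
which all-zeros is the single unreachable state, whereas with XNOR feedback
it would be all-ones instead):

stim : process
  -- any non-zero seed works for a maximal-length XOR-feedback LFSR
  variable v : std_logic_vector(19 downto 0) := (0 => '1', others => '0');
begin
  -- apply the one state the LFSR sequence never visits
  S <= (others => '0');
  A <= (others => '0');
  B <= (others => '0');
  wait until rising_edge(clock);
  -- then walk the remaining 2**20 - 1 states
  for i in 1 to 2**20 - 1 loop
    S <= v(19 downto 16);
    A <= v(15 downto 8);
    B <= v(7 downto 0);
    wait until rising_edge(clock);
    v := lfsr(v);
  end loop;
  wait;
end process stim;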
The motto goes: when in doubt, make it random.

Correct. Start out with a random framework and maintain different sets
of constraints (down to directed application of certain vectors)
depending on the application.

Best regards,
Marcus
 

Bob Jenkins

Kai Harrekilde-Petersen said:
At work, we use a combination of directed, non-random tests that
target known corner cases, and randomized seeds to data
generators etc. for almost everything. The motto goes: when in doubt,
make it random.

Often there are a dozen or more independent variables. Bug
descriptions usually don't mention ten variables; usually two or three
will do. Testing all settings of all pairs or triples of variables
will catch most bugs.

A single test that sets n variables randomly will test (n choose 2)
pairs of variables and (n choose 3) triples of variables. The number
of tests you need to cover all settings of all pairs or all triples of
variables grows with the log of the number of variables. There are
tools for trying to minimize the number of tests for this (allpairs,
aetg, jenny). But random settings will also do the job in about the
log of the number of variables. So would an LFSR.

A counter, on the other hand, will cover all exponentially many
settings of earlier variables before it changes later variables. So
any bug in the last variable gets hit after exp(n-1) tests instead of
within log(n) tests. If n is big, the random approach scales nicely,
but the counter never finishes.

I suspect that that explains the strength of random tests.

The case at hand had 3 variables and a feasible number of states; the
counter approach is fine for that.
 
