Scoreboard and Checker in Testbench?

Davy

Hi all,

IMHO, there is something that compares the golden output and the DUT output in a testbench (I call it a Checker). But in verification books there are both a Scoreboard and a Checker. Are they similar?

Please recommend some reading on it. Thanks!

Best regards,
Davy
 
Hans

Hi Davy,

Sorry, can't answer your question since I don't know SystemVerilog and I use Modelsim (VHDL+SystemC). I believe the examples are written in generic SystemVerilog/SystemC, so I would suggest just trying it out.

Hans.
www.ht-lab.com

Davy said:
Hans said:
Hi Davy,

The AVM cookbook from Mentor describes these terms, you can download a free copy from:

http://www.mentor.com/products/fv/_3b715c/cb_dll.cfm
[snip]
Hi Hans,

Can I use the base classes mentioned in Mentor's AVM cookbook in other simulators like Cadence's NCSim?

Best regards,
Davy
 
Andy

Generally speaking, a scoreboard is a mechanism for tracking which tests and which requirements have been run/verified. For example, a testbench may have multiple tests which verify multiple requirements (not often in 1:1 correspondence, either). A portion of the overall testbench's job is to capture and record the success/failure of each test and/or requirement verified. This is the purpose of the scoreboard.
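
In SystemVerilog, a minimal sketch of a scoreboard in this sense might look like the following (the test and requirement names are hypothetical placeholders):

class result_scoreboard;
  bit test_passed[string];   // pass/fail, keyed by test name
  bit req_verified[string];  // requirement id, set once some test passes it

  function void record(string test_name, string req_id, bit passed);
    test_passed[test_name] = passed;
    if (passed) req_verified[req_id] = 1'b1;
  endfunction

  function void report();
    foreach (test_passed[t])
      $display("test %s : %s", t, test_passed[t] ? "PASS" : "FAIL");
    foreach (req_verified[r])
      $display("requirement %s : verified", r);
  endfunction
endclass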

Another aspect of good verification is to segregate interface specification verification from performance specification verification. For example, you could create a protocol checker that monitors an interface and verifies that each transaction on that interface follows the protocol of that interface. But to keep it portable, that protocol checker would not verify the specific contents of transactions; that is done elsewhere (so the content checker can be portable across different interfaces too).
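
A rough sketch of such a protocol checker, assuming a simple valid/ready handshake with hypothetical signal names. Note it checks signalling rules only and deliberately ignores the payload contents, so the same module stays portable:

module handshake_checker (
  input logic        clk,
  input logic        rst_n,
  input logic        valid,
  input logic        ready,
  input logic [31:0] data
);
  // Rule 1: once asserted, valid must hold until ready accepts the transfer.
  a_valid_held: assert property (
    @(posedge clk) disable iff (!rst_n)
    valid && !ready |=> valid
  ) else $error("valid deasserted before ready");

  // Rule 2: the payload must stay stable while the transfer is pending.
  a_data_stable: assert property (
    @(posedge clk) disable iff (!rst_n)
    valid && !ready |=> $stable(data)
  ) else $error("data changed while valid was pending");
endmodule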

Andy
 
dtsi.india

Davy said:
Hi all,

IMHO, there is something that compares the golden output and the DUT output in a testbench (I call it a Checker). But in verification books there are both a Scoreboard and a Checker. Are they similar?

Please recommend some reading on it. Thanks!

Best regards,
Davy
Davy,
A checker is used to check whether a given transaction has taken place correctly. This may include data correctness and correct signalling order.
A scoreboard is used to keep track of how many transactions were initiated, how many finished, how many are pending, and whether a given transaction passed or failed.
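
For what it's worth, a minimal SystemVerilog sketch of a scoreboard in this sense (the txn class and its compare() method are hypothetical placeholders):

class txn;
  bit [31:0] expected, actual;
  function bit compare();
    return expected == actual;
  endfunction
endclass

class scoreboard;
  int unsigned n_initiated, n_finished, n_passed, n_failed;

  function void on_initiated();
    n_initiated++;
  endfunction

  function void on_finished(txn t);
    n_finished++;
    if (t.compare()) n_passed++;
    else             n_failed++;
  endfunction

  function int unsigned n_pending();
    return n_initiated - n_finished;  // initiated but not yet finished
  endfunction

  function void report();
    $display("initiated=%0d finished=%0d pending=%0d pass=%0d fail=%0d",
             n_initiated, n_finished, n_pending(), n_passed, n_failed);
  endfunction
endclass
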
Regards
 
Alex

Davy said:
Hi all,

IMHO, there is something that compares the golden output and the DUT output in a testbench (I call it a Checker). But in verification books there are both a Scoreboard and a Checker. Are they similar?

Please recommend some reading on it. Thanks!

Best regards,
Davy

IMO, there is a lack of clear definitions of verification terms. For me, both of them are verification components which check specific design properties for all simulation times and all testcases. They correspond to the "safety" properties in formal verification: during system execution, something wrong will never happen. In the AVM Cookbook, there is a definition of the scoreboard as a transaction-level checking component. There is no definition of a checker in the cookbook, yet there is an example of a so-called "assertion-based checker". From this example, we can find out that an assertion-based checker checks lower-level interface properties. Also, it has to be implemented using assertions, but implementation details are not important here.

So, to find the differences between scoreboard and checker, we have to understand the meaning of "transaction":
"A transaction is a quantum of activity that occurs in a design, bounded by time"
"A transaction is a single transfer of control or data between 2 entities"
"A transaction is a function call"

The first definition is too broad. By this definition, a scoreboard is the same as an assertion-based checker, since both of them deal with quanta of design activity bounded by time.
The second definition is different from the first one and too restrictive. What does "single transfer" mean? Why between only two entities?
The third definition is the one I cannot understand. Does it mean that a "transaction-based" component must contain functions?

Regards,
-Alex
 
NigelE

Alex said:
So, to find the differences between scoreboard and checker, we have to understand the meaning of "transaction":
"A transaction is a quantum of activity that occurs in a design, bounded by time"
"A transaction is a single transfer of control or data between 2 entities"
"A transaction is a function call"

The first definition is too broad. By this definition, a scoreboard is the same as an assertion-based checker, since both of them deal with quanta of design activity bounded by time.
The second definition is different from the first one and too restrictive. What does "single transfer" mean? Why between only two entities?
The third definition is the one I cannot understand. Does it mean that a "transaction-based" component must contain functions?

Regards,
-Alex

Alex, I think you need to be careful not to confuse the generic use of terms like entities (things) in the AVM cookbook with VHDL keywords.

I agree it is difficult to tie down a definition of a transaction, as transactions can be different things to different people, e.g. a h/w designer may want to identify individual transfers of data on a bus as transactions, while someone viewing the design at a higher level is more interested in frames or packets, and at a system level it may be a message between s/w on different processors.
All are valid transactions.

In terms of the AVM, the basic TLM communication mechanism is based on
the semantics of the OSCI TLM, implemented in SystemC and/or
SystemVerilog.

This defines the use of put(), get() and peek() function/task calls to
transfer a transaction from one component to another.
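
These are not the actual OSCI/AVM classes, but the put()/get() idea can be sketched with a plain SystemVerilog mailbox standing in for the channel:

module tlm_put_get_sketch;
  class bus_txn;
    rand bit [31:0] addr, data;
  endclass

  mailbox #(bus_txn) chan = new();

  initial begin : producer
    bus_txn t = new();
    void'(t.randomize());
    chan.put(t);  // blocking put() of one transaction
  end

  initial begin : consumer
    bus_txn t;
    chan.get(t);  // blocking get() of one transaction
    $display("got addr=%h data=%h", t.addr, t.data);
  end
endmodule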

Both these languages support a component calling the tasks/functions of another component (using classes and/or interfaces). Thus my monitor can call the write() function of my scoreboard without needing to know what it does. This allows me to change the scoreboard without affecting the monitor, provided the new scoreboard also implements a function called write().
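
A minimal sketch of that decoupling, using a hypothetical abstract base class (the real AVM analysis classes are richer than this):

class txn;
  bit [31:0] addr, data;
endclass

virtual class txn_listener;
  pure virtual function void write(txn t);
endclass

class scoreboard extends txn_listener;
  virtual function void write(txn t);
    // check t against expectations here
  endfunction
endclass

class monitor;
  txn_listener listener;  // the monitor only knows the abstract type
  function void observed(txn t);
    if (listener != null)
      listener.write(t);  // any listener that implements write() will do
  endfunction
endclass
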
This is the basis for verification component reuse in the AVM (and other transaction-based verification methodologies).

So in the AVM, a transaction is most commonly just a function/task call
between verification components, thus the third definition.

I hope this helps clarify things.

- Nigel
 
Alex

Hi, Nigel.
Thanks for clarifications.
Please see my comments below.
Alex, I think you need to be careful not to confuse the generic use of terms like entities (things) in the AVM cookbook with VHDL keywords.

This may not be a problem: I am a pure Verilog coder ;)
I agree it is difficult to tie down a definition of a transaction, as transactions can be different things to different people, e.g. a h/w designer may want to identify individual transfers of data on a bus as transactions, while someone viewing the design at a higher level is more interested in frames or packets, and at a system level it may be a message between s/w on different processors.
All are valid transactions.

That sounds like a problem. The transaction definition is too broad to be useful. To clarify things, different companies have developed their own understandings of a transaction (see the link to the article below).
In terms of the AVM, the basic TLM communication mechanism is based on
the semantics of the OSCI TLM, implemented in SystemC and/or
SystemVerilog.

The OSCI definition of a transaction also seems too broad to be useful:

"OSCI seems to have the most liberal interpretation. OSCI includes
several levels of abstraction under TLM, including Programmer's View
(PV), which contains no timing; Programmer's View with Timing (PVT),
which adds timed protocols and can analyze latency or throughput; and
Cycle Accurate, which is accurate to the clock edge but does not model
internal registers.
From an OSCI point of view, pretty much everything above RTL can be
considered TLM, said Pat Sheridan, OSCI executive director and the
director of marketing at CoWare Inc. But the way most users think of
TLM appears to include just the untimed (PV) and cycle-approximate
(PVT) styles of modeling."

http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=181503693&printable=true

This defines the use of put(), get() and peek() function/task calls to
transfer a transaction from one component to another.

Which means that it is possible to use these convenient functions for transaction communication. But it is also possible to connect a bus master to a bus slave - they will also communicate with transactions, through the signal-level interface. And this is transaction-level modeling too.
Both these languages support a component calling the tasks/functions of another component (using classes and/or interfaces). Thus my monitor can call the write() function of my scoreboard without needing to know what it does. This allows me to change the scoreboard without affecting the monitor, provided the new scoreboard also implements a function called write().

There may be different methods of transaction communication for reuse. For example, there are strong arguments against the AVM method, where monitor code has to be modified in order to communicate with a specific checking component. Since a monitor is protocol-specific (design-independent), it is usually more reusable than a design-specific checker/scoreboard. It may therefore be a better idea for the monitor to simply "present" a transaction without calling any external functions. It then becomes the task of the checking component to grab the needed data from the needed monitors when they signal "transaction completion". This makes monitors independent of the "external world" and thus highly reusable between projects.
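
A rough sketch of this "presenting" style, with hypothetical names: the monitor exposes the completed transaction and fires an event, and the checking component grabs the data itself:

class txn;
  bit [31:0] addr, data;
endclass

class presenting_monitor;
  txn   current;  // last completed transaction, visible to any component
  event done;     // signalled on transaction completion

  function void complete(txn t);
    current = t;
    -> done;
  endfunction
endclass

// A checker pulls from the monitor; the monitor knows nothing about it.
task automatic watch_monitor(presenting_monitor mon);
  forever begin
    @(mon.done);
    // consume mon.current here
  end
endtask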

In any case, the functions/tasks presented by OSCI do not clarify the meaning of transaction, but rather provide some implementation details.
This is the basis for verification component reuse in the AVM (and other transaction-based verification methodologies).

So in the AVM, a transaction is most commonly just a function/task call
between verification components, thus the third definition.

I agree. In this case, the AVM could contain only the third transaction definition. This would greatly clarify the meaning of transaction in the context of the AVM.

Regards,
-Alex
 
NigelE

Alex said:
Hi, Nigel.
Thanks for clarifications.
[snip]
That sounds like a problem. The transaction definition is too broad to be useful.
[snip]
It may therefore be a better idea for the monitor to simply "present" a transaction without calling any external functions.
[snip]
Regards,
-Alex

I actually think the broad scope of TLM is one of its key strengths. Being able to apply the same techniques across a wide range of problems means the same methodology can be used at multiple abstraction levels. Transactors are used to communicate across these abstraction boundaries (not just the TLM<>signal boundary), enabling modelling to occur at the highest appropriate abstraction level, resulting in smaller/faster code.

A quick point on AVM monitors.
The AVM provides a capability called an analysis port (which is an implementation of the OO "observer" design pattern).

An analysis port provides a mechanism for a monitor to broadcast
transactions to zero or more subscribers (listeners) via a single
write() function call.

The subscribers need to implement a write() function, each of which can do different things (e.g. record functional coverage, update a scoreboard etc). Thus a monitor does not need to know in what context it will be used, as it is only in the top-level environment of the simulation where analysis components are registered onto a particular analysis port.

This makes AVM monitors extremely generic: they can be used in module, chip and system verification, and across multiple designs using the same interface.
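
This is not the real AVM code, but the observer idea behind an analysis port can be sketched like this (all class names are hypothetical):

class packet;
  bit [7:0] bytes[$];
endclass

virtual class subscriber;
  pure virtual function void write(packet p);
endclass

class analysis_port_sketch;
  subscriber subs[$];
  function void register(subscriber s);
    subs.push_back(s);
  endfunction
  function void write(packet p);
    foreach (subs[i]) subs[i].write(p);  // broadcast to every subscriber
  endfunction
endclass

class cov_collector extends subscriber;
  virtual function void write(packet p);
    // sample functional coverage here
  endfunction
endclass

class sb_updater extends subscriber;
  virtual function void write(packet p);
    // update the scoreboard here
  endfunction
endclass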

Regards
- Nigel
 
Alex

NigelE said:
I actually think the broad scope of TLM is one of its key strengths. Being able to apply the same techniques across a wide range of problems means the same methodology can be used at multiple abstraction levels. Transactors are used to communicate across these abstraction boundaries (not just the TLM<>signal boundary), enabling modelling to occur at the highest appropriate abstraction level, resulting in smaller/faster code.

Sorry, Nigel, I don't agree with this.
Broad definitions lead to ambiguity. Everything which is not formalized is ambiguous. Ambiguity of the basic terms leads to ambiguity of the whole system based on those terms.

We all know how difficult it is to work with ambiguous specifications. The problem is that we believe we understand the meaning of some term correctly, and don't make any effort to clarify things further. Then we spend a significant amount of time and energy in the wrong direction.

As verification engineers, we have to formalize design specification documents in verification plans, thus removing ambiguity from them. However, we have ambiguity in our own definitions, which leads to many communication problems. The transaction definition is one example. Here is another one: what is the difference between assertions, assertion monitors, assertion-based checkers, non-assertion-based checkers and scoreboards? All of them validate the correctness of some design properties during all simulation time and independently of the stimulus.
A quick point on AVM monitors.
The AVM provides a capability called an analysis port (which is an implementation of the OO "observer" design pattern).

An analysis port provides a mechanism for a monitor to broadcast transactions to zero or more subscribers (listeners) via a single write() function call.

The subscribers need to implement a write() function, each of which can do different things (e.g. record functional coverage, update a scoreboard etc). Thus a monitor does not need to know in what context it will be used, as it is only in the top-level environment of the simulation where analysis components are registered onto a particular analysis port.

This makes AVM monitors extremely generic: they can be used in module, chip and system verification, and across multiple designs using the same interface.

Thanks for the clarification. In this case, the monitors are not "transaction senders" but rather "transaction presenters", which really makes them highly reusable. I have also been using a similar concept to build Verilog testbenches for more than 4 years.

Regards,
-Alex
 
NigelE

Alex said:
Sorry, Nigel, I don't agree with this.
Broad definitions lead to ambiguity. Everything which is not formalized is ambiguous. Ambiguity of the basic terms leads to ambiguity of the whole system based on those terms.
[snip]
Regards,
-Alex

Alex,

I think this definition issue depends on the perspective you are
looking from ;)

I agree that within a verification team you should have a clear, common understanding of what each component should and shouldn't do. However, what may be suitable for one team may be too loose OR too restrictive for another team. Thus a methodology that can accommodate most user requirements is one that can be broadly adopted. However, that doesn't preclude it being used in a tightly defined manner.

I guess where we differ is who should be responsible for making these precise definitions - the user or the tool/methodology provider?

Regards

- Nigel
 
Alex

NigelE said:
Alex,

I think this definition issue depends on the perspective you are looking from ;)

I agree that within a verification team you should have a clear, common understanding of what each component should and shouldn't do. However, what may be suitable for one team may be too loose OR too restrictive for another team. Thus a methodology that can accommodate most user requirements is one that can be broadly adopted. However, that doesn't preclude it being used in a tightly defined manner.

In this context, I am not speaking about "what each component should and shouldn't do". I am speaking about terminology and basic definitions. The whole industry will benefit from speaking the same language. Imagine, for example, the job interview question: "What is a scoreboard?" Or: "What does transaction mean?" The applicant is lucky if his/her understanding of scoreboard/transaction matches the understanding of the interviewer.

NigelE said:
I guess where we differ is who should be responsible for making these precise definitions - the user or the tool/methodology provider?

For me, there is only one answer: the methodology provider. A methodology is a building, and definitions/concepts are its building blocks. If the shape of a building block changes (even slightly), the shape of the whole building changes significantly. Also, user definitions are not always complementary - they can also contradict each other. The attempt at "being nice to everyone by making loose definitions" may reduce the technical value of the whole methodology developed by an EDA company.

Regards,
-Alex
 
Don

Davy said:
Hi all,

IMHO, there is something that compares the golden output and the DUT output in a testbench (I call it a Checker). But in verification books there are both a Scoreboard and a Checker. Are they similar?

A scoreboard is mainly used to check the functional correctness of a DUT, e.g. whether a certain functional algorithm of the DUT works correctly.

A checker is used to verify the protocol on the DUT interface outputs, i.e. whether the timing of certain transactions is correct or not. Mainly it is used to verify whether the output signals of the DUT conform to a specific protocol, e.g. AMBA AXI.

Hope this is a clear definition.

I would have checkers in my testbench flag protocol errors before the scoreboard checks the actual output of the DUT.
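
For instance, a rough sketch of one such protocol check, loosely following the AXI write-address channel convention (the real protocol has many more rules than this one):

module axi_aw_checker (
  input logic        aclk,
  input logic        aresetn,
  input logic        awvalid,
  input logic        awready,
  input logic [31:0] awaddr
);
  // Once AWVALID is asserted, it and AWADDR must hold steady
  // until AWREADY completes the handshake.
  a_aw_stable: assert property (
    @(posedge aclk) disable iff (!aresetn)
    awvalid && !awready |=> awvalid && $stable(awaddr)
  ) else $error("AW channel violated the handshake rule");
endmodule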

Don
 
Davy said:
Hi all,

IMHO, there is something that compares the golden output and the DUT output in a testbench (I call it a Checker). But in verification books there are both a Scoreboard and a Checker. Are they similar?

Please recommend some reading on it. Thanks!

Best regards,
Davy

"To verify the IP filter a reference model has to be build. Due to the DUT specific memory model, which allows any memory size (first packets to arrive are stored and served), the exact timing of DUT analysis is hardly predictable. Cycle accurate verification models are not good practice anyways.
Therefore an easy way to implement the reference model is to use some sort of lists...."

http://bknpk.no-ip.biz/my_web/SDIO/ip_ttl_filter_vhdl_scbd.html
 
