Conditional signal assignment or process statement

KJ

Ideally, your final test
should be a black box test, with a self checking testbench. It worries
me when I see designers stare at waveforms all day and using this for
their verification. They should be using output data as the test - no
waveforms needed.

Just to be clear, I'm not talking about using waveforms for
verification. What they are used for is investigation when the
verification assertion fails...after all, one does not intentionally
write bad code in the first place so the fact that an assertion has
failed implies there is a problem somewhere that needs investigating.

If the root cause of the problem (which you do not know a priori)
happens to have occurred at the same time as the assertion (which is a
symptom, not the problem itself), then one will have access to the
*current* state of all signals and variables. Maybe that's enough to
solve the problem, but many times one needs to see some history in
addition to the current state of signals and variables in order to
make the definitive diagnosis.

By Andy's own admission he does not solve the current problem but
typically can wait for the problem to occur again after the first time
- "just catch it again on the next time around"
- "and insert a few breakpoints and monitors if necessary" (implies
running the sim s'more)

Sometimes (maybe many times) that is sufficient. But it is just as
likely that the original problem behind the first assertion, if you
continue running the sim, has now caused a second downstream problem,
which only makes things harder because the symptoms of that secondary
problem are less directly related to the original problem.

Even the software guys write out a trace log file which captures
important (to them) information about events that happen. They don't
simply capture the current state of everything at the point of the
failure...the history of what leads up to the problem is generally the
key to solving the problem. The fact that this history gets stored in
a file that is viewable as signals in a sim tool and not as a list of
statement executions is not relevant and has nothing to do with
'source code debugging'. It means you're making use of the tools that
are available. Not using the history that is available to you is a
choice and has a cost, but that choice is not about 'source code
debugging' versus 'waveform debugging'.

Kevin Jennings
 
rickman

I can see where you're coming from Andy. Ideally, your final test
should be a black box test, with a self checking testbench. It worries
me when I see designers stare at waveforms all day and using this for
their verification. They should be using output data as the test - no
waveforms needed. Working in video I use bitmaps for input and output
data. It's so much easier looking at a whole picture than looking at a
stream of pixels. Often this output picture gives you a clue as to
what's wrong - it's normally very obvious when something has gone
wrong doing this. Then I can get in amongst the waveform for more
specific debugging, using the clues from the output.

I agree that verification is best done by the code and not by viewing
waveforms. In fact, sometimes I think that my project is actually a
matter of debugging the test bench and the FPGA design being verified
is just a side effect!

The importance of proper verification is drummed into me every time I
don't do it. Just like the time they omitted an "unnecessary" test on
the Hubble Space Telescope.

Rick
 
rickman

That's much the same as the methods I use (and even the odd printf^H^H^H^H^H
report statement :)  

I'm sure Aldec can put variables in the wave window - it may be that (as with
Modelsim) you have to do it before the sim for it to log them.  At the subblock
level, adding a few variables and restarting is not usually a killer - especially
when you have asserts already in to stop the simulation as soon as
things go wrong.

Please show me how to do this. I have added variables to the waveform
window until I was blue in the face and their values never show up.
It makes some sense. Variables are not defined outside of the
process, function or procedure and in many cases do not retain a value
between invocations. So what would be displayed?

I add variables by selecting them in the source file, right clicking
and selecting "Add to waveform". But I can do this with anything in
the design including VHDL keywords, so being able to add it to the
waveform display doesn't mean anything.

If I set a breakpoint and single step, I can see the variables in the
Call Stack window and watch them change. But they never show up in
the waveform. That further makes sense since a variable can change
value several times with no time ticking off. How would you display
that in a time based waveform?

(The variables I want to see are usually state variables - it'd be great
to be able to do "log -r *state" on variables :)


And there are times when I'm writing embedded software that I'd really like a
waveform trace of my C variables :)

Too bad they don't have signals in C... Sometimes I turn variables
into signals by outputting them on pins which can be displayed as
waveforms by a logic analyzer. ;^)

Rick
 
rickman

OK, let's keep separate things separate.

The bit that got my dander up was your implied, but
clear, statement that fewer lines of code makes for
fewer bugs.  

Mistake #1. I didn't say that; you inferred it rather than my
implying it.

"Which do you think is easier to read and provides fewer opportunities
for errors?"

I was comparing the two sets of code, not making a general
statement.
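
The thread subject's two styles, sketched minimally (the entity and
names here are hypothetical, not the actual code being compared):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity mux2 is
  port (
    sel  : in  std_logic;
    a, b : in  std_logic_vector(7 downto 0);
    y    : out std_logic_vector(7 downto 0));
end entity mux2;

architecture rtl of mux2 is
begin
  -- Style 1: conditional signal assignment, one concurrent statement
  y <= a when sel = '1' else b;

  -- Style 2: the same logic written as a combinatorial process
  -- (commented out here so y has a single driver):
  --   process (sel, a, b)
  --   begin
  --     if sel = '1' then y <= a; else y <= b; end if;
  --   end process;
end architecture rtl;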


....snipped resulting conclusions...
More directly related to what you posted
is the question of the most desirable
granularity to which you should decompose
a problem.  

At least here you don't say that I made a statement about
granularity. This is your topic. I'm not even clear on how this
issue results from the OP's post. I do see how it relates to a wider
conversation about coding in general, but not how it connects to me or
my post. What did I write in regards to this?

....snip resulting discussion...
Despite all this fence-sitting, there is
something that seems obvious to me.  
Breaking a design into excessively small
pieces (transistors!!) clearly obscures
its functionality.  Leaving a design in
huge monolithic chunks (an entire FPGA
in one VHDL process!!) is clearly hopeless
too; no-one could possibly understand it.
Somewhere in the middle there is an
optimum - not ideal, but certainly better
than either end of that spectrum.  Merely
saying "simpler is better" is inadequate.

I don't agree that any of the three approaches is the "right" one.
Rather to understand the design you need to understand as many levels
as are important to the design. Typically there are parts of an HDL
design that I want to see from the 10,000 m view, some I want to see
from 10 m, some while sitting in front of it and some I want to see
with a microscope. It all depends on which parts are standard,
straightforward stuff and which parts are doing something more complex
that needs to be explained more clearly. There are many times I use
concurrent code because adding it to a process adds nothing to the
clarity. It is still an assignment, but now, for example, a data path
mux is obscured by all the control logic statements or vice versa.
But mostly I just don't use combinatorial processes except for
exceptions where it makes the code more clear.

For me, pieces of design small enough to
write as a single concurrent statement are
almost never big enough to give me useful
clues about how they contribute to the
overall functionality (unless you put a
function call in the expression).

I'm curious, how do you write, for example, a data path mux if you
don't put it in a concurrent statement? Do you lump it in with
unrelated stuff? I had a design for a mulaw encoder with two sources,
the data from the CODEC and a tone generator as well as a mute
function. I added the data path mux (with mute function) using a
combinatorial statement because to me it was not part of the mulaw
logic so I didn't want to put it in the process for the mulaw logic.
In fact, all of the data path to connect the mulaw logic with the
CODEC and the IP interface was concurrent statements, some just simple
assignments with no logic to connect wires... er, I mean signals. Can
you tell I've been working with Verilog lately?
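
A minimal sketch of that kind of concurrent data-path mux with mute
(port names and widths are guesses, not Rick's actual design):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity datapath_mux is
  port (
    mute       : in  std_logic;
    tone_sel   : in  std_logic;
    codec_data : in  std_logic_vector(7 downto 0);
    tone_data  : in  std_logic_vector(7 downto 0);
    mulaw_din  : out std_logic_vector(7 downto 0));
end entity datapath_mux;

architecture rtl of datapath_mux is
begin
  -- One concurrent statement: mute forces silence, otherwise select
  -- between the tone generator and the CODEC data
  mulaw_din <= (others => '0') when mute = '1' else
               tone_data       when tone_sel = '1' else
               codec_data;
end architecture rtl;
```

Kept outside the mulaw process, the mux stays visible as its own piece
of data path rather than being buried among the encoder's control logic.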

What would you have done differently?

Funny, I think that was one time I used variables, for the mulaw
encoding, because that seemed like the right way to go given that it
was all combinatorial and required several sequential steps. It also
translated more easily from the C code I used as a reference. It had
its own test bench so debugging with variables was not really an
issue.

Sorry about the lengthy ramblings.  You
did ask for a justification :)

Yup, be careful what you ask for... ;^)

You are always good for some interesting perspectives. Thanks.

Rick
 
Martin Thompson

rickman said:
Please show me how to do this. I have added variables to the waveform
window until I was blue in the face and their values never show up.
It makes some sense. Variables are not defined outside of the
process, function or procedure and in many cases do not retain a value
between invocations. So what would be displayed?

I'm not an Aldec user, but on a brief perusal of the manual, I can't see how to
do it either :( How disappointing!
If I set a breakpoint and single step, I can see the variables in the
Call Stack window and watch them change. But they never show up in
the waveform. That further makes sense since a variable can change
value several times with no time ticking off. How would you display
that in a time based waveform?

Modelsim shows the value at the end of the process - effectively it wires it to
a signal at the end of the process and displays that.
Too bad they don't have signals in C... Sometimes I turn variables
into signals by outputting them on pins which can be displayed as
waveforms by a logic analyzer. ;^)

Yep, BTDT too :)

Cheers,
Martin
 
Andy

I'm curious, how do you write, for example, a data path mux if you
don't put it in a concurrent statement?  Do you lump it in with
unrelated stuff?  I had a design for a mulaw encoder with two sources,
the data from the CODEC and a tone generator as well as a mute
function.  I added the data path mux (with mute function) using a
combinatorial statement because to me it was not part of the mulaw
logic so I didn't want to put it in the process for the mulaw logic.
In fact, all of the data path to connect the mulaw logic with the
CODEC and the IP interface was concurrent statements, some just simple
assignments with no logic to connect wires... er, I mean signals.  Can
you tell I've been working with Verilog lately?

What would you have done differently?

It depends... How is the datapath (mux) controlled? Is it some
external control, or is it a mode within the encoder? If the encoder
has a requirement that under certain internal modes or conditions, it
needs to load data from a different source, then I generally code that
behavior into the encoder. On the other hand, if something external is
directly controlling the source of the data, then I might break that
functionality out into a separate mux. And if that mux was the only
thing related to that control, I might even make it a concurrent
statement.

The focus is more about the function than it is about the circuit that
will implement the function. Think of the function not as a mux but as
a choice of which data to load. Then ask questions like "who/what
determines (not implements) the choice?" The answer will often lead to
the appropriate coding. Also, a functional approach will more often
lead to closer coupling between that code and the requirements
documents for that code. Good requirements don't usually include
"shall have a mux to select data...," but more like "shall load
different data based on..."

Andy
 
rickman

It depends... How is the datapath (mux) controlled? Is it some
external control, or is it a mode within the encoder? If the encoder
has a requirement that under certain internal modes or conditions, it
needs to load data from a different source, then I generally code that
behavior into the encoder. On the other hand, if something external is
directly controlling the source of the data, then I might break that
functionality out into a separate mux. And if that mux was the only
thing related to that control, I might even make it a concurrent
statement.

The mux is controlled by configuration register settings in a
different module, but also has a real time Left/Right control. Really
the mux is completely independent of the mulaw encode/decode. I wrote
that up as an independent, reusable module and instantiate it.

The focus is more about the function than it is about the circuit that
will implement the function. Think of the function not as a mux but as
a choice of which data to load. Then ask questions like "who/what
determines (not implements) the choice?" The answer will often lead to
the appropriate coding. Also, a functional approach will more often
lead to closer coupling between that code and the requirements
documents for that code. Good requirements don't usually include
"shall have a mux to select data...," but more like "shall load
different data based on..."

Yes, exactly. Many of these independent functions that can be made
peripheral to a given function are coded as concurrent statements.
But there are times when I use concurrent signals to support easier
debugging. I don't recall the exact portion of the design, but I had
another function in this same design that was largely arithmetic. To
facilitate debugging I coded each step as concurrent so that each
intermediate value was a signal. Even if Active HDL would display
variables, they can't really be viewed properly if they are not
updated in a signal like manner. How do you view a sum of products
when it is updated in a loop as a variable? It all happens in one
instant in time when viewed in a waveform. Yes, you can use
breakpoints and such, but that is a separate issue.

Maybe there are things I can learn about this. I'll give code
debugging (vs waveform debugging) another try next time I code up some
HDL.

Rick
 
KJ

Maybe there are things I can learn about this.  I'll give code
debugging (vs waveform debugging) another try next time I code up some
HDL.

Just curious here...
- Would you turn off your monitor and use a speech synthesizer to read
what is on your display to you?
- Do you close the source window (and not view the source code via an
editor) when you're debugging HDL designs today?

Presuming the answer to these questions is 'no', then what do you
think is to be gained by not using information that is currently
available to you? Do you think you would be more productive? If so,
can you explain why?

I'm making the assumption with these questions that when you say "I'll
give code debugging (vs waveform debugging) another try" that this
would mean, among other things, not displaying waveforms, instead
using only the source code window and their tools. Which leads to the
questions above about why you think it would be better to not use
information that is available to solve the problem at hand.

But of course, based on your postings, you seem a bit more practical
than that and the reality is that you probably mean that you would try
supplementing using waveforms with source code tools such as
breakpoints and single stepping and such. On that premise...

- If a problem occurs, do you typically try to solve it by waiting for
it to happen again?
- Even if, in your experience, 'waiting for it to happen again' has
not usually masked the true problem and has allowed you to fix it, do
you think that is a good methodology?
- When you've solved problems in the past, are you always (or almost
always) able to solve it using no other information than what is
currently available right now? Looking only at present signal and
variable values at the time of the failure, with no history of what
led up to the event other than what you must infer by knowledge of the
design and must retain in your head (or perhaps scribbled down on
paper) since there is no history that can be displayed?

Presumably the answer to all of these questions is also 'no' which
suggests that making use of information that is readily available in
whatever form would be a 'good thing' that most anyone with experience
would make use of when solving the problem at hand...which then brings
us around to the wrap up...

Other than if you've measured and found logging signal activity to
disk during sim to be too high a cost, there really is no rationale
for saying "code debugging vs waveform debugging". I'm
not trying to slam you or anyone here on methods, being proficient in
using source code tools is not a handicap. But those source code
tools can only be applied to events *after* the bad thing has
happened; they are of no help in diagnosing what has *already*
occurred short of restarting the sim (i.e. turning back the clock), so
you must live with that limitation and work around it.

Source code tools do allow you to step through and watch things
unfold...but that would be using those tools for verification rather
than debug....personally, I prefer testbenches for that task.

Kevin Jennings
 
Chris Higgs

I'm not an Aldec user, but on a brief perusal of the manual, I can't see how to
do it either :(  How disappointing!  

IIRC, compiling with the debug flag (acom -dbg) will allow variables
to be traced in the same way as signals.
 
rickman

IIRC, compiling with the debug flag (acom -dbg) will allow variables
to be traced in the same way as signals.

I use the GUI and in checking the preferences I see the compiler
option "Enable Debug" is checked. Is that what you mean? I'm still
not able to view variables as waveforms.

Rick
 
Chris Higgs

I use the GUI and in checking the preferences I see the compiler
option "Enable Debug" is checked.  Is that what you mean?  I'm still
not able to view variables as waveforms.

Rick

Using Riviera-PRO on Linux (YMMV on other Aldec simulators/platforms)

View->Debug Windows enable Hierarchy Viewer (shortcut alt + 5) and
Object Viewer (alt + 6)

After elaboration, navigate to the appropriate process and the
variables declared by that process will show up in Object Viewer.
Right click->Add to->Waveform.

Alternatively use the command "wave sim:/path/to/your/process/
variable"
 
rickman

Using Riviera-PRO on Linux (YMMV on other Aldec simulators/platforms)

View->Debug Windows enable Hierarchy Viewer (shortcut alt + 5) and
Object Viewer (alt + 6)

After elaboration, navigate to the appropriate process and the
variables declared by that process will show up in Object Viewer.
Right click->Add to->Waveform.

Alternatively use the command "wave sim:/path/to/your/process/
variable"

My UI is not the same, but in the process of messing about with it to
see if it comes close to yours, I found how to do it. The Design
Browser sounds like it is similar to your Hierarchy Viewer. Once I
select the appropriate process the variables are available to add to
the waveform. It seems odd that I can add signals to the waveform
display from the source file, but not variables.

This will help with debugging variables, but it still does not
supplant the Call Stack and breakpoints because variables update in
zero time and so waveforms won't show everything that happens with
them.

Rick
 
Jonathan Bromley

[variables in the wave view]
will help with debugging variables, but it still does not
supplant the Call Stack and breakpoints because variables update in
zero time and so waveforms won't show everything that happens with
them.

But this is a red herring. Waveforms can't show
everything that happens with signals, either.

A signal's driver can be updated many times in a
given delta cycle, and only the last such update
will actually affect the signal in the upcoming
delta. (Of course the story isn't even that
simple in the general case, but for zero-delay
RTL code it's close.) You can't see these driver
updates unless you watch the code executing.

I know, of course, that people tend to do more
complicated things with variables than they do
with repeated assignment to signals, so the
problem may be more severe with variables in
practice. But there isn't much difference
in principle.

The truth is that we EEs have grown accustomed
to a level of visibility of our code's activity
that makes little sense to software people, for
whom invisible variables are a matter of routine.
We get that visibility only because almost all
the objects whose values we care about are static
and can be traced/dumped easily as a function of
simulation time. As soon as you start to do
anything interesting and software-like, you
can't do that quite so easily and it becomes
much more important to be able to trace code
execution - there's no shortage of ways to do it.

Better still is to be able to reason about your
code so that you can think your way through the
offending code's behaviour to see where you
messed-up. For me, a great way to achieve that
is to add plenty of assertions; figuring out
how to write the assertions is a powerful
encouragement to think rigorously about your
code, and if you find you *can't* write assertions
that make sense, it's a fair bet that the code
in question isn't properly designed. And well-
written assertions often find the cause of an
error long before its effect would be visible
in waves.
 
KJ

After elaboration, navigate to the appropriate process and the
variables declared by that process will show up in Object Viewer.
Right click->Add to->Waveform.

Alternatively use the command "wave sim:/path/to/your/process/
variable"

When you do that, does it show the entire history of the variable from
t=0 until t=now? Or does it only allow you to see the variable from
t=now until t=future?

KJ
 
Chris Higgs

When you do that, does it show the entire history of the variable from
t=0 until t=now?  Or does it only allow you to see the variable from
t=now until t=future?

The variable is traced from t=now onwards (this is the same behaviour
as tracing signals). Recording everything in the database in case you
want to retrospectively view it (to allow t=0 until t=now) sounds
expensive! There may be an option to do that but I've never needed it
- I trace whatever signals/variables (using wildcards) before starting
the simulation.
 
KJ

The variable is traced from t=now onwards (this is the same behaviour
as tracing signals). Recording everything in the database in case you
want to retrospectively view it (to allow t=0 until t=now) sounds
expensive! There may be an option to do that but I've never needed it

OK, good to know. For Modelsim, the 'log -r /*' logs all signals to
disk (one can also be more specific about which signals if one
chooses). I agree it *seems* like it should be expensive, but I
haven't really found that to be the case. Then when I need a signal
for debug it can be added to the wave window and the entire history is
displayed. Having the entire history of every signal available to be
waved is mighty handy.
- I trace whatever signals/variables (using wildcards) before starting
the simulation.

Since I don't generally know what assertion will fail ahead of time, I
don't know what signals I would be interested in prior to starting the
simulation. I dislike restarting and re-running simulations because I
guessed wrong about which signals I might want.

Kevin Jennings
 
rickman

[variables in the wave view]
will help with debugging variables, but it still does not
supplant the Call Stack and breakpoints because variables update in
zero time and so waveforms won't show everything that happens with
them.

But this is a red herring.  Waveforms can't show
everything that happens with signals, either.

A signal's driver can be updated many times in a
given delta cycle, and only the last such update
will actually affect the signal in the upcoming
delta.  (Of course the story isn't even that
simple in the general case, but for zero-delay
RTL code it's close.)  You can't see these driver
updates unless you watch the code executing.

I think you misrepresent what happens with a signal. Although many
assignments can be made to a signal, it is never updated until the
process reaches a stopping point, either a wait or the end of the
process. Only then is the value of the signal updated. You may feel
this is semantics, but the point is that I don't care about
assignments that don't impact the value of the signal, the intention
of the code is for them to be ignored.

A variable is different. It is updated just like a variable in
software where each intermediate value can be significant. If I can't
see those intermediate values, I have lost information about the
process which can make it harder to debug.

Perhaps there is value in single stepping multiple assignments to a
signal if you want to debug the code making those assignments. But
the way I write code this is seldom an issue. About the only time I
have multiple assignments to a signal in a process is when I first
assign a default value and later assign another value in specific
instances. This is not complex to debug and does not require single
stepping or breakpoints. Waveform viewing works just fine for that.
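
The default-then-override pattern described above might look like this
(a hedged sketch; the req/ack names are made up). The signal is
assigned twice in the process, but only the last scheduled value takes
effect when the process suspends, so the waveform shows one clean
update per clock:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity ack_gen is
  port (
    clk : in  std_logic;
    req : in  std_logic;
    ack : out std_logic);
end entity ack_gen;

architecture rtl of ack_gen is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      ack <= '0';        -- default value assigned first
      if req = '1' then
        ack <= '1';      -- overrides the default; this is the only
      end if;            -- value the signal (and waveform) ever shows
    end if;
  end process;
end architecture rtl;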

I know, of course, that people tend to do more
complicated things with variables than they do
with repeated assignment to signals, so the
problem may be more severe with variables in
practice.  But there isn't much difference
in principle.

In theory, theory and practice are the same; in practice they can
differ considerably. The reality is that I very seldom use
breakpoints and single stepping to debug signals. My logic design
methods are easy to debug using waveforms. I don't see any reason to
make that more complex than it is.

The truth is that we EEs have grown accustomed
to a level of visibility of our code's activity
that makes little sense to software people, for
whom invisible variables are a matter of routine.
We get that visibility only because almost all
the objects whose values we care about are static
and can be traced/dumped easily as a function of
simulation time.  As soon as you start to do
anything interesting and software-like, you
can't do that quite so easily and it becomes
much more important to be able to trace code
execution - there's no shortage of ways to do it.

Better still is to be able to reason about your
code so that you can think your way through the
offending code's behaviour to see where you
messed-up.  For me, a great way to achieve that
is to add plenty of assertions; figuring out
how to write the assertions is a powerful
encouragement to think rigorously about your
code, and if you find you *can't* write assertions
that make sense, it's a fair bet that the code
in question isn't properly designed.  And well-
written assertions often find the cause of an
error long before its effect would be visible
in waves.

I use assertions in my test benches. I don't use them in my target
code. Maybe there are things I can learn about that idea.

Rick
 
KJ

I use assertions in my test benches.  I don't use them in my target
code.  Maybe there are things I can learn about that idea.

Assertions should also be placed wherever possible, definitely not
restricted to testbench code. What you get then is a self checking
design which is even better but usually not as comprehensive as a self
checking testbench.

Assertions in the design can be thought of as 'better' in that they
will be active and checked with each and every instantiation of that
widget, not just in the original testbench for the widget.

Assertions in the design are usually 'not as comprehensive' in that it
is not always practical to compute all of the outputs within the
design itself. Interface handshake signal protocols can almost always
be checked; the data path might not be, short of writing a second copy of
the code. As an example, if you were to write the code for a JPEG
encoder, you would probably be better off validating correct operation
by reading files that have been computed by some separate widely used
tool that presumably has a few miles under its belt. This type of
thing though would best be put into a testbench for the encoder. If
you put that form of checking into the encoder, then each and every
instantiation would have to somehow get the file name inputs through
some private interface.
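
A hedged sketch of the kind of in-design handshake check Kevin
describes (the req/ack protocol and all names here are hypothetical):
once req asserts, it must hold until ack arrives.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity req_ack_check is
  port (
    clk : in std_logic;
    req : in std_logic;
    ack : in std_logic);
end entity req_ack_check;

architecture chk of req_ack_check is
  signal req_d : std_logic := '0';  -- req delayed one clock
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- protocol rule: req may not drop while waiting for ack
      if req_d = '1' and ack = '0' then
        assert req = '1'
          report "req dropped before ack was seen"
          severity error;
      end if;
      req_d <= req;
    end if;
  end process;
end architecture chk;
```

Because the check lives in the design (or a monitor instantiated
alongside it) rather than one testbench, it fires in every simulation
of every instantiation.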

In any case, it quickly becomes obvious which things 'could' be
checked in the design and which probably should not...and then put
those that 'could' be checked into the design so that they will be
checked forever and for always. You'll get a more robust design since
your self-checking design will catch bugs (or validate correct
operation) as that design gets reused in other applications.

On the other hand, if you don't develop any reusable widgets, it
probably doesn't matter where you put your assertions.

Kevin Jennings
 
Mike Treseler

My UI is not the same, but in the process of messing about with it to
see if it comes close to yours, I found how to do it. The Design
Browser sounds like it is similar to your Hierarchy Viewer. Once I
select the appropriate process the variables are available to add to
the waveform. It seems odd that I can add signals to the waveform
display from the source file, but not variables.

Not odd at all.
Suppose that two processes
each had a variable named cnt_v.
Since the variables are not the same,
the only way to properly
label the waves is by process.
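
In code terms (a sketch, not anyone's actual design): the two cnt_v
objects below are distinct, so a wave tool can only label them
unambiguously by their process path, e.g. .../p_a/cnt_v versus
.../p_b/cnt_v.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity two_counters is
  port (clk : in std_logic);
end entity two_counters;

architecture rtl of two_counters is
begin
  p_a : process (clk)
    variable cnt_v : natural := 0;  -- local to p_a
  begin
    if rising_edge(clk) then
      cnt_v := cnt_v + 1;
    end if;
  end process p_a;

  p_b : process (clk)
    variable cnt_v : natural := 0;  -- a different object, local to p_b
  begin
    if rising_edge(clk) then
      cnt_v := cnt_v + 2;
    end if;
  end process p_b;
end architecture rtl;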
This will help with debugging variables, but it still does not
supplant the Call Stack and breakpoints because variables update in
zero time and so waveforms won't show everything that happens with
them.

In a synchronous design,
the value of the wave is probably the one I want.
If I really need to see the
"gate by gate" (delta by delta) value,
I trace code and break on a variable value.

-- Mike Treseler
 
rickman

Not odd at all.
Suppose that two processes
each had a variable named cnt_v.
Since the variables are not the same,
the only way to properly
label the waves is by process.

Why wouldn't the tool know which process a variable is in from the
source??? Everything else comes from the source...

In a synchronous design,
the value of the wave is probably the one I want.
If I really need to see the
"gate by gate" (delta by delta) value,
I trace code and break on a variable value.

Not delta by delta, but yes, gate by gate flow. That is what I'm
saying. You don't get a choice with variables. If you need to see
how they are calculated when used for intermediate values you have to
trace the code. With signals every time the value changes, it is
reflected in the state of the signal in the waveform display and you
only need to look at the source once you have found the location of
the problem.

Of course there is no one size fits all, but waveforms seem to be the
best approach for most problems.

Rick
 
