I'd rather switch than fight!

glen herrmannsfeldt

In comp.arch.fpga rickman said:
On Apr 17, 7:17 pm, glen herrmannsfeldt <[email protected]> wrote:
(snip on test benches)
That is the point. Finding timing violations in a simulation is hard,
finding them in physical hardware is not possible to do with any
certainty. A timing violation depends on the actual delays on a chip
and that will vary with temperature, power supply voltage and process
variations between chips.

But they have to be done for ASICs, and for all other chips, as
part of the fabrication process. For FPGAs you mostly don't have
to, relying on the specifications and on the chips having been
tested appropriately in the factory.
I had to work on a problem design once
because the timing analyzer did not work or the constraints did not
cover everything (I firmly believe it was the tools, not the
constraints, since it failed on a number of different designs). We
tried finding the chip
that failed at the lowest temperature and then used that at an
elevated temperature for our "final" timing verification. Even with
that, I had little confidence that the design would never have a
problem from timing. Of course on top of that the chip was being used
at 90% capacity. This design is the reason I don't work for that
company anymore. The section head knew about all of these problems
before he assigned the task and then expected us to work 70 hour work
weeks. At least we got them to buy us $100 worth of dinner each
evening!

One that I worked with, though not at all at that level, was
a programmable ASIC (for a systolic array processor). For some
reason that I never learned, the timing was just a little bit off
on writes to the internal RAM. The solution was to use
two successive writes, which seemed to work. In the usual operation
mode, the RAM was initialized once, so the extra cycle wasn't much
of a problem. There were also some modes where the RAM had to
be written while processing data, such that the extra cycle meant
that the processor ran that much slower.
The point is that if you don't do static timing analysis (or have an
analyzer that is broken) timing verification is nearly impossible.

And even if you do, the device might still have timing problems.

(snip)
If you find a timing bug in the ASIC chip, isn't that a little too
late? Do you test at elevated temperature? Do you generate special
test vectors? How is this different from just testing the logic?

It might be that it works at a lower clock rate, or other workarounds
can be used. Yes, it is part of testing the logic.

(snip)
The number of clocks is irrelevant. I don't consider timing issues of
crossing clock domains to be "timing" problems. There you can only
solve the problem with proper logic design, so it is a logic
problem.

Yes, there is nothing to do about asynchronous clocks. It just has
to work in all cases. But in the case of supposedly related
clocks, you have to verify it. There are designs that have one
clock a multiple of the other clock frequency, or multiple phases
with specified timing relationship. Or even single clocks with
specified duty cycle. (I still remember the 8086 with its 33% duty
cycle clock.)

With one clock you can run combinations of voltage, temperature,
and clock rate, not so hard but still a lot of combinations.
With related clocks, you have to verify that the timing between
the clocks works.

-- glen
 
R

rickman

Self-documenting and more.

I look at many aspects of a strongly typed language as encouraging or
requiring "verbosity" that would otherwise need comments to explain.
Hence we have different types for vectors of the same base element
type, based on how the contents are to be interpreted, and/or
limitations of the data contained. Integer subtypes allow you to
constrain (and verify) the contents more thoroughly, at least
numerically.

By choosing the appropriate data types, you are telling the tool and
subsequent reviewers/users/maintainers something about your code, and
how it works (and sometimes more importantly, how it doesn't or won't
work.)

I used to use boolean for most signals that were used as controls for
ifs and such. But in the simulator these are displayed as values
rather than the 'scope-style trace used for std_logic, which I find much
more readable. So I don't use boolean. I seldom use enumerated
types. I find nearly everything I want to do works very well with
std_logic, unsigned, signed and integers with defined ranges. I rely
on comments to explain what is going on, because when it is not clear
from reading the code, I think there is little that using various
types will add to the picture.

By using assertion statements, you can not only document assumptions
and limitations of your code, but also ensure that those are met.

I've never used assertions in my synthesized code. I hate getting
warnings from the tools, so I don't like to provoke them. There are
times I use initial values in declarations of signed or unsigned
signals to avoid warnings I get during simulation. But then these
produce warnings in synthesis, so you can't always win.
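
For example (a made-up signal, and tool behavior varies), something
like this keeps numeric_std quiet about metavalues at time zero:

  -- initial value in the declaration: simulation starts from zero
  -- instead of 'U', so numeric_std stops warning about metavalues,
  -- though some synthesis tools then warn the initial value is ignored
  signal acc : unsigned(15 downto 0) := (others => '0');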

Can you give an example of an assertion you would use in synthesized
code?

Some of these features are enforced by the compiler, some are enforced
by standards-compliant implementations. But they are enforced, which
is more than we can say about comments. How many times have you seen
comments that were clearly written for a previous version of the code,
but not completely updated in later revisions to that code?

I only worry about my comments...

Rick
 
Andy

I don't know of any assertions that are used by the synthesis tool,
but I do use assertions in my RTL to help verify the code in
conjunction with the testbench.

The most common occurrence is with counters. I can use integer
subtypes with built-in bounds checking to make sure a counter never
overflows or rolls over on its own (when the surrounding circuitry
should never allow that to happen, but if it did, I would want to know
about it first hand). Or I can use assertion statements with unsigned
when the allowable range for a counter is not 0 to 2**n-1.
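
Something like this sketch (entity and names invented) shows both
flavors:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity count_demo is
    port (clk, rst, inc : in std_logic);
  end entity count_demo;

  architecture rtl of count_demo is
    -- integer subtype: simulation's built-in bounds check trips the
    -- moment the counter would leave 0 to 9
    signal count  : integer range 0 to 9 := 0;
    -- unsigned counter whose legal range is also 0 to 9,
    -- not 0 to 2**4-1
    signal ucount : unsigned(3 downto 0) := (others => '0');
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if rst = '1' then
          count  <= 0;
          ucount <= (others => '0');
        elsif inc = '1' then
          count  <= count + 1;   -- simulation error if this passes 9
          ucount <= ucount + 1;  -- counts past 9 silently, hence the assertion
        end if;
      end if;
    end process;

    -- documents and checks the allowable range of the unsigned counter
    assert ucount <= 9
      report "ucount out of range"
      severity error;
  end architecture rtl;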

If I know that a set of external interface strobe inputs should be
mutually exclusive, and I have optimized the circuit to take advantage
of that, then I use an assertion to verify it. It would be nice if the
synthesis tool recognized the assertion, and optimized the circuit for
me, but I'll take what I can get.
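
For the strobe case, a concurrent assertion (strobe names hypothetical)
sits in the RTL right next to the logic that relies on it:

  -- documents, and checks in simulation, that the two strobes are
  -- never asserted in the same cycle
  assert not (rd_stb = '1' and wr_stb = '1')
    report "rd_stb and wr_stb must be mutually exclusive"
    severity error;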

I'm rarely concerned with what waveforms look like, since the vast
majority of my debugging is with the source level debugger, not the
waveform viewer.

I suspect that if your usage is constrained to the data types you
listed (except bounded integer subtypes), you may do well with
Verilog. But given that you may not use a lot of what is available in
VHDL, it would be worthwhile to compare the productivity gains from
using more of the capabilities of the language you already know, to
the gains from changing to a whole new language.

Andy
 
Cesar

And much of the "verboseness" in VHDL can be mitigated with tools like
Emacs or Sigasi's product.  And much of the other perceived
verboseness can be overcome by writing "modern" code: single process,
using variables, functions, procedures (the sort of thing some of us
do all the time!)

BTW, I have been hearing about VHDL "modern" code and the methods
you enumerate for a long time. Since typical VHDL books do not deal
with coding style, do you know of any VHDL book explaining this
modern coding style?

César
 
Martin Thompson

Andy said:
Self-documenting and more.

I look at many aspects of a strongly typed language as encouraging or
requiring "verbosity" that would otherwise need comments to explain.
Hence we have different types for vectors of the same base element
type, based on how the contents are to be interpreted, and/or
limitations of the data contained. Integer subtypes allow you to
constrain (and verify) the contents more thoroughly, at least
numerically.

By choosing the appropriate data types, you are telling the tool and
subsequent reviewers/users/maintainers something about your code, and
how it works (and sometimes more importantly, how it doesn't or won't
work.)

Indeed so - capturing the knowledge that gets lost when you just use a
"bag-of-bits" type for everything.
By using assertion statements, you can not only document assumptions
and limitations of your code, but also ensure that those are met.

Yes, I sprinkle assertions through both RTL and testbench code - they
usually trigger when I come back to the code after several weeks ;)
Some of these features are enforced by the compiler, some are enforced
by standards-compliant implementations. But they are enforced, which
is more than we can say about comments. How many times have you seen
comments that were clearly written for a previous version of the code,
but not completely updated in later revisions to that code?

Well said - I had a feeling that was what you meant by
compiler-enforced comments: it's the whole of the code which should be
used as documentation.

Cheers,
Martin
 
Martin Thompson

rickman said:
Or do you think it has to do with the fact that the tools do a better
job so that the efficiencies of using assembly language and
instantiating components is greatly reduced?

Well, that's the reason we *had* to use assembly in the past: the
tools weren't up to the job of *allowing* us to specify the behaviour
in a higher-level way. I know very few people who would choose to
write assembly unless they have a problem they just can't solve in C
for example.
I am going to give Emacs a try this summer when I have more free
time.

Emacs is quite a big investment, but it's a tool for life. I wonder
what looks I'll garner when I'm still using it in 20 years time :)
I don't see the things you mention as being a solution because
they don't address the problem.

What is the problem - that there's a lot of typing? If that's it then
they *do* solve the problem. VHDL-mode pretty much makes it so you
only have to type in the bits that really matter, all the boilerplate
is done for you (up to and including testbenches if you want).
The verboseness is inherent in VHDL.

To some extent it is, but it's there to be used (as Andy pointed out).
VHDL has a lot less overhead when you use a single process inside each
entity. And the instantiation matching ports to wires (which looks
terribly verbose and is a pain to do in a copy/paste style) stops a
lot of silly "positioning" mistakes. (You could always choose to just
do a positional instantiation IIRC).
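
For instance, with a made-up fifo entity, the verbose form versus the
terse one:

  -- named association: wordy, but immune to port-order mistakes
  u_fifo : entity work.fifo
    port map (
      clk  => clk,
      din  => tx_data,
      dout => rx_data);

  -- positional association: terse, but silently wrong if the entity's
  -- port order ever changes
  u_fifo2 : entity work.fifo port map (clk, tx_data, rx_data);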
Type casting is something that makes it verbose.

Yes, but it forces you to show you know what you're doing, rather
than the compiler just allowing you to do it: it works for now, but
in the future some corner case comes up which breaks the implicit
assumptions that you have put in the code. With strong typing, you
have to be explicit about the assumptions.
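
A small example of what that explicitness buys (signals invented):

  -- declarations:
  signal data_in  : std_logic_vector(7 downto 0);
  signal data_out : std_logic_vector(7 downto 0);
  signal count    : unsigned(7 downto 0);

  -- in the architecture body, with numeric_std in scope:
  count    <= unsigned(data_in);            -- states how these bits are read
  data_out <= std_logic_vector(count + 1);  -- arithmetic as unsigned, then
                                            -- explicitly cast back to bits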
That can often be mitigated by using the right types in the right
places.

I'd say it's almost always mitigated by using the right types in the
right places. The times it causes me pain is when I have to
instantiate a 3rd party entity (usually an FPGA-vendor RAM) which has
bag-of-bits vectors everywhere, (even on the address lines for example).
I never use std_logic_vector anymore.

I wouldn't go that far, but I use them a lot less than some others do
:)

Cheers,
Martin
 
Martin Thompson

Cesar said:
BTW, I have been hearing about VHDL "modern" code and the methods
you enumerate for a long time. Since typical VHDL books do not deal
with coding style, do you know of any VHDL book explaining this
modern coding style?

Sorry, no.

Maybe some of us should take some of the sample code from the classic
texts and rewrite it "our way" :)

Cheers,
Martin
 
Nial Stewart

The point is that if you don't do static timing analysis (or have an
analyzer that is broken) timing verification is nearly impossible.

And even if you do, the device might still have timing problems.


Can you expand on this Glen?

As I have always understood it, one of the bedrocks of FPGA design is
that when an FPGA design has passed properly constrained static timing
analysis it will always work (from a timing point of view).



Nial.
 
glen herrmannsfeldt

Can you expand on this Glen?
As I have always understood it, one of the bedrocks of FPGA design
is that when an FPGA design has passed properly constrained static
timing analysis it will always work (from a timing point of view).

Well, some of the comments were regarding ASIC design, where
things aren't so sure. For FPGA designs there is, as you say, the
caveat of "properly constrained", which isn't met by all design and
tool combinations. One that I have heard of, though haven't actually
tried, is having a logic block where the delay is greater than
one clock cycle, but less than two. Maybe some tools can do that,
but I don't believe that all can.

-- glen
 
rickman

glen said:
(snip on test benches)



But they have to be done for ASICs, and for all other chips, as
part of the fabrication process. For FPGAs you mostly don't have
to, relying on the specifications and on the chips having been
tested appropriately in the factory.

I don't follow your reasoning. Why is finding timing violations in
ASICs any different from FPGAs? If the makers of ASICs can't
characterize their devices well enough for static timing analysis to
find the timing problems then ASIC designers are screwed.

One that I worked with, though not at all at that level, was
a programmable ASIC (for a systolic array processor). For some
reason that I never learned, the timing was just a little bit off
on writes to the internal RAM. The solution was to use
two successive writes, which seemed to work. In the usual operation
mode, the RAM was initialized once, so the extra cycle wasn't much
of a problem. There were also some modes where the RAM had to
be written while processing data, such that the extra cycle meant
that the processor ran that much slower.


And even if you do, the device might still have timing problems.

You keep saying that, but you don't explain.
It might be that it works at a lower clock rate, or other workarounds
can be used. Yes, it is part of testing the logic.

(snip)



Yes, there is nothing to do about asynchronous clocks. It just has
to work in all cases. But in the case of supposedly related
clocks, you have to verify it. There are designs that have one
clock a multiple of the other clock frequency, or multiple phases
with specified timing relationship. Or even single clocks with
specified duty cycle. (I still remember the 8086 with its 33% duty
cycle clock.)

With one clock you can run combinations of voltage, temperature,
and clock rate, not so hard but still a lot of combinations.
With related clocks, you have to verify that the timing between
the clocks works.

But you can't verify timing by testing. You can never have any level
of certainty that you have tested all the ways the timing can fail.
If the clocks are related, what exactly are you testing, that they
*are* related? Timing is something that has to be correct by
design.

Rick
 
mike v.

I also use separate sequential and combinatorial always blocks. At
first I felt that I should be able to have just a single sequential
block, but quickly became accustomed to two blocks, and it now feels
natural and I don't think it limits my ability to express my intent at
all. Most of the experienced designers I work with use this style but
not all of them.
 
Kim Enkovaara

glen said:
combinations. One that I have heard of, though haven't actually
tried, is having a logic block where the delay is greater than
one clock cycle, but less than two. Maybe some tools can do that,
but I don't believe that all can.

That's just a normal multicycle path; it has been a standard feature
in tools for a long time. At least Altera, Xilinx, Synplify,
PrimeTime and Precision support it.

--Kim
 
Kim Enkovaara

rickman said:
But you can't verify timing by testing. You can never have any level
of certainty that you have tested all the ways the timing can fail.

Especially with ASICs you can't verify the design by testing. There
are so many signoff corners and modes in the timing analysis. The
old worst/best case in normal and test mode is long gone. Even
6-corner analysis in 2+ modes is only for low-end processes with big
extra margins. With multiple adjustable internal voltage areas,
power-down areas, etc., the analysis is hard even with STA.

--Kim
 
Nial Stewart

Well, some of the comments were regarding ASIC design, where
things aren't so sure. For FPGA designs there is, as you say, the
caveat of "properly constrained", which isn't met by all design and
tool combinations. One that I have heard of, though haven't actually
tried, is having a logic block where the delay is greater than
one clock cycle, but less than two. Maybe some tools can do that,
but I don't believe that all can.


As Kim says, multi-cycle paths have been 'constrainable' in any FPGA
tool I have used for as long as I can remember.


Nial.
 
Patrick Maupin

Especially with ASICs you can't verify the design by testing. There
are so many signoff corners and modes in the timing analysis. The
old worst/best case in normal and test mode is long gone. Even
6-corner analysis in 2+ modes is only for low-end processes with big
extra margins. With multiple adjustable internal voltage areas,
power-down areas, etc., the analysis is hard even with STA.

For the record, I agree that lots of static analysis is necessary
(static timing, model checking, etc.). The thesis when I started this
sub-thread is that what the *language* gives you (VHDL vs. Verilog)
is such a small subset of possible checking as to be of little use. I
will now add that it comes at a huge cost (in coding things just
right).

Regards,
Pat
 
Andy

Other than twice the declarations, unintentional latches, explicitly
coding clock enables, simulation penalties, etc., using separate
combinatorial and sequential blocks is just fine.

Most designers here use single clocked processes / always blocks.
Those that don't are 'encouraged' to.
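
For reference, the single-clocked-process style looks something like
this (names made up):

  -- registers only: no separate combinatorial block, no latch risk,
  -- and the clock enable is just an if around the assignments
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        count <= (others => '0');
      elsif en = '1' then
        count <= count + 1;
      end if;
    end if;
  end process;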

Andy
 
Patrick Maupin

Other than twice the declarations, unintentional latches, explicitly
coding clock enables, simulation penalties, etc., using separate
combinatorial and sequential blocks is just fine.

Unintentional latches don't happen if you use a consistent naming
style with, e.g. 'next_x' and 'x'.

I don't think simulation penalties happen if the simulator is halfway
decent.

Twice the declarations is a slight issue, but if you do reg [2:0] x,
next_x; it's not too bad at all.

Explicitly coding clock enables -- not sure what you mean here --
that's an if statement no matter how you slice it.

Regards,
Pat
 
KJ

Unintentional latches don't happen if you use a consistent naming
style with, e.g. 'next_x' and 'x'.

Ha, ha, ha...having a naming convention prevents latches???!!!

Ummmmmm, noooooooo, don't think so, but thanks for the possibly
unintended humor.

Whether you have a naming convention or not, latches will be created
when assignment statements covering every path are not included in an
unclocked process, totally independent of how they are named...end of
story.
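
To spell it out (the signal names don't matter, which is the point),
here is the classic way the latch gets made:

  -- unclocked process where one path assigns nothing: q must hold its
  -- old value when sel = '0', so the tool infers a latch
  process (sel, a)
  begin
    if sel = '1' then
      q <= a;
    end if;  -- no else and no default assignment: latch, however named
  end process;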

I suppose your point is that calling things 'next_x' and 'x' then
makes it easier to do a *manual* inspection and perhaps catch such a
missing assignment, but that is a far, far stretch from your actual
statement "Unintentional latches don't happen if...". Andy, on the
other hand, would be on much firmer ground had he said "Unintentional
latches don't happen if you only use clocked processes"...he didn't
explicitly say that, but I'm fairly certain he would agree.

Yes, you can do things to make *manual* inspections better...which is
like saying it hurts my head less if I use a rubber hammer to hit my
head than a steel one...but it is a far better process improvement to
just not hit my head at all with any hammer.

KJ
 
Patrick Maupin

I suppose your point is that calling things 'next_x' and 'x' then
makes it easier to do a *manual* inspection and perhaps catch such a
missing assignment, but that is a far, far stretch from your actual
statement "Unintentional latches don't happen if...". Andy, on the
other hand, would be on much firmer ground had he said "Unintentional
latches don't happen if you only use clocked processes"...he didn't
explicitly say that, but I'm fairly certain he would agree.

Yes, I should have been more clear about that. Any decent synthesizer
or simulation tool will report latches, but sometimes the reports are
hard to interpret. If you use a consistent naming convention like I
have described, it is easy to find the latches, and also easy to
write a script to find them.

And I agree that you won't have latches if all your processes are
clocked, but latches are much easier to detect and rectify than some
other possible logic problems.

Regards,
Pat
 
KJ

Yes, I should have been more clear about that.

Agreed, you're not very clear when you have statements like this from
your previous post...
Unintentional latches don't happen if you use a consistent naming
style with, e.g. 'next_x' and 'x'.

followed up with statements like this...
If you use a consistent naming convention like I
have described, it is easy to find the latches,

So, first you say that the naming convention by itself will prevent
unintentional latches and then follow that up to say that the naming
convention helps you to *find* the unintended latches that couldn't be
created in the first place...hmmm....yes, I agree, not very clear.

Both statements indicate that you may be oblivious to the simple
point that using non-clocked processes makes it easy to create your
own problems (i.e. the latches) that are easily avoided in the first
place...
And I agree that you won't have latches if all your processes are
clocked,

Oops, I guess not, because now it seems that you do get the point that
clocked processes for the most part avoid the unintended latch...but
based on the earlier comments I guess you must not practice it or
something for some reason...You admit it avoids a potential design
problem, but don't use it because....hmmmm....well, perhaps you have a
sponge hammer...

Ah well...as long as there are textbooks I guess there will always be
those disciples that think that separating combinatorial logic from
the register description actually is of some value to somebody,
somewhere at some point in time...but inevitably they come up short
when trying to demonstrate that value.
easy to find the latches, and also easy to
write a script to find them

Then they will trot out the methods they use to minimize the problem
that others do not even have.

While those disciples are steadfast in their belief, it usually
doesn't get across to them that the value they perceive is actually
negative, it is costing them...and they are left clinging to the only
thing they have left which is always a statement of the form "That's
the way I do it, I'm comfortable with it, I feel I'm productive doing
it this way"

Every now and then, it seems like good sport to challenge those folks
to see if they have anything better to offer, but they never do.
but latches are much easier to detect and rectify than some
other possible logic problems.
And much easier to avoid in the first place too...with the correct
methodology (hint: that would be the one that avoids using unclocked
processes)

Just having fun...like I said, every now and then it's good sport to
poke fun at the people who make their own problems.

Kevin Jennings
 
