Code Style - Default Value of Signal in Process

Analog_Guy

Hi,

Any comments about the following code with regard to setting a default value of signal 'a'? It appears that they both compile and synthesize fine, but are there any comments related to coding style ... or maybe other factors I am overlooking?

In this process, 'a' is always assigned a default of '0' before the value of 'b' is checked. If 'b' is '1', then 'a' ends up getting assigned twice.

PROCESS (resetn, clock)
BEGIN
    IF (resetn = '0') THEN
        a <= '0';
    ELSIF (clock = '1' AND clock'EVENT) THEN
        a <= '0';
        IF (b = '1') THEN
            a <= '1';
        END IF;
    END IF;
END PROCESS;


In this process, 'a' is only assigned once, depending on the value of 'b'.

PROCESS (resetn, clock)
BEGIN
    IF (resetn = '0') THEN
        a <= '0';
    ELSIF (clock = '1' AND clock'EVENT) THEN
        IF (b = '1') THEN
            a <= '1';
        ELSE
            a <= '0';
        END IF;
    END IF;
END PROCESS;
 
Hubble

Analog_Guy said:
Hi,

Any comments about the following code with regard to setting a default value of signal 'a'? It appears that they both compile and synthesize fine, but are there any comments related to coding style ... or maybe other factors I am overlooking?
<snip code samples>

Note that signal assignments only update the "projected waveform" of a
signal. The effective value of a is changed by the simulator "one
delta" after the rising edge of the clock, in both cases. If you do

a <= '0';
a <= '1';

inside a process, this is equivalent to

a <= '0' after 0 ns;
a <= '1' after 0 ns;

The first assignment plans to set a to 0 in the next simulation cycle.
The second removes this projection and plans to set a to '1' in the
next cycle. The effective value of a is assigned only *once*.

So both examples are functionally equivalent.

Hubble.
 
Mike Treseler

Analog_Guy wrote:

<snip code samples>

The two processes are equivalent.
I think the second is easier to read.

-- Mike Treseler
 
KJ

Any comments about the following code with regard to setting a default value of signal 'a'?

As a general rule I use (and prefer) the second approach (explicitly assigning 'a' in both the 'if' and 'else' branches), since it tends to keep all assignments to a signal in the same general area of the source code (which helps when you're perusing code). At times, though, I'll break this general rule if the logic is getting too busy (i.e. many nested if/case, etc.) and put a second signal assignment at the top as a default, adding a comment to the effect of "Default -- unless overridden below". I never bend it to the extent that there are multiple assignments spread across multiple different logic structures.
In this process, 'a' is always assigned a default of '0', before the value of 'b' is checked. If 'b' is '1', then 'a' ends up getting assigned twice.

And all that will probably mean is that the simulation could run some
very small fraction slower. I doubt that this time difference would be
at all noticeable for any actual design.

KJ
 
Mark McDougall

Mike said:
I think the second is easier to read.

Whilst I acknowledge Mike's knowledge and experience, I humbly disagree.

The issue is perhaps better considered when *many* different signals are
involved and the logic is perhaps a little more complex for assigning
'non-default values'. Another good example is in a state machine,
whereby one can potentially avoid assigning values in many different states.

FWIW my background is actually software, so perhaps that's why I have a
different opinion?!?

Regards,
 
KJ

The issue is perhaps better considered when *many* different signals are
involved and the logic is perhaps a little more complex for assigning
'non-default values'. Another good example is in a state machine, whereby
one can potentially avoid assigning values in many different states.

Generally I find in a state machine that the 'default' assignment that is needed is not some specific default value but simply to retain the current value, which of course requires no line of code at all.

Many times, when there are only a couple of places where assignments differing from the default value are needed, it is 'usually better' to move those assignments out of the state machine entirely and simply assign the signal as a decode of those particular states, something like:

if ((Current_State = Do_Something) or (Current_State = Do_Something_Else)) then
    a <= '1';
else
    a <= '0';
end if;

'Usually better' means that in a lot of cases (not all, hence the 'usually') the signal 'a' does not really need to be a function of the next state of the state machine; it can instead be a function of the current state. If that is the case, then 'a' will not be involved with all of the next-state decoding logic and will tend to have higher performance (hence the 'better'). The more signals there are like that cluttering up the state machine, the less understandable/supportable the code tends to become.

Another less tangible 'better' is that the dependencies of signal 'a' are a lot clearer in the source code, since all the assignments to 'a' are spatially grouped together in the source. This will also tend to clean up the state machine code, since it won't be cluttered with assignments to signals that don't actually require the 'next' state of the state machine but can get by just fine using the 'current' state.

KJ
 
Andy

In clocked processes (where I do almost all my coding anyway) I use
either method, depending on the complexity of the code. Especially WRT
state machines, I use default assignments a lot, to keep the FSM code
itself a little cleaner.

In combinatorial processes, default assignments to every signal are the
simplest, safest way to avoid latches. I've read coding style guides
that emphasize having an "else" for every "if", etc., and yet debugged
a lot of code that followed that rule and still had unassigned signals
resulting in latches. Default assignments are also very easily audited
in code reviews, and then you don't have to worry about "elses" for
every "if", etc.
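As a hedged sketch of that approach (the signal and port names here are invented for illustration, not taken from the thread), defaults at the top of a combinational process cover every path the decoding below leaves unwritten, so no latch can be inferred:

```vhdl
-- Sketch only: 'sel', 'a', 'b', 'y' and 'err' are invented names.
-- The defaults at the top cover every branch not written explicitly,
-- so no path through the process leaves a signal unassigned (no latch).
process (sel, a, b)
begin
    y   <= '0';   -- default, unless overridden below
    err <= '0';   -- default, unless overridden below
    case sel is
        when "01"   => y <= a;
        when "10"   => y <= b;
        when "00"   => null;       -- defaults stand
        when others => err <= '1';
    end case;
end process;
```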

In general though, I avoid combinatorial processes if at all possible.
With variables, and signal assignments outside the clocked "if" clause
(results in combinatorial logic after the registers), the uses for the
combinatorial process have diminished greatly.
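A minimal sketch of that pattern (names invented for illustration): the assignment inside the clocked "if" registers the variable, and the signal assignment after it becomes combinational decode of the register output, with no separate combinatorial process needed:

```vhdl
-- Sketch only: 'count' and 'rollover' are invented names.
process (clk)
    variable count : natural range 0 to 15 := 0;
begin
    if rising_edge(clk) then
        count := (count + 1) mod 16;   -- 'count' becomes a register
    end if;
    -- Outside the clocked "if": combinational logic after the register.
    if count = 15 then
        rollover <= '1';
    else
        rollover <= '0';
    end if;
end process;
```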

Andy
 
KJ

Andy said:
In clocked processes (where I do almost all my coding anyway) I use
either method, depending on the complexity of the code. Especially WRT
state machines, I use default assignments a lot, to keep the FSM code
itself a little cleaner.
Since the 'state machine control' logic in a state machine expresses the
conditions under which the 'current state' is supposed to change to some
other state, the default assignments are of no use there since the 'default'
is to stay in the current state (whatever state that may be) not in some
hard coded state.

That would imply that when using default assignments in your state machine
code it is for setting other state machine outputs. By intermingling
equations for outputs and equations for state coding together, in some sense
you've already dirtied your state machine code. A decent percentage of the
time though I've found that those other outputs don't really need to depend
on the next state (which would then force the equations for those outputs to
be expressed in the state machine decode logic) but instead could be decoded
off of the current state which means that those equations for the outputs
could be moved completely out of the state machine logic and written more
cleanly all by themselves....and tend to have higher performance to boot.

I don't disagree with you that there are times when you do need to embed those output equations in with the state machine logic, because you really do need to set them based on the 'next' state; then the use of default assignments would be useful. But I don't think it should happen, as you say, 'a lot'.
In combinatorial processes, default assignments to every signal are the
simplest, safest way to avoid latches. I've read coding style guides
that emphasize having an "else" for every "if", etc., and yet debugged
a lot of code that followed that rule and still had unassigned signals
resulting in latches. Default assignments are also very easily audited
in code reviews, and then you don't have to worry about "elses" for
every "if", etc.

Agreed. Here the use of defaults is a much cleaner approach.
In general though, I avoid combinatorial processes if at all possible.
With variables, and signal assignments outside the clocked "if" clause
(results in combinatorial logic after the registers), the uses for the
combinatorial process have diminished greatly.

Diminished to near zero in my opinion.

KJ
 
Andy

I agree with you, it just depends on what the "default" behavior of the
output is: is it a pulsed output, which defaults to off, unless the SM
turns it on (and it is up to the SM to keep it on for long enough), or
is it a modal output, where it retains its previous value unless
changed by the SM. In the case of the latter, I usually put a comment
about it in the section where the other defaults are explicitly set.
Either way, the coding in the SM is usually simpler and cleaner that
way.

Andy
 
Jim Lewis

Hi,
For simple processes like the one you have shown, I
prefer to use the second process.

However, for statemachines, I default all outputs to their non-driving value (as you show in the first process) and then only assign a value to an output when it is the driving value. If used consistently, this will make your statemachine code shorter (50% or more).
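A sketch of that style (the states and outputs below are invented names, not code from the thread): every output is defaulted once at the top of the clocked branch, and each state then writes only the outputs it actively drives:

```vhdl
-- Sketch only: states and outputs are invented names.
process (clock, resetn)
begin
    if resetn = '0' then
        state <= idle;
        start <= '0';
        done  <= '0';
    elsif rising_edge(clock) then
        -- defaults: all outputs to their non-driving value
        start <= '0';
        done  <= '0';
        case state is
            when idle =>
                if go = '1' then
                    start <= '1';       -- drive only when active
                    state <= running;
                end if;
            when running =>
                if finished = '1' then
                    done  <= '1';
                    state <= idle;
                end if;
        end case;
    end if;
end process;
```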

As long as one or the other coding style is used consistently (i.e. not mixed together), the code is readable. Once you get used to defaulting a statemachine's outputs and only driving the active value, I find it faster to sum up what the code is doing. Those of us who prefer 2-process statemachines find other value in this style too.

Cheers,
Jim
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jim Lewis
Director of Training mailto:[email protected]
SynthWorks Design Inc. http://www.SynthWorks.com
1-503-590-4787

Expert VHDL Training for Hardware Design and Verification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
Jim Lewis

KJ said:
Generally I find in a state machine that the 'default' assignment that is
needed is not some specific default value but instead to simply retain the
current value which of course requires no line of code at all.

Don't forget, if you are using a 1-process statemachine, in hardware there is a subtle difference between defaulting an output and leaving it alone. If you default an output, then your resulting hardware has combinational logic that feeds into only the D input of a register.

On the other hand, if you do not assign to a signal under some conditions, then you are implying a hardware hold condition. This results in some logic feeding into the D input of a register and some feeding into the load enable (hold) of the register; logic reduction then happens.

It would be interesting to see if there were generalizations that could be made about which is faster and/or smaller in hardware. I suspect that any consistent style will do well and that randomly mixing styles may result in much larger and slower hardware.

Cheers,
Jim



--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jim Lewis
Director of Training mailto:[email protected]
SynthWorks Design Inc. http://www.SynthWorks.com
1-503-590-4787

Expert VHDL Training for Hardware Design and Verification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
KJ

Jim Lewis said:
Don't forget, if you are using a 1-process statemachine, in hardware there is a subtle difference between defaulting an output and leaving it alone. If you default an output, then your resulting hardware has combinational logic that feeds into only the D input of a register.

But generally this is not some choice that you get to make; it is dictated by whatever function it is you are trying to implement.
On the other hand, if you do not assign to a signal under
some conditions, then you are implying a hardware hold condition.
This results in some logic feeding into a D input of a register
and some feeds into the load enable (hold) of the register -
then logic reduction happens.

It would be interesting to see if there were generalizations that could be made about which is faster and/or smaller in hardware. I suspect that any consistent style will do well and that randomly mixing styles may result in much larger and slower hardware.

Assuming that the routing resources to the 'D' input and the 'Clock Enable'
input of a flip flop were comparable (years ago, like in the Xilinx 3K
series days, that was not the case) I would suspect that you might be able
to get better performance if the clock enable is used.

The reasoning is that there is a 'cone of logic' leading up to the 'D' input and another 'cone of logic' leading up to the 'Clock Enable' input. If everything instead were forced to go through one 'cone' leading into the 'D' input, then theoretically this single cone, which does not take advantage of the clock enable input, should be at least as deep as it was when the clock enable was used, or perhaps deeper now that the clock enable path logic has been smushed in. In any case, it won't be any shorter.

In actual practice I would expect the synthesis tools to know which is the better way to implement when targeting a specific device, since that's what they get paid to do. I would expect that decision to be both a function of the specific user logic to be implemented and of knowledge of the target technology.

In any case, I don't see that you would ever 'randomly mix styles'. Since defaulting a signal to a value is functionally different from leaving it alone, you indirectly 'choose' the style by how your code is written, which is ultimately determined by what function it is you're trying to implement; often there is not really a choice to make.

KJ
 
KJ

Jim Lewis said:
However, for statemachines, I default all outputs to their non-driving value (as you show in the first process) and then only assign a value to an output when it is the driving value. If used consistently, this will make your statemachine code shorter (50% or more).

Not following how the two process code would be any shorter, let alone 50%
(I assume you mean as compared to the equivalent one process approach).
Could you provide an example?
As long as one or the other coding style is used consistently
(ie: not mixed together), the code is readable.

Agreed, two process can be just as readable as one process....although I believe you'll have more to read with the two process approach, not 50% less.
Those of us who prefer 2 process statemachines find
other value in this style too.

Care to expand on what those might be? I've been trying to find anyone who can explain 'what' they think is actually better about the two process approach that is an actual tangible benefit. Like how it would speed up the design process, create fewer errors or latent bugs....anything that one could relate back to actual design productivity or quality. Obviously I'm in the 'one process' group, but I always like to hear and discuss alternatives to maybe learn something new.

KJ
 
Andy

I seriously doubt an arbitrary consistent style will consistently
result in hardware that is smaller and/or faster (or larger and/or
slower) than an inconsistent style, unless you consistently pick the
same style that the synthesis tool prefers (if any). Synplify seems
not to care...

I often combine timers/counters in the same process with an fsm. In
doing so, I do not usually set a signal that resets/reloads the counter
from the fsm, I just reset/reload it from the fsm directly:

if rising_edge(clk) then

    -- default action of timer: count down and stop at zero
    -- (natural timer, don't try this with unsigned or slv!)
    if timer - 1 < 0 then
        timeout := true;
    else
        timeout := false;
        timer := timer - 1;
    end if;

    case state is
        when wait_on_gotta_go =>
            if gotta_go then
                go <= true;
                timer := delay_time; -- start timer
                state := wait_on_delay;
            end if;
        when wait_on_delay =>
            if timeout then
                go <= false;
                timer := idle_time; -- start timer
                state := idle;
            end if;
        when idle =>
            ...
    end case;

end if;

I have compared the results of this type of style with other, more
conventionally partitioned styles, and seen no difference, at least
with Synplify.

Andy
 
Mike Treseler

Andy said:
I have compared the results of this type of style with other, more
conventionally partitioned styles, and seen no difference, at least
with Synplify.

I can add Leonardo, Quartus and ISE to your list. I would exclude Synopsys for its poor coverage of VHDL.

For this set of tools, the style of equivalent
logic descriptions makes no significant difference
to the quality of synthesis.

Style does make a difference to the ease of reading
and maintenance of synthesis and test code.

-- Mike Treseler
 
Jim Lewis

KJ,
But generally this is not some choice that you get to make; it is dictated by whatever function it is you are trying to implement.

True - except in statemachines where you do have the freedom
to do something in a clean organized fashion or otherwise.

Assuming that the routing resources to the 'D' input and the 'Clock Enable'
input of a flip flop were comparable (years ago, like in the Xilinx 3K
series days, that was not the case) I would suspect that you might be able
to get better performance if the clock enable is used.

The reasoning is that there is a 'cone of logic' leading up to the 'D' input and another 'cone of logic' leading up to the 'Clock Enable' input. If everything instead were forced to go through one 'cone' leading into the 'D' input, then theoretically this single cone, which does not take advantage of the clock enable input, should be at least as deep as it was when the clock enable was used, or perhaps deeper now that the clock enable path logic has been smushed in. In any case, it won't be any shorter.

By putting items in separate logic cones, they cannot be minimized together.
In actual practice I would expect the synthesis tools to know which is the
better way to implement when targetting a specific device since that's what
they get paid to do. I would expect that decision to be both a function of
the specific user logic that is to be implemented along with knowledge of
the target technology.
In any case, I don't see that you would ever 'randomly mix styles'....since
defaulting a signal to a value is functionally different than leaving it
alone, you would indirectly 'choose' the style by how your code is written
which will ultimately be determined by what function it is you're trying to
implement and will often not really have a choice to make.

I agree, when you default an output to a value, you are locking into a
"no clock enable" style. This is my preference.

When you don't default an output then you have the freedom to
either assign a constant value (and put logic on D),
not assign a value (hold a value and put logic on CE),
or a mixture of the two. To clarify this, the following
simple examples illustrate this for state.

Consistently putting logic on D (state only, but applies
to outputs also):


All_Logic_On_D : process(Clk, nReset)
begin
    if nReset = '0' then
        State <= Idle ;
    elsif rising_edge(Clk) then
        case State is
            when idle =>
                if I1 = '1' then
                    State <= S1 ;
                else
                    State <= idle ;
                end if ;
            when S1 =>
                if I1 = '1' then
                    State <= S2 ;
                else
                    State <= S1 ;
                end if ;
            ....


Consistently putting logic on D and CE:

Logic_On_D_And_CE : process(Clk, nReset)
begin
    if nReset = '0' then
        State <= Idle ;
    elsif rising_edge(Clk) then
        case State is
            when idle =>
                if I1 = '1' then
                    State <= S1 ;
                end if ;
            when S1 =>
                if I1 = '1' then
                    State <= S2 ;
                end if ;
            ....


Inconsistently mixing logic between D and CE:

Mixed_Style : process(Clk, nReset)
begin
    if nReset = '0' then
        State <= Idle ;
    elsif rising_edge(Clk) then
        case State is
            when idle =>
                if I1 = '1' then
                    State <= S1 ;
                else
                    State <= idle ;
                end if ;
            when S1 =>
                if I1 = '1' then
                    State <= S2 ;
                end if ;
            when S2 =>
                if I1 = '1' then
                    State <= S3 ;
                else
                    State <= S2 ;
                end if ;
            when S3 =>
                if I1 = '1' then
                    State <= S4 ;
                end if ;
            ....


I agree with your statement that your coding indirectly 'chooses' the implementation. It would be interesting to see if a synthesis tool can move logic from the CE to the D logic cone to minimize the implementation. Anyone have evidence of this?

My conclusion is that while a favorable mixture of the two is
likely optimal, a random mixture of the two is more
likely worst case. Hence, my recommendation, don't randomly
mix the two and hence avoid coding as shown in the process
labeled Mixed_Style.

On the other hand, on many designs, the clock is slow enough
and the part has enough area, so who cares - if it functions
correctly the job is done :). Your boss never asks, "did you
get the fastest/smallest possible design?" They only ask,
"is it done yet?"

My biggest concern is readability, reviewability, and maintainability. This is another reason I push consistency. Readability (...) improves when everyone on a project consistently codes in a similar fashion.


Cheers,
Jim
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jim Lewis
Director of Training mailto:[email protected]
SynthWorks Design Inc. http://www.SynthWorks.com
1-503-590-4787

Expert VHDL Training for Hardware Design and Verification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
Jim Lewis

Not following how the two process code would be any shorter, let alone 50%
(I assume you mean as compared to the equivalent one process approach).
Could you provide an example?

Defaulting an output does the same for one process as it does for two processes. I'm just taking issue with not defaulting an output.
Agreed, two process can be just as readable as with one process....although,
I believe you'll have more to read with the two process approach not 50%
less.




Care to expand on what those might be? I've been trying to find anyone who
can explain 'what' they think is actually better about the two process
approach that is an actual tangible. Like how it would speed up the design
process, create fewer errors or latent bugs....anything that one could
relate back to actual design productivity or quality. Obviously I'm in the
'one process' group, but always like to hear and discuss alternatives to
maybe learn something new.

I think we all stick to what we know because of success.
For this reason, I use 2 process and you use 1 process.
The coding style you use is going to try to address what your
biggest hurdles have been. Is it simulation speed, is it
hardware speed/area, or is it readability/maintainability/reviewability?
What is readability? Is it most important to associate the
code with hardware or is it most important to associate the
code with the specification?

I think over time this could be an interesting discussion,
so over time, I will take you up on this. Too much other
stuff to work on right now.

Cheers,
Jim
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jim Lewis
Director of Training mailto:[email protected]
SynthWorks Design Inc. http://www.SynthWorks.com
1-503-590-4787

Expert VHDL Training for Hardware Design and Verification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
Mike Treseler

Jim said:
I think we all stick to what we know because of success.

I know that if I stick with my proven design rules, the odds of new language or tool problems are minimized. I try out design-rule improvements as a background task and hobby when my feet are out of the fire.
For this reason, I use 2 process and you use 1 process.

I was intrigued by single-process design entities early on, because I had so much trouble understanding non-trivial multi-process design entities from existing designs.
The coding style you use is going to try to address what your
biggest hurdles have been. Is it simulation speed, is it
hardware speed/area,

Not really.

or is it readability/maintainability/reviewability?

That's it.
What is readability?

It means I can understand the design intent
without running a simulation.
Is it most important to associate the
code with hardware or is it most important to associate the
code with the specification?

Specification.
That's all the boss/customer cares about.
All else is covered by design rules.
I think over time this could be an interesting discussion,
so over time, I will take you up on this. Too much other
stuff to work on right now.

That is quite often the case, but continuous
improvement is also part of the job.

-- Mike Treseler
 
KJ

Jim Lewis said:
KJ,

True - except in statemachines where you do have the freedom
to do something in a clean organized fashion or otherwise.
Not sure I follow your point. State machines are no different from any other lines of code where you have the same "freedom to do something in a clean organized fashion or otherwise", as long as you implement the required function (which is what I meant by "dictated by whatever function it is you are trying to implement") and meet all performance/area/whatever constraints.

In any case, I'll keep my statement that it is 'usually' the case that the default in a state machine is to hold the current state, not some hard-coded default value, and that other assignments that happen to occur within the state machine logic can 'usually' be factored out to work with the current state, which tends to have the side benefit of improving the readability/maintainability of the code.

'Usually' means just that, does not mean 'always'.
By putting items in separate logic cones, they cannot be minimized
together.

Sure they can, and they do. The 'clock enable' logic can be folded into the 'D' logic or left stripped out. Having the choice of either one or two separate logic paths to minimize should give some theoretical advantage. Typically in an FPGA, though, since the basic logic block is a 4-6 input LUT, many times there is no actual advantage if the final LUT stage happens to have an unused input. The best way to synthesize it is left to the synthesizer...it shouldn't affect the code.
I agree, when you default an output to a value, you are locking into a
"no clock enable" style. This is my preference.
That's fine, and unless there is some design requirement, that's what it is...a preference.
When you don't default an output then you have the freedom to
either assign a constant value (and put logic on D),
not assign a value (hold a value and put logic on CE),
or a mixture of the two. To clarify this, the following
simple examples illustrate this for state.

Consistently putting logic on D (state only, but applies
to outputs also):
<snip code samples>
I confess I only lightly perused the three code samples; I'm assuming that they purport to implement the exact same function, just varying how you write the code. Given that, here are some comments on your code.

1. What you'll find if you run this through probably any synthesis tool is that the final fitted output is exactly the same even though the source code is different. Your 'All_Logic_On_D' process may very well use clock enables, and your 'Logic_On_D_And_CE' might not use any. Whether they do or don't is a function of how the synthesis tool chose to implement it based on what it came up with for the final fitted equations. In any case, you'll get the same final set of equations regardless of the input (again, under the assumption that the three different styles really do implement logically the same stuff).

2. I count 16 lines of code in the 'All_Logic_On_D' process (starting with the 'if nReset = '0' then' line) and 12 lines of code in the 'Logic_On_D_And_CE' process. Even with your simple example, it costs you 33% extra lines of code to write things in the manner of 'All_Logic_On_D' to implement the exact same function (see point #1). Now, if you get paid by the line of code that may be a good thing, but in general every line of code has some chance of being wrong, so by having 33% extra code you have a 33% higher chance of having some lingering bug.

3. On the readability/maintainability side, both forms are roughly equal, but that is also due to the small size of the example itself. On a more complicated design, the "readability/maintainability" of the 'All_Logic_On_D' form would deteriorate sooner due to the unneeded 33% extra lines of code.

4. Never use an async reset in a state machine. If the trailing edge of the reset input is not synchronized to the clock, you can end up in an 'undefined' state. If the trailing edge is synchronized to the clock, then there is no reason not to use a synchronous coding style. People like to try to beat me up over never using async resets, but I still have yet to run across the case where one has really been needed. In any case, even the ones who like async resets tend to agree that they should never be used in a state machine. This point is off topic but still worth noting.
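For reference, the usual way to synchronize the trailing (deasserting) edge of an asynchronous reset is a two-flop reset synchronizer. This is a common-knowledge sketch with invented signal names, not code from the thread:

```vhdl
-- Sketch only: assert reset asynchronously, release it synchronously.
process (clock, resetn)
begin
    if resetn = '0' then            -- assert immediately, asynchronously
        reset_meta <= '0';
        reset_sync <= '0';
    elsif rising_edge(clock) then   -- release synchronously, two flops deep
        reset_meta <= '1';
        reset_sync <= reset_meta;
    end if;
end process;
-- 'reset_sync' can then safely feed a synchronously coded state machine.
```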
I agree with your statement that your coding indirectly 'choose'
the implementation. It would be interesting to see if a synthesis
tool can move logic from the CE to the D logic cone to minimize
the implementation. Anyone have evidence of this?

Yes, I've seen it; it makes no difference. Take your example and try it out and you'll see it too.
My conclusion is that while a favorable mixture of the two is
likely optimal, a random mixture of the two is more
likely worst case.

Nope, they will be equivalent.
Hence, my recommendation, don't randomly
mix the two and hence avoid coding as shown in the process
labeled Mixed_Style.
Agreed, but only due to source code maintainability concerns, not function
or performance.
On the other hand, on many designs, the clock is slow enough
and the part has enough area, so who cares - if it functions
correctly the job is done :). Your boss never asks, "did you
get the fastest/smallest possible design?" They only ask,
"is it done yet?"

And you may get done sooner if you're not writing more code than is
necessary.
My biggest concern is readability, reviewability, and maintainability. This is another reason I push consistency. Readability (...) improves when everyone on a project consistently codes in a similar fashion.
But that 'consistent' style should also be fair game for review, to understand the costs (i.e. the extra 33% lines of code and 33% possible extra latent bugs).

KJ
 
Andy

Mike said:
Jim Lewis wrote: ....

It means I can understand the design intent
without running a simulation.


Specification.

Amen!

Most synthesis tools are really good at giving us efficient hardware
that will match the simulated description, so long as we follow some
really simple rules to avoid problematic circuits (latches, improper
synchronization boundaries, etc.) By far, most of my functionally
complex work (i.e. the most complex requirements) has been in circuits
where tweaking the last few picoseconds of performance was utterly not
an issue. But behavior to the specification is always an issue.

Andy
 
