Rather than posting code, I'll refer to yours, since it is roughly along
the lines of what I do, and hope that my explanation is clear enough
that one can follow my reasoning (whether you agree or disagree with
it) with no more than occasional snippets of code.
First off, I don't make any religious distinction between 'state
machine' signals and 'output' signals, so I don't feel compelled to
have a separate process for outputs; I might choose to simply combine
them into a single process. The advantage (IMO): generally less code,
and somewhat more readable and maintainable code, since in many cases
it is much easier to follow logic that says "if x then go to this
state and set this output to this value", end of story.
Having said that though, I do tend to have multiple clocked processes.
I base what goes into each process on the somewhat fuzzy definition of
what things are in some sense 'related'. Things are 'related' to me if
implementing them in separate processes would mean replicating code.
An example would be three signals A, B, C that are all of the form
"if (x = '1') then.... else.... end if;"; I would most likely put
A, B, C in a single process. Of course, A, B, and C being different,
each would have some additional logic uniquely associated with it, so
within the overall "if (x = '1')...else...end if;" statement there
would be additional logic ('if', 'case', whatever) that goes into
defining them.
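A sketch of that grouping (hypothetical names throughout): A, B, and C share the outer "if (x = '1')" skeleton once, and each carries its own extra logic inside it:

```vhdl
process(Clock)
begin
   if rising_edge(Clock) then
      if (x = '1') then
         A <= D1;                      -- A's unique logic
         if (Mode = '1') then          -- B has an extra qualifier
            B <= D2;
         end if;
         case Sel is                   -- C selects between sources
            when '0'    => C <= D3;
            when others => C <= D4;
         end case;
      else
         A <= '0';
         B <= '0';
         C <= '0';
      end if;
   end if;
end process;
```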
Outputs that depend on the 'next' state will tend to get implemented
with the state machine for the simple reason that they meet the
'related' criterion. Outputs that depend only on the current state will
tend to get implemented elsewhere because they are not 'related'.
Again, no heartburn here because I'm being pragmatic rather than
dogmatic about source code positioning; I let the relationships drive
how it appears. This tends to produce more robust code (IMO) since
there is less duplicated logic that will, over time, start to diverge
because something changed 'up there' but forgot to be changed 'down
there'. By physically grouping related things, it is easier to see the
implications of a change I'm contemplating on other related signals,
and whether there is a relationship that should be maintained or
severed.
I then try to balance that out with the (again somewhat fuzzy) term
'readability'. A single process of 1000 lines of anything is too long
for me; I aim for it to fit on a screen....maybe one with somewhat high
resolution, but that's the basic idea. Scrolling back and forth while
you're trying to understand code is not productive and is disruptive,
I think.
Another criterion I use for whether things should be together in a
single process is the number of signals going in and out of that
process. I happen to really like the Modelsim 'Dataflow' window and
how it integrates with the source and wave windows, so that as I'm
debugging I can immediately see the inputs that go into producing the
one signal that I'm moseying through in order to find the root cause of
whatever it is I'm debugging. The single monolithic process that has
100 inputs and 100 outputs will show up as just a large block with all
those I/O when I click on it. But if the equation is simply A <= B and
C and is implemented in a 'screen sized' process, then the dataflow
window shows me that A depends on B and C (and possibly a few other
inputs that happen to be in that process coincidentally, because other
signals that use them were deemed 'related'), and it jumps me right to
the correct lines of code that implement the logic (because that
process fits on a screen), where I immediately see that A depends only
on B and C. You lose all of that as you put more and more things into
a process.
If either the 'lines that fit on the screen' test or the 'number of
signals in and out' test seems to be getting out of hand (again, the
fuzzy definition) I'll revisit just how 'related' these things really
are. Signals are 'more' related if separating them would mean
replicating more code. An example here would be simply a process with
a bunch of signals that are all clocked but share nothing more than a
common clock enable, so they are of the form "if (Clock_Enable = '1')
then....end if;"; those signals are 'related' by my definition since
they do share the "if (Clock_Enable = '1') then....end if;" construct.
But if that's about it, then I would have no heartburn about making two
(or more) processes, replicating the "if (Clock_Enable = '1')
then....end if;" construct to appease the 'fitting on a screen' and
'number of I/O' tests.
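So, for example (hypothetical names again), the shared construct is small enough that replicating it across two processes costs almost nothing:

```vhdl
-- Replicating the tiny clock-enable skeleton to keep each process screen sized
process(Clock)
begin
   if rising_edge(Clock) then
      if (Clock_Enable = '1') then
         A <= D1;
         B <= D2;
      end if;
   end if;
end process;

process(Clock)
begin
   if rising_edge(Clock) then
      if (Clock_Enable = '1') then
         C <= D3;
      end if;
   end if;
end process;
```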
I feel free to violate the 'screen size' rule in favor of the 'related'
rule if the situation dictates and have the multi-screen process.
In spite of the multiple clocked processes, I consider myself brethren
of the 'one process' state machine group, because my multiple clocked
processes are just one virtual clocked process; they are not the
combinatorial 'next state' process feeding into the clocked process.
Combinatorial logic is implemented using concurrent statements outside
the process. When it is implemented inside the process, I have to think
too hard and scroll back to remember whether the usage of variable 'x'
in this particular case is the input or the output of the flop. Call
me lazy on this one if you want.
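Something like this (a made-up counter, just to show the split): the concurrent statement computes the 'next' value, and inside the process a signal is always unambiguously the flop's output:

```vhdl
-- Combinatorial logic as a concurrent statement outside the process;
-- Count and Next_Count are assumed to be unsigned signals declared elsewhere.
Next_Count <= Count + 1 when (Enable = '1') else Count;

process(Clock)
begin
   if rising_edge(Clock) then
      Count <= Next_Count;  -- inside the process, Count is always the flop output
   end if;
end process;
```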
I only use variables like C language macros. In other words, if I want
a shorthand way of referring to a hunk of logic, I'll define the
equation for the variable (almost always right at the very top of the
process) and then use it wherever needed....and that would only be
because for some reason it wasn't looking right as a combinatorial
concurrent statement. Variables, when added to the Modelsim wave
window, do not show any of the signal history; signals do not have that
limitation. If it's a 'simple' function that is being implemented by
the variable then this is not a big deal, but if the function being
implemented is rather tricky then being able to display the history can
be very important. If I haven't already added the variables to the
wave window, I have to restart the simulation to add them before I say
'run'....wasted time...and if those variables then lead me back to
another entity with signals and different variables, the signals I can
wave, the variables...well, restart the sim again.
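A sketch of that 'variable as macro' usage (names invented for illustration): the equation is defined once at the top of the process and reused below without repeating it:

```vhdl
process(Clock)
   variable Load_Request : std_logic;  -- shorthand only, C-macro style
begin
   if rising_edge(Clock) then
      -- the 'macro' definition, right at the top of the process
      Load_Request := Fifo_Empty and Source_Valid and not Stall;
      if (Load_Request = '1') then
         Data_Reg <= Source_Data;
      end if;
      Loaded <= Load_Request;          -- reused without restating the equation
   end if;
end process;
```

Because the variable is assigned before it is ever read within the clocked branch, it is pure shorthand; no extra flop is inferred for it.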
The drawback of signals is that they take longer to simulate...wasted
time too. I'm trying to resurrect the test code I had comparing the
use of variables versus signals, but I seem to remember about a 10% hit
for signals. I still use signals, though, because just one blown
simulation that needs to be restarted to get a variable's history can
more than compensate for that 10%...which, for someone picking up
somebody else's code, can easily happen, since they are not familiar
with the code to begin with and don't 'know' which variables to
wave....in other words, 'supportability'. I try to give the poor
schmuck who has to pick up my code all the help I can...even if it
means they're sitting waiting for an extra 10%.
The variable people have a definite point about simulation time, but
there is really no good data to support the overall debug cycle time
being in any way better using variables. They seem to imply that they
can run 10% more test cases, but it is less than that if they were to
consider the downsides and the probabilities of them occurring (see
above about having to restart...or the extra time spent pondering what
they think the value of the variable is in their head, since they can't
wave it without restarting). They still might come out ahead using
variables (and I might too if I did that; one day I might, they do have
a point).
I rarely (veeeeeeeery rarely) use combinatorial processes. In fact, I
can't remember the last time I did; I'm pretty sure at some point I
did, but even there the sensitivity list consisted of only one or two
signals.
I never have sensitivity list issues (see above paragraph).
I never have combinatorial latches (ditto).
I never use async resets with the exception of the flip flop that
receives the external reset signal that is the start of a shift chain
for developing my internal design reset.
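That one exception looks roughly like the classic shift-chain reset bridge; this is my sketch with hypothetical names, not necessarily KJ's exact code. Only this chain sees the async external reset, and it hands the rest of the design a synchronously deasserting reset:

```vhdl
-- (Declarative region:)
--   signal Reset_Shift : std_logic_vector(2 downto 0) := (others => '1');
-- Async assert, sync deassert: the external reset only touches this chain.
process(Clock, Ext_Reset_n)
begin
   if (Ext_Reset_n = '0') then
      Reset_Shift <= (others => '1');
   elsif rising_edge(Clock) then
      Reset_Shift <= Reset_Shift(Reset_Shift'high - 1 downto 0) & '0';
   end if;
end process;
Int_Reset <= Reset_Shift(Reset_Shift'high);  -- synchronous reset for everything else
```

Int_Reset stays asserted while the external reset is low and then deasserts a few clocks after release, aligned to the clock, so downstream flops can use plain synchronous resets.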
I never have issues with some clocked signals getting cleared and
others not or going to unexpected states (see above paragraph). I have
however fixed several designs that did use asynchronous resets
inappropriately both on a board and within programmable logic.
I don't recall ever having to fix reset issues on others' designs when
synchronous resets were used...hmm, well, maybe I've just lived in a
narrow design world.
Even in a gated clock design I have not run across the need for the
async reset anywhere other than that first flip flop previously
mentioned. Go figure.
I prefer executable code over comments (but I certainly do appreciate
the comments).
I use the 'time' data type in synthesizable code. No, seriously, I do,
and for very good reason....you know, the specification we all run into
at some point that says that signal 'x' must be asserted for 2 us...and
let's see, my clock is 20 ns, no problem, figure out the proper count
values and go on....then two years down the road comes version 2.0 with
the speedup, now we can run with a 15 ns clock....and now you have a
1.5 us pulse....DOH!!...I don't have that problem (anymore) because I
use type 'time'....shamelessly leaving a cliff hanger on this one for
those that haven't figured out how I use 'time' types in synthesizable
code.
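I won't claim this is the author's method (he's keeping that a cliffhanger), but one common way to get this behavior is to let the tool do the division at elaboration time from 'time' constants:

```vhdl
-- Guessed sketch, not necessarily the author's technique: dividing one
-- 'time' value by another yields an integer, evaluated at elaboration.
constant CLOCK_PERIOD : time    := 20 ns;  -- the only line that changes for v2.0
constant PULSE_WIDTH  : time    := 2 us;   -- written exactly as the spec states it
constant PULSE_COUNT  : natural := PULSE_WIDTH / CLOCK_PERIOD;
```

Change the clock constant and the terminal count follows automatically; the 2 us requirement stays in the source as 2 us.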
Later posts on this topic talked about code automatically generated
from a particular form of a template. I couldn't care less what format
the auto code generator uses, since that will not be the 'source'; the
inputs to that code generator are the source, and that is where I'll go
for more information. If I have to dig into auto generated code to
find a problem, I will, but somebody is going to have a newly opened
service request to answer for my troubles if I find a problem.
Templates that are intended for people to use should be done with
people in mind, not a code generator.
I love to write non-synthesizable testbench code too...where I
shamelessly break just about every rule I mentioned above if needed.
I have a tendency to ramble on at times.
KJ