I'd rather switch than fight!


Patrick Maupin

And much easier to avoid in the first place too...with the correct
methodology (hint:  that would be the one that avoids using unclocked
processes)

Just having fun...like I said, every now and then it's good sport to
poke fun at the people who make their own problems.

But, the reason I was unclear to start with, is that it's been so long
since I've had an unintended latch (probably several years) that I
really don't think that hard about it at all. So you can think what
you want, but the *reason* I code the way I do isn't really that much
about latches at all (obviously, if I was *worried* about them, I
would code everything in sequential blocks, where they could never
happen, but I could have some other hard to find logic problems, which
I *have* had in the past). However, unintended latches just don't
happen for me with my coding style, so I don't worry about it until
somebody like you comes along to tell me about all the problems that
I'm causing for myself that I never knew I had!
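For the record, the kind of unintended latch we're all arguing about arises when a combinational block fails to assign an output on every path; a minimal sketch (signal names invented for illustration, not anyone's actual code):

```verilog
// Latch by accident: q is unassigned when en is low, so
// synthesis must infer a latch to hold the old value.
always @* begin
    if (en)
        q = d;
end

// Latch-free: every path through the block assigns q.
always @* begin
    q = 1'b0;        // default assignment up front
    if (en)
        q = d;
end
```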

Regards,
Pat
 

Chris Higgs

Ah well...as long as there are textbooks I guess there will always be
those disciples that think that separating combinatorial logic from
the register description actually is of some value to somebody,
somewhere at some point in time...but inevitably they come up short
when trying to demonstrate that value.

http://www.gaisler.com/doc/structdes.pdf

I'd recommend casting your eye over this presentation. It details some
of the advantages of the "2 process" coding style with a real world
example (LEON SPARC-V8 processor).

Thanks,

Chris
 

Bernd Paysan

Andy said:
Other than twice the declarations, unintentional latches, explicitly
coding clock enables, simulation penalties, etc., using separate
combinatorial and sequential blocks is just fine.

LoL. Note that there is a further difficulty in understanding this
separated code: things which conceptually belong together are spread
apart across the file. This is just too messy.
Most designers here use single clocked processes / always blocks.
Those that don't are 'encouraged' to.

I'm fully on your side.
 

Marcus Harnisch

Chris Higgs said:
I'd recommend casting your eye over this presentation. It details some
of the advantages of the "2 process" coding style with a real world
example (LEON SPARC-V8 processor).

Actually the presentation seems to compare two entirely different
designs done using different approaches. In this comparison, the
Two-Process style appears to be only *one* of the aspects.

--
Marcus

note that "property" can also be used as syntactic sugar to reference
a property, breaking the clean design of verilog; [...]

(seen on http://www.veripool.com/verilog-mode_news.html)
 

KJ

But, the reason I was unclear to start with, is that it's been so long
since I've had an unintended latch (probably several years) that I
really don't think that hard about it at all.

The two process people generally do fall back on excuses about being
misunderstood.
 So you can think what
you want, but the *reason* I code the way I do isn't really that much
about latches at all (obviously, if I was *worried* about them, I
would code everything in sequential blocks, where they could never
happen,

Hmmm...so you prefer to take what you admit as unnecessary
chances....fair enough, that implies though that you expect some
actual benefit from that decision...but are probably unable to come up
with an actual example demonstrating that benefit.
but I could have some other hard to find logic problems, which
I *have* had in the past).  

Ahhh....one of those examples...now what sort of 'hard to find' logic
problem would you like to offer up to the group to actually
demonstrate that two processes are better than one? I'm willing to
listen, but I'll warn you that every time in the past that this
debate pops up, the two process people are unable to coherently
describe anything other than vague generalities as you've done
here...so here is your opportunity to present a clear example
describing a 'hard to find' logic problem that is easier to find when
coded with two processes. The clocked process folks (i.e. the one
process camp) have in the past presented actual examples to back their
claims, Googling for 'two process' in the groups should provide some
good cases.
However, unintended latches just don't
happen for me with my coding style, so I don't worry about it until
somebody like you comes along to tell me about all the problems that
I'm causing for myself that I never knew I had!

Whether you in particular have this problem, I don't know (apparently
not). You in particular though are
- Doing more work (two process will always be more typing and lines of
code than one process),
- Producing less maintainable code (two process will always physically
separate related things based only on whether the logic signal is a
'register' or not)

Kevin Jennings
 

KJ

http://www.gaisler.com/doc/structdes.pdf

I'd recommend casting your eye over this presentation. It details some
of the advantages of the "2 process" coding style with a real world
example (LEON SPARC-V8 processor).

OK, but it doesn't compare it to the 'one process' approach, the
comparison is to a 'traditional method'. The 'traditional method'
though is all about lumping all signals into records (refer to the
'Benefits' area of that document). All of the comparisons are between
'traditional method' which has discrete signals and 'two-process
method' which lumps signals into a record.

There is absolutely nothing comparing 'one process' and 'two
process'. Why the author chose to title his new method the
'two-process' method is totally unclear...just another example of
sloppy academic thinking: naming your method after something that is
not even relevant to the point of the paper.

The author does mention "No distinction between sequential and comb.
signals" as being a "Problem". Maybe it's a problem for the author,
but it's somewhat irrelevant for anyone skilled in design. The author
presents no reason for why having immediate knowledge of whether a
signal comes out of a flip flop or a gate is relevant...(hint: it's
not). What is relevant is the logic that the signal represents and
whether it is implemented properly or not. Whether 'proper
implementation' means the signal is a flop or not is of very little
concern (one exception being when you're generating gated
clocks...which is a different can of worms).

Even the author's 'State machine' example demonstrates the flaw of
using two processes. Referring to slide #27 partially shown below,
note that the (undeclared) variable v has no default assignment, so v
will result in a latch.

begin
  comb : process(...., r)
  begin
    case r.state is
      when first =>
        if cond0 then v.state := second; end if;
      when second =>
        ....

This to me demonstrates that the author didn't even take the time to
compile his own code...and not notice the (possibly) unintended latch
that gets generated. Maybe the author is living in an ASIC world
where latches are not a problem, who knows? But in an FPGA, a latch
most certainly is indicative of a design problem more times than not.

You seem to have been caught up by his statement "A synchronous design
can be abstracted into two separate parts; a combinational and a
sequential" and the slide titled "Abstraction of digital logic" and
thought that this was somehow relevant to the point of his
paper...it's not...his point is all about combining signals into
records...totally different discussion.

The only conclusion to draw from this paper is that you shouldn't
believe everything you read...and you shouldn't accept statements that
do not stand up to scrutiny.

Kevin Jennings
 

Chris Higgs

OK, but it doesn't compare it to the 'one process' approach, the
comparison is to a 'traditional method'.  The 'traditional method'
though is all about lumping all signals into records (refer to the
'Benefits' area of that document).  All of the comparisons are between
'traditional method' which has discrete signals and 'two-process
method' which lumps signals into a record.

I think one of the points (implicitly) made by the paper is an
admission that the two-process method is a mess unless you use
records. I think it's also implied that 'traditional method' people
are more prone to using discrete signals rather than record types.
The author does mention "No distinction between sequential and comb.
signals" as being a "Problem".  Maybe it's a problem for the author,
but it's somewhat irrelevant for anyone skilled in design.  The author
presents no reason for why having immediate knowledge of whether a
signal comes out of a flip flop or a gate is relevant...(hint:  it's
not).  What is relevant is the logic that the signal represents and
whether it is implemented properly or not.  Whether 'proper
implementation' means the signal is a flop or not is of very little
concern (one exception being when you're generating gated
clocks...which is a different can of worms).

Sometimes it's necessary to use the combinatorial signal.
Even the author's 'State machine' example demonstrates the flaw of
using two processes.  Referring to slide #27 partially shown below,
note that the (undeclared) variable v has no default assignment, so v
will result in a latch.

Yes, that's just sloppy.
You seem to have been caught up by his statement "A synchronous design
can be abstracted into two separate parts; a combinational and a
sequential" and the slide titled "Abstraction of digital logic" and
thought that this was somehow relevant to the point of his
paper...it's not...his point is all about combining signals into
records...totally different discussion.

Well combining state into records makes a "two-process" technique neat
enough to be feasible. Personally I use a similar style and I find it
very clear and understandable. As an example:

entity myentity is
  generic (
    register_output : boolean := true
  );
  port (
    clk  : in std_ulogic;
    srst : in std_ulogic;

    -- Input
    data : in some_type_t;

    -- Output
    result : out another_type_t
  );
end;

architecture rtl of myentity is

  type state_enum_t is (IDLE, OTHER_STATES);

  type state_t is record
    state  : state_enum_t;
    result : another_type_t;
  end record;

  constant idle_state : state_t := (state  => IDLE,
                                    result => invalid_result);

  signal r, rin : state_t;

begin

  combinatorial : process(r, srst, data)
    variable v : state_t;
  begin

    -- DEFAULTS
    v := r;
    v.result := invalid_result;

    -- STATE MACHINE
    case v.state is
      when IDLE =>
        null;
      when OTHER_STATES =>
        null;
    end case;

    -- RESET
    if srst = '1' then
      v := idle_state;
    end if;

    -- OUTPUTS
    if register_output then
      result <= r.result;
    else
      result <= v.result;
    end if;

    rin <= v;
  end process;

  sequential : process(clk)
  begin
    if rising_edge(clk) then
      r <= rin;
    end if;
  end process;

end;

The only conclusion to draw from this paper is that you shouldn't
believe everything you read...and you shouldn't accept statements that
do not stand up to scrutiny.

You can use only sequential processes, making it impossible to infer
a latch, but you lose the ability to use a combinatorially derived
signal. Alternatively you can use a two-process technique which allows
intermediate/derived signals to be used, but accept the risk that bad
code will introduce latches. We can argue forever about which method is
more 'correct', but it's unlikely to boil down to anything other than
personal preference.

Thanks,

Chris
 

Andy

Coding clock enables in a combinatorial process requires an additional
assignment for the clock-disable case (otherwise you get a latch,
regardless of your naming convention). Only one assignment is required
(the enabled assignment) in a clocked process, and it is the
assignment you had to make anyway.
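A rough sketch of the difference (names invented; this is the general pattern, not anyone's actual code):

```verilog
// Clocked process: one assignment suffices; when en is low,
// no assignment executes and q simply holds its value.
always @(posedge clk)
    if (en)
        q <= d;

// Combinatorial process: the disable case needs an explicit
// hold assignment, or next_q becomes an unintended latch.
always @* begin
    next_q = q;      // the extra assignment for the disable case
    if (en)
        next_q = d;
end

always @(posedge clk)
    q <= next_q;
```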

KJ has already well stated the problems with latches (requiring
additional assignments) in combinatorial processes/blocks, regardless
of the naming convention employed.

Any decent simulator (maybe not a half-decent one) will merge
processes or always blocks that share the same sensitivity list. Since
they are usually identical for all synchronous processes clocked by
the same clock, they get merged, thus improving performance by
avoiding duplicative process-related overhead. Since combinatorial
processes rarely share the same sensitivity list, they don't get
merged, and performance suffers.

Andy
 

Patrick Maupin

Coding clock enables in a combinatorial process requires an additional
assignment for the clock-disable case (otherwise you get a latch,
regardless of your naming convention). Only one assignment is required
(the enabled assignment) in a clocked process, and it is the
assignment you had to make anyway.

Oh, I see the point you're trying to make. Two points I tried (and
obviously failed) to make are that (1) I don't mind extra typing,
because it's really all about readability (obviously, from the
discussion, my opinion of what is readable may differ from others);
and (2) With the canonical two process technique, the sequential
process becomes boilerplate (even to the point of being able to be
generated by a script or editor macro, in most cases) that just
assigns a bunch of 'x <= next_x' statements. The top of the
combinatorial process becomes boilerplate as well, with corresponding
'next_x = x' statements (for some variables, it could be other things,
e.g. 'next_x = 0'). But you can just glance at those and not think
about it. So, when reading, you aren't really looking at that, or the
register declarations.

Once you accept that the sequential block, and the top of the
combinatorial block, are both boilerplate that you don't even need to
look at, then it's no more work than anything else. (In fact, if you
can type faster than 20 wpm and/or know how to write scripts or editor
macros, it's less work overall.)
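Concretely, the boilerplate described above looks roughly like this (register names invented for illustration):

```verilog
// Top of the combinatorial process: 'next_x = x' boilerplate,
// followed by the real logic, which may override the defaults.
always @* begin
    next_state = state;
    next_count = count;

    if (start) begin
        next_state = RUNNING;
        next_count = 8'd0;
    end
end

// Sequential process: pure 'x <= next_x' boilerplate that a
// script or editor macro could generate.
always @(posedge clk) begin
    state <= next_state;
    count <= next_count;
end
```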
KJ has already well stated the problems with latches (requiring
additional assignments) in combinatorial processes/blocks, regardless
of the naming convention employed.

I understand the issue with latches. I just never see them. The
coding style makes it easy to check and avoid them. It can even be
completely automatic if you have a script write your boilerplate.
Any decent simulator (maybe not a half-decent one) will merge
processes or always blocks that share the same sensitivity list. Since
they are usually identical for all synchronous processes clocked by
the same clock, they get merged, thus improving performance by
avoiding duplicative process-related overhead. Since combinatorial
processes rarely share the same sensitivity list, they don't get
merged, and performance suffers.

I'm pretty sure that verilator is smart enough to figure all this
out. That's the simulator I use if I care about execution time.

Regards,
Pat
 

Patrick Maupin

The two process people generally do fall back on excuses about being
misunderstood.

Well, I'm not going to generalize about "one process people" but at
least some of them are supercilious bastards who think that anybody
who doesn't do things their way is an idiot. BTW, this is the last
post I'm going to reply to you on, so feel free to have fun with more
piling on.
Hmmm...so you prefer to take what you admit as unnecessary
chances....fair enough, that implies though that you expect some
actual benefit from that decision...but are probably unable to come up
with an actual example demonstrating that benefit.

Well, I haven't used the single process style in many years, so no, I
can't point you directly to the issues I had that led me to switch.
But I have helped others to switch over the years, and they have all
been grateful. In any case, I posted elsewhere in this thread a
pointer to Cliff Cumming's paper on blocking vs non-blocking
assignments. I assume you've been studiously avoiding that for
plausible deniability, so here it is: http://www.sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf

If you read this paper carefully, you will come to an understanding
that, while it is often possible to follow it using a single process
method, in many cases, in order to follow it, you will have to assign
related variables from *different* sequential processes (one blocking
sequential process, one non-blocking sequential process). With the
sequential/combinatorial process method, the boilerplate sequential
process is easy to write and requires little thought, and it lets you
do all the real work of coding on related variables within the same
combinatorial process. You can concentrate all your hard thinking on
the problem at hand, in the non-boilerplate code of the combinatorial
process.
Ahhh....one of those examples...now what sort of 'hard to find' logic
problem would you like to offer up to the group to actually
demonstrate that two processes are better than one?  I'm willing to
listen, but I'll warn you that every time in the past that this
debate pops up, the two process people are unable to coherently
describe anything other than vague generalities as you've done
here...so here is your opportunity to present a clear example
describing a 'hard to find' logic problem that is easier to find when
coded with two processes.  The clocked process folks (i.e. the one
process camp) have in the past presented actual examples to back their
claims, Googling for 'two process' in the groups should provide some
good cases.

That's because you're *not* really willing to listen. If you were,
you would have heard, from me, anyway, loud and clear, that it's not
really about the *language constructs*, it's about how much people can
hold in their heads at a single time. The two process method reduces
the need for that, so even if I presented an example where it helped
me in my thinking, you would just superciliously explain how any idiot
should have seen the error in the one process method, so it doesn't
prove anything. Since you're the smartest asshole in the world, the
two process method couldn't possibly offer any benefits you would be
interested in, so just forget about it.
- Producing less maintainable code (two process will always physically
separate related things based only on whether the logic signal is a
'register' or not)

See, this is another place where you haven't listened. What don't you
understand about 'boilerplate'? It's a tiny bit of overhead, not
really part of what you need to worry about in maintenance. It is
easily checked and even automated.

Regards,
Pat
 

Andy

You seem to put a lot of stock in the effortlessness of boilerplate,
yet you prefer a language that is said to reduce the need for
boilerplate.

OK, so you mention that you could write a script to automate all of
that, but to work, it would depend on a specific non-standard, non-
enforceable naming convention. Not to mention this script has yet to
be written and offered to the public, for free or fee. Which means
each of those who would follow your advice must write, test, run and
maintain their own scripts (or maybe even sell it to the rest of us,
if they felt there was a market).

Alas, we have no such scripts. So that would put most users back at
typing out all that boilerplate. Once it is typed, there is no
compiler to check it for you (unlike much of the boilerplate often
attributed to vhdl).

You recommend all of this extra work, rather than writing just one
process, with none of your boilerplate (no additional process, no
additional declarations, no additional assignments, no chances for
latches, no simulation-slowing effects).

What's really silly is how the two-process code model even got
started. The original synthesis tools could not infer registers, so
you had to instantiate them separately from your combinatorial code.
Once the tools progressed, and could infer registers, the least impact
to the existing coding style (and tools) was simply to replace the
code that instantiated registers with code that inferred them, still
separating that code from the logic code.

Finally, someone (God bless them!) figured out how to do both logic
and registers from one process with less code, boilerplate or not.

For all your staunch support of this archaic coding style, we still
have not seen any examples of why a single process style did not work
for you. Instead of telling me why the boilerplate's not as bad as I
think it is, tell me why it is better than no boilerplate in the first
place.

Andy
 

Jonathan Bromley

In any case, I posted elsewhere in this thread a
pointer to Cliff Cummings' paper on blocking vs non-blocking
assignments. I assume you've been studiously avoiding that for
plausible deniability

Very droll.

If you have had even half an eye on comp.lang.verilog
these past few years you will have seen a number of
posts clearly pointing out the very serious flaws in
Cliff's otherwise rather useful paper. In particular,
the "guideline" (=myth) about not using blocking
assignments in a clocked always block was long
ago exposed for the nonsense it is.

Cliff is quite rightly highly respected in the industry,
but he's no more infallible than the rest of us and
he made a serious mistake (which, to his great
discredit, he has never chosen to retract) on
that issue.

You have the freedom to choose your coding
style, as we all do. You do yourself no favours
by citing flawed "authority" as discrediting
a style that you dislike.
 

Patrick Maupin

If you have had even half an eye on comp.lang.verilog
these past few years you will have seen a number of
posts clearly pointing out the very serious flaws in
Cliff's otherwise rather useful paper.  In particular,
the "guideline" (=myth) about not using blocking
assignments in a clocked always block was long
ago exposed for the nonsense it is.

Well, I just did a search, and found some mild disagreements, but
nothing that I would consider a "debunking." Perhaps you could point
me to such?

FWIW, I independently learned several of the lessons in Cliff's paper,
so I find it handy to explain these lessons to others. So I really
would be interested in a valid counterpoint, but I honestly didn't see
it.

Regards,
Pat
 

Patrick Maupin

You seem to put a lot of stock in the effortlessness of boilerplate,
yet you prefer a language that is said to reduce the need for
boilerplate.

Not all boilerplate is created equal. In particular, some boilerplate
is, not only easy to glance at and understand, but also, and more
important, easy to code from first principles without knowing arcane
corners of the language you are coding in.
OK, so you mention that you could write a script to automate all of
that, but to work, it would depend on a specific non-standard, non-
enforceable naming convention. Not to mention this script has yet to
be written and offered to the public, for free or fee. Which means
each of those who would follow your advice must write, test, run and
maintain their own scripts (or maybe even sell it to the rest of us,
if they felt there was a market).

That's a good point. I have some languishing tools for this (because
the boilerplate is never quite bad enough to work on the tools some
more) that I should clean up and publish.
Alas, we have no such scripts. So that would put most users back at
typing out all that boilerplate. Once it is typed, there is no
compiler to check it for you (unlike much of the boilerplate often
attributed to vhdl).

Well, actually, the stock verilog tools do a pretty darn good job
these days.
What's really silly is how the two-process code model even got
started. The original synthesis tools could not infer registers, so
you had to instantiate them separately from your combinatorial code.
Once the tools progressed, and could infer registers, the least impact
to the existing coding style (and tools) was simply to replace the
code that instantiated registers with code that inferred them, still
separating that code from the logic code.

That may well be. Nonetheless, many people found the two process
method better, even before 'always @*' or the new systemverilog
'always_comb', to the point where they maintained ungodly long
sensitivity lists. Are you suggesting that none of those people were
reflective enough to try to figure out if that was the best way (for
them) to code?
Finally, someone (God bless them!) figured out how to do both logic
and registers from one process with less code, boilerplate or not.

Yes, and the most significant downside to this is that the access to
the 'before clock' and 'after clock' versions of the same signal is
implicit, and in fact, in some cases (e.g. if you use blocking
assignments) you have access to more than two different signals within
a process, all with the same name. There is no question that in many
cases this is not an issue and the one process model will work fine.
But I think most who do serious coding with the 'one process' model
will, at least occasionally, wind up having two processes (either a
separate combinatorial process, or two interrelated sequential
processes) to cope with not having an explicit delineation of 'before
clock' and 'after clock'.
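A quick sketch of that implicit aliasing (invented names): with blocking assignments in a clocked block, the same identifier can denote different values at different points in the block.

```verilog
always @(posedge clk) begin
    a = a + 1;   // the right-hand 'a' is the pre-clock value...
    b <= a;      // ...but this 'a' is already the updated one
end

// The two-process style gives the two values distinct, explicit
// names: 'a' (registered) and 'next_a' (combinatorial).
```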

At the end of the day, it is certainly desirable to have something
that looks more like the 'one process' model, but that gives explicit
access to 'previous state' and 'next state', so that complicated
combinatorial logic with interrelated variables can always be
expressed inside the same process without resorting to weird code
ordering that is done just to make sure that the synthesizer and
simulator will create the structures you want.
For all your staunch support of this archaic coding style, we still
have not seen any examples of why a single process style did not work
for you. Instead of telling me why the boilerplate's not as bad as I
think it is, tell me why it is better than no boilerplate in the first
place.

A paper I have mentioned in other posts,
http://www.sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf gives
some good examples why the general rule of never having blocking
assignments in sequential blocks is a good practice. I have seen some
dismiss this paper here, but haven't seen a technical analysis about
why it's wrong. The paper itself speaks to my own prior experience,
and I also feel that related variables should be processed in the same
block. When I put this preference together with the guidelines from
the paper, it turns out that a reliable, general way to achieve good
results without thinking about it too hard is to use the two process
model.

But, if you can tell me that you *always* manage to put *all* related
variables in the same sequential block, and *never* get confused about
it (and never confuse any co-workers), then, as with KJ and Bromley
and some others, you have no reason to consider the two process
model. OTOH, if you sometimes get confused, or have easily confused
co-workers, and/or find yourself using multiple sequential processes
to model variables where multiple processes have references to
variables in other processes, then you might want to consider whether
slicing related functionality into processes in this fashion is really
better than slicing the processes in a manner where you keep all the
related functional variables together in a single combinatorial
process, and simply extract out the registers into a very-well
understood model.

At the end of the day, I am willing to concede that the two process
model is, at least partly, a mental crutch. Am I a mental cripple?
In some respects, almost certainly. But on the off-chance that I am
not the only one, I tolerate a certain amount of abuse here in order
to explain to others who may also be easily confused that there are
other coding styles than the single process model.

I will also concede that the single process model can be beefed up
with things like tasks or functions (similar to what Mike Treseler has
done) to overcome some of the shortcomings. However, personally, I
don't really find that to be any better than putting combinatorial
stuff in a separate process.

Regards,
Pat
 

Patrick Maupin

On Apr 23, 2:09 pm, Jonathan Bromley
You have the freedom to choose your coding
style, as we all do.  You do yourself no favours
by citing flawed "authority" as discrediting
a style that you dislike.

BTW, I wasn't trying to cite "authority." I was trying to cite a
paper which I've actually read and have no current reason to disagree
with. You claim it's been debunked -- care to present a technical
citation for such a claim?

Regards,
Pat
 

Patrick Maupin

If you have had even half an eye on comp.lang.verilog
these past few years you will have seen a number of
posts clearly pointing out the very serious flaws in
Cliff's otherwise rather useful paper.  In particular,
the "guideline" (=myth) about not using blocking
assignments in a clocked always block was long
ago exposed for the nonsense it is.

One last note: In researching this, I found a posting by you with
rules and recommendations that I cannot disagree with:
http://groups.google.com/group/comp.lang.verilog/msg/a87ba28b6d68ecc8

I will note that, if faithfully followed, the two process model can
make it very easy to ensure that none of these rules or guidelines are
broken. Finally, as I have posted elsewhere, these rules combined
with my personal preference to always update related variables inside
the same always block, sometimes make it difficult to *not* use the
two process model.

Regards,
Pat
 

KJ

Well, I'm not going to generalize about "one process people" but at
least some of them are supercilious bastards who think that anybody
who doesn't do things their way is an idiot.

Name calling now...sigh...
Well, I haven't used the single process style in many years, so no, I
can't point you directly to the issues I had that led me to switch.

The implication in your earlier post was that you could...perhaps in
the future you should consider not stating things that you can't
actually back up, since at least in this sub-thread that inability has
led you to poor word choices.
But I have helped others to switch over the years, and they have all
been grateful.
OK

In any case, I posted elsewhere in this thread a
pointer to Cliff Cummings' paper on blocking vs non-blocking
assignments. I assume you've been studiously avoiding that for
plausible deniability, so here it is: http://www.sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf

Assuming something about other people is almost always a mistake.
I've read Cummings' post before, but since I'm not a Verilog guy and
the issues he covers are very language specific, it wasn't relevant to
me in VHDL.

You can concentrate all your hard thinking on
the problem at hand, in the non-boilerplate code in the combinatorial
process.

VHDL using clocked processes and concurrent assignments avoids all
boilerplate that is not checked by the compiler. There is no
boilerplate executable design-specific code at all.
That's because you're *not* really willing to listen.

Because you didn't say anything relevant to the point of presenting an
example.
If you were,
you would have heard, from me, anyway, loud and clear, that it's not
really about the *language constructs*, it's about how much people can
hold in their heads at a single time.

I agree and will add to that it's also about how much can fit on a
screen so that it can be digested and kept in one's head...but I'll
also add again that your response here in no way is directed to what I
had asked which was "now what sort of 'hard to find' logic problem
would you like to offer up...". Perhaps you should consider trying to
make your replies a bit more on topic rather than going on a tangent.
The two process method reduces
the need for that, so even if I presented an example where it helped
me in my thinking, you would just superciliously explain how any idiot
should have seen the error in the one process method, so it doesn't
prove anything.

Again you wrongly assume a particular reaction from me and then use
that incorrect assumption to provide yourself some personal
justification for why you do not present anything to back up your
claims...I'll leave it at that.
Since you're the smartest asshole in the world, the
two process method couldn't possibly offer any benefits you would be
interested in, so just forget about it.

More name calling, and explicitly at me this time...you're
distinguishing yourself in a rather unflattering manner.
See, this is another place where you haven't listened. What don't you
understand about 'boilerplate'?

Apparently you've got your responses mixed up, because nowhere in our
little back and forth did we get into 'boilerplate' until you brought
it up here. You and Andy were bantering about boilerplate, not you and
I...so, now who's not listening??
It's a tiny bit of overhead, not
really part of what you need to worry about in maintenance. It is
easily checked and even automated.

Also completely avoidable (at least in VHDL).
BTW, this is the last
post I'm going to reply to you on

OK, probably for the best. I won't wait for the apology for the name
calling then.
so feel free to have fun with more
piling on.

Asking for an example is not a demand for proof, and certainly can't
be considered 'piling on'.

Kevin Jennings
 
P

Patrick Maupin

Assuming something about other people is almost always a mistake.
I've read Cummings' paper before, but since I'm not a Verilog guy and
the issues he covers are very language-specific, it wasn't relevant to
me in VHDL.

Well, I will apologize about one thing. In some of my earlier posts,
I restricted my replies to not include comp.lang.vhdl, but I didn't on
this sub-thread, so this "leaked" into that newsgroup without me
paying adequate attention. So I apologize for assuming that you knew
verilog and that you had seen my other posts that did not make it into
comp.lang.vhdl.

Verilog was the framing point for the one process/two process
discussions I was having. The Cummings paper adequately describes
several of the bad things that can happen in Verilog if you aren't
paying close attention (that the two process model can help
alleviate), so that is why I did not feel compelled to provide a
similar example.

As far as the rest of it goes, you had the chance in multiple posts to
step back and say to yourself that you must be misunderstanding me,
but in all cases you chose to assume I was a complete idiot. This is
certainly a distinct possibility (perhaps even a probability), but the
assumption of it starting out did not lead to a fruitful discussion,
and, in fact, it was only in your reply to my name-calling that you
have given me the chance to see how we are talking past each other.

Regards,
Pat
 
P

Patrick Maupin

I think you're being overly harsh about this issue. The paper in
question is almost 10 years old, and synthesis tools have come a long
way since then. Also, one has to take the guideline within the context
in which it was defined.

FWIW, the latest rev of the paper (1.3) was September of last year.

Regards,
Pat
 
M

Martin Thompson

Chris Higgs said:
You can only use sequential processes and make it impossible to infer
a latch but lose the ability to use a combinatorially derived signal.
Alternatively you can use a two-process technique which allows
intermediate/derived signals to be used but accept the risk that bad
code will introduce latches.

Or you can use a single sequential process with variables to infer
both combinatorial logic and flip-flops, which both avoids latches and
allows the ability to use a combinatorially derived "signal" (in the
non-VHDL sense of the word).
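A minimal sketch of this single-process style (signal and variable names are illustrative):

```vhdl
-- Inside a clocked process no latch can be inferred, and the variable
-- plays the role of the "combinatorially derived signal": it is
-- computed and consumed in the same clock edge, so the tool infers
-- combinational logic feeding the flip-flops.
process(clk)
  variable sum : unsigned(7 downto 0);
begin
  if rising_edge(clk) then
    sum := a + b;            -- combinational: usable immediately below
    if sum > THRESHOLD then
      flag <= '1';           -- registered output derived from sum
    else
      flag <= '0';
    end if;
    result <= sum;           -- sum itself registered here
  end if;
end process;
```

Because `sum` is a variable, its assignment takes effect immediately within the process, unlike a signal assignment, which would not be visible until the next delta cycle.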

Cheers,
Martin
 