An implementation of a clean reset signal

Eli Bendersky

Hello all,

Following some discussion in these groups and reading a couple of
articles (especially the first one that turns up when you google for
"asynchronous synchronous reset design techniques"), it seems that the
best and the safest way to use resets in FPGA designs is to use a
special synchronizer circuit that assures that the reset of the FFs is
asserted asynchronously, but is deasserted synchronously.

The circuit is something like (kindly borrowed from the aforementioned
article):

~~~~~~~~

entity reset_synchronizer is
    port
    (
        clk, async_reset_n : in  std_logic;
        sync_reset_n       : out std_logic
    );
end reset_synchronizer;

architecture rtl of reset_synchronizer is
    signal rff1 : std_logic;
begin
    process (clk, async_reset_n)
    begin
        if async_reset_n = '0' then
            -- assert reset immediately, without waiting for a clock edge
            rff1         <= '0';
            sync_reset_n <= '0';
        elsif rising_edge(clk) then
            -- deassert reset synchronously, two clock edges after
            -- async_reset_n is released
            rff1         <= '1';
            sync_reset_n <= rff1;
        end if;
    end process;
end rtl;

~~~~~~~~

When the asynchronous reset line enters this module, the synchronous
output can then be used as the _asynchronous_ reset of all FFs in the
design, as follows:

process (clk, sync_reset_n)
begin
    if sync_reset_n = '0' then
        ff <= '0';
    elsif rising_edge(clk) then
        ff ....    -- normal synchronous logic here
    end if;
end process;
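
For clarity, hooking the synchronizer up at the top level would look
something like this (just a sketch; the instance name and ext_reset_n
are placeholders for whatever drives the raw reset in a real design):

u_reset_sync : entity work.reset_synchronizer
    port map
    (
        clk           => clk,
        async_reset_n => ext_reset_n,  -- raw asynchronous reset (pin, POR circuit, etc.)
        sync_reset_n  => sync_reset_n  -- fans out to the async reset of every FF
    );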

This seems to bring the best of all worlds - all the FFs in the design
enter reset asynchronously, without being dependent on the clock, and
exit reset synchronously. The synthesis tools can analyze the path
delays for sync_reset_n and report the times and skews.

Moreover, from a talk I had with Altera support engineers - they claim
this is the recommended way to code resets in designs, and that
Altera's tools will know to route sync_reset_n via fast global
interconnects to minimize time delay and skews. And in any case,
there's a report to see for each compilation.

What does this technique lack to be the perfect solution for resets in
FPGA designs? It seems that it evades all the common disadvantages of
conventional sync and async resets.

Thanks in advance
Eli
 
KJ

Eli Bendersky said:
Hello all,


This seems to bring the best of all worlds - all the FFs in the design
enter reset asynchronously, without being dependent on the clock, and
exit reset synchronously. The synthesis tools can analyze the path
delays for sync_reset_n and report the times and skews.

Moreover, from a talk I had with Altera support engineers - they claim
this is the recommended way to code resets in designs, and that
Altera's tools will know to route sync_reset_n via fast global
interconnects to minimize time delay and skews. And in any case,
there's a report to see for each compilation.

What does this technique lack to be the perfect solution for resets in
FPGA designs? It seems that it evades all the common disadvantages of
conventional sync and async resets.

It lacks nothing. In fact any design that does NOT generate a synchronously
timed trailing edge reset is 'lacking' and will eventually fail because at
some point that trailing edge of reset will come along at a time relative to
the clock that violates setup/hold timing requirements and cause a flip flop
or two to go to the wrong state. This is all assuming that the clock is
free running (or at least running at the trailing edge of reset).

The next question to ponder is, given that the reset must be synchronized
anyway, why use an asynchronous reset anywhere in your design (with the
exception of course of the above mentioned synchronizer)? People will
pontificate on the subject but have yet to produce any coherent reason to
prefer async or sync. Or they will claim some logic resource advantage that
when you test the claim is found to be baseless (in almost every case, the
logic and performance is darn near identical).

KJ
 
Benjamin Todd

Eli,

I use the technique you describe for system reset; I have a
"reset_controller" block I paste into all my designs... One case where I
have clear results showing that this implementation is more efficient
than a completely synchronous reset is in some older CPLDs (XC9500).

I assume that in these devices the implementation of an asynch reset (albeit
one that has been synchronised) comes almost for free, whereas a synchronous
one always appears to take more to be fitted.
Or they will claim some logic resource advantage that when you test the
claim is found to be baseless (in almost every case, the logic and
performance is darn near identical).

I am one of "them"... Haha

--fully asynchronous reset 16-bit down-counter mapped in an xc9536

Macrocells Used   Pterms Used    Registers Used   Pins Used     Function Block Inputs Used
16/36 (45%)       15/180 (9%)    16/36 (45%)      18/34 (53%)   23/72 (32%)

--synchronous reset same circuit as before

Macrocells Used   Pterms Used    Registers Used   Pins Used     Function Block Inputs Used
16/36 (45%)       31/180 (18%)   16/36 (45%)      18/34 (53%)   26/72 (37%)

I agree the macrocells & registers are the same, but obviously the
routing and functions required to implement the synchronous reset are
not easily achieved in XC9500s.

Ben
 
KJ

Benjamin said:
Eli,


I assume that in these devices the implementation of an asynch reset (albeit
one that has been synchronised) comes almost for free, whereas a synchronous
one always appears to take more to be fitted.
Careful when you say 'always', because you'll be wrong. A single test
case does not imply 'always'. Even I said "almost every case".
I am one of "them"... Haha
Only under some conditions, not all. Granted, the condition might be
continued use of a particular part, or family of parts or even the
software that you use. Try targeting an FPGA or use different front-end
synthesis software and see what the differences are.
--fully asynchronous reset 16-bit down-counter mapped in an xc9536

Macrocells Used   Pterms Used    Registers Used   Pins Used     Function Block Inputs Used
16/36 (45%)       15/180 (9%)    16/36 (45%)      18/34 (53%)   23/72 (32%)

--synchronous reset same circuit as before

Macrocells Used   Pterms Used    Registers Used   Pins Used     Function Block Inputs Used
16/36 (45%)       31/180 (18%)   16/36 (45%)      18/34 (53%)   26/72 (37%)

I agree the macrocells & registers are the same, but obviously the
routing and functions required to implement the synchronous reset are
not easily achieved in XC9500s.
You didn't post clock cycle times (i.e. performance numbers), which I'm
guessing are identical or darn near.

The reason for my "almost every case" disclaimer did have to do mainly
with CPLDs, since the AND/OR array physical implementation of those
devices is completely different from the sea of LUTs inside an FPGA.
Since this is the 'fpga' newsgroup I didn't really mention that, but I
should have. In an FPGA there is typically (but not always) a way for
the fitter to use an otherwise unused address input to the LUT in order
to implement the reset functionality. It's a bit harder (but not at all
impossible) to come across the case where the synchronous reset will
require an extra LUT and delay...but it also does not come up very often
and I've yet to see it come up in the critical timing path (but it
might).

The thing is, though, that depending on the device the async reset path
might lead to timing issues that the sync reset wouldn't, so the 'best'
advice is probably that your mileage will vary: each has advantages and
disadvantages in different scenarios...and that proponents of each
method have a point (unlike those "two process state machine" guys ;)

KJ
 
Mike Treseler

Eli said:
What does this technique lack to be the perfect solution for resets in
FPGA designs?

This method is simple and works well.
Nothing is perfect, but this is an excellent default method.
The optimum design depends on how the external
reset pulse is generated.

For an FPGA, I don't want to do the reset procedure
until the logic image is downloaded and the device
is active. If a CPU is handling the download AND
if the CPU and FPGA use the same clock, a CPU
port could deliver a synchronized reset pulse
to the FPGA with no other logic required.

Thanks for your well-researched posting.

-- Mike Treseler
 
jens

It lacks nothing. In fact any design that does NOT generate a synchronously
timed trailing edge reset is 'lacking' and will eventually fail because at
some point that trailing edge of reset will come along at a time relative to
the clock that violates setup/hold timing requirements and cause a flip flop
or two to go to the wrong state. This is all assuming that the clock is
free running (or at least running at the trailing edge of reset).

I can think of a couple of exceptions where a completely asynchronous
reset won't cause any problems:

1. A free-running counter that has no unrecoverable illegal states
(i.e. a bad state that doesn't eventually turn into a good state) and
no requirement to be immediately operational after reset.

2. An FPGA that is waiting for an external event (like a discrete input
or CPU register write) to start doing something, and the external event
happens after the reset.

I can't think of an FPGA design that I've worked on where one of those
options didn't apply. There's also a bit of an assumption with #1 when
using an HDL: that the compiler doesn't do anything funny, like
recognizing that a simple up or down counter in the code can be replaced
by an LFSR, in which case there can be illegal states.

So for designs that meet those options, there's no need to panic and do
a massive product recall. Yet for current & future designs,
synchronizing the trailing edge of a reset signal is a great idea.
 
mk

The next question to ponder is, given that the reset must be synchronized
anyway, why use an asynchronous reset anywhere in your design (with the
exception of course of the above mentioned synchronizer)? People will
pontificate on the subject but have yet to produce any coherent reason to
prefer async or sync. Or they will claim some logic resource advantage that
when you test the claim is found to be baseless (in almost every case, the
logic and performance is darn near identical).

How about this: In an FPGA all the flops are already async (unlike
ASICs) so you're getting hit by the potential slow setup, clk->q issue
already so you might as well take advantage of it because going to
sync reset will need a mux at the input. In an ASIC, depending on the
library, adding the reset condition mux at the input of non-reset
flops may give you an advantage or not.
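
To make the comparison concrete, the two coding styles being discussed
are roughly the following (a generic sketch; d, q and rst_n are just
placeholder names):

-- asynchronous reset: uses the flop's dedicated async clear input
process (clk, rst_n)
begin
    if rst_n = '0' then
        q <= '0';
    elsif rising_edge(clk) then
        q <= d;
    end if;
end process;

-- synchronous reset: the reset condition becomes part of the D-input
-- logic (i.e. a mux in front of the flop if there is no dedicated
-- synchronous reset pin)
process (clk)
begin
    if rising_edge(clk) then
        if rst_n = '0' then
            q <= '0';
        else
            q <= d;
        end if;
    end if;
end process;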
 
Andrew FPGA

The next question to ponder is, given that the reset must be synchronized
anyway, why use an asynchronous reset anywhere in your design (with the
exception of course of the above mentioned synchronizer)?

A purely synchronous reset requires the clock to be running for the
reset to take effect. The async initiated reset can still initiate the
reset even though the clock may have stopped (for whatever reason).

The Xilinx primitives library shows async and sync reset flops - I was
under the impression there is no performance difference between the two
for current generation Xilinx FPGA families?

Regards
Andrew
 
Eli Bendersky

Andrew said:
A purely synchronous reset requires the clock to be running for the
reset to take effect. The async initiated reset can still initiate the
reset even though the clock may have stopped (for whatever reason).

This is an important point, which the synchronizer circuit I presented
addresses. Since the reset is asserted asynchronously (take a look at
the code), all flops will get into reset even if the clock isn't
working.
 
Thomas Stanka

Hi,

Eli said:
best and the safest way to use resets in FPGA designs is to use a
special synchronizer circuit that assures that the reset of the FFs is
asserted asynchronously, but is deasserted synchronously. [..]
What does this technique lack to be the perfect solution for resets in
FPGA designs? It seems that it evades all the common disadvantages of
conventional sync and async resets.

I use this style too, but it has a few (minor) disadvantages:
- It needs additional logic.
- You need good buffering, as your signal has to be distributed to
every FF in the design within one clock cycle.
- It requires the ability to drive the async reset from a FF. This is
normally no problem, but if you use a dedicated clock routing to have a
fast distributed reset, you _might_ encounter a problem on exotic FPGAs.
- You have some work to do if your design uses multiple clock domains.
- You have to think about scan chains (no problem for pure FPGA designs,
but a bit harder for designs targeting both ASIC and FPGA).
- The design needs at least one additional clock cycle to recover from
reset.

bye Thomas
 
Ben Jones

How about this: In an FPGA all the flops are already async (unlike
ASICs) so you're getting hit by the potential slow setup, clk->q issue
already so you might as well take advantage of it because going to
sync reset will need a mux at the input.

This may have been true in the early '90s, but it's not true today. Any
modern FPGA will have a dedicated set/reset input on every storage element
that is configurable between synchronous and asynchronous operation. There
is no size or speed penalty for using synchronous reset semantics in such a
device.

There are a very few believable arguments for using asynchronous resets on
certain I/O elements, depending on one's design and board-level
requirements. I have never heard a credible excuse for using an async reset
for any internal logic. I'm racking my brains to come up with some
clock-domain-crossing circuit that needs one, but I still think the correct
design practice in that case is to re-synchronize the reset into each clock
domain separately...
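
Roughly what I have in mind, sketched for one domain (the names are
invented; repeat per clock domain, and note this produces a purely
synchronous reset for that domain):

process (clk_a)
begin
    if rising_edge(clk_a) then
        rst_a_meta <= reset_request;  -- reset_request: incoming reset, possibly async
        rst_a      <= rst_a_meta;     -- rst_a: synchronous reset used only in domain A
    end if;
end process;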

Cheers,

-Ben-
 
Martin Thompson

Ben Jones said:
I have never heard a credible excuse for using an async reset
for any internal logic. I'm racking my brains to come up with some
clock-domain-crossing circuit that needs one

Hi Ben,

Does the flancter count? XCELL37, which doesn't seem to be on the
Xilinx site anymore... and Google isn't helping me find the original
place I saw it :-(

Anyway, it's the only thing in my latest design that has one...

Cheers,
Martin
 
KJ

jens said:
I can think of a couple of exceptions where a completely asynchronous
reset won't cause any problems:

1. A free-running counter that has no unrecoverable illegal states
(i.e. a bad state that doesn't eventually turn into a good state) and
no requirement to be immediately operational after reset.
That's a pretty good description of something that does not require a
reset, whether synchronous or asynchronous. My implicit assumption is
that there is a functional requirement for a reset behaviour; bringing
reset (or any other signal) into something that doesn't have a
functional requirement for that signal isn't a good practice either.
2. An FPGA that is waiting for an external event (like a discrete input
or CPU register write) to start doing something, and the external event
happens after the reset.
I guess I'm assuming here that this is not really another case of #1 where
reset is not really needed and that there is some implied handshaking or
feedback path of some sort between the CPU and your design. If we're
talking about resetting some registers that the CPU can write to but the CPU
code doesn't actually take advantage of that behaviour then I think we're
back to #1...i.e. reset is not really needed for that writable CPU port and,
upon reset, that port should really do nothing so again that port is
independent of reset...sync or async reset is then a moot point.

I'd also point out though, that if that writable CPU port is not spec'ed to
say that it should do something upon reset then one should not reset it. If
it is spec'ed to do something, then if the async reset is not properly
handled that port will eventually fail to reset properly. Whether or
not that failure shows up as an observable symptom of the system though is
dependent on the rest of the system. For example, if the guy who wrote the
code for the CPU took the defensive approach and did not rely on the spec'ed
reset behaviour of that port but instead explicitly wrote to it himself then
the failure would be masked.

Assuming though that the two do form a feedback loop then....

In isolation you can get away with this as long as the guy who wrote
the code for that 'external event that happens after the reset' doesn't
do the same thing.

Generally speaking, if the two things (i.e. your design and the
external thing) form any sort of handshake protocol, and therefore a
feedback loop, then each one must either...
1. Properly handle async inputs like reset.
2. Depend on an implicit assumption that the 'other guy' will properly
handle async inputs like reset so you don't have to.
3. Eventually fail if given enough time.

Depending on the other guy's design to help make your design work is what is
generally considered a 'latent bug'. While your design works in the
environment it was originally intended for, moving it to a different
environment (i.e. one where the other guy didn't handle things properly
and therefore also has a latent bug on the interface to your design)
could cause problems down the road for that design.

Based on that I'll retract somewhat my statement about "will eventually
fail" since in certain applications where the 'other guys' reset behaviour
is assumed and that assumption happens to hold then that overall design will
not fail.
I can't think of an FPGA design that I've worked on where one of those
options didn't apply. There's also a bit of an assumption with #1 when
using an HDL: that the compiler doesn't do anything funny, like
recognizing that a simple up or down counter in the code can be replaced
by an LFSR, in which case there can be illegal states.
Interesting. Certainly #1 applies to certain areas...but as I stated those
are actually areas that are independent of reset so I think the discussion
is moot. As for #2, taking advantage of this opens up your design to
assumptions that make your design more reliant on some outside design than
it really should be.
So for designs that meet those options, there's no need to panic and do
a massive product recall.
Agreed, didn't mean to imply that it should lead to recall either.
Depending on what the actual application is though it might lead to proving
that instances of #2 are truly 'latent' and, by virtue of how it is
being currently used in this specific application, cannot happen.
Yet for current & future designs,
synchronizing the trailing edge of a reset signal is a great idea.
Yep.

KJ
 
KJ

Andrew FPGA said:
A purely synchronous reset requires the clock to be running for the
reset to take effect. The async initiated reset can still initiate the
reset even though the clock may have stopped (for whatever reason).
Not quite. If you can't count on the clock running then I would have
the first flop of the synchronizer be asynchronously resettable to
capture that event...but that's about it. Well actually, since you're
dependent on the startup behaviour of the clock itself, there might be
a small shift chain of async resettable flops, just so that if the clock
startup is squirrelly you can guarantee that the rest of the design gets
at least a one-clock-cycle reset. That way, once the clock does start
up, the rest of the design would get reset properly.
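
Something along these lines, as a variation on the synchronizer already
posted (just a sketch; the depth of the chain is arbitrary):

architecture rtl of reset_synchronizer is
    signal sr : std_logic_vector(2 downto 0);
begin
    process (clk, async_reset_n)
    begin
        if async_reset_n = '0' then
            sr <= (others => '0');       -- capture the reset even with no clock present
        elsif rising_edge(clk) then
            sr <= sr(1 downto 0) & '1';  -- deassert only after a few clean clock edges
        end if;
    end process;
    sync_reset_n <= sr(2);
end rtl;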

The clock not running is the common rebuttal that keeps coming up but if the
clock isn't running then just what behaviour DO you expect out of that part?
If you take a step back and realize that you probably shouldn't expect
anything useful out of a part that is not receiving the proper inputs yet
(perhaps by design; after all, it could be a power saving measure),
then outputs that do not actually reset themselves until the clock does
start up are not really an issue.

There probably are applications where the output of a flip-flop in a
part really does need to respond in the absence of a clock, but even
after prodding I haven't heard just what exactly that application is.
In any case, the mere absence of a clock at the time reset is asserted
does not imply that, once the clock does arrive, things won't reset
properly.

KJ
 
jens

1. A free-running counter that has no unrecoverable illegal states
That's a pretty good description of something that does not require a reset,
whether synchronous or asynchronous. My implicit assumption is that there
is a functional requirement for a reset behaviour, bringing reset (or any
other signal) in to something that doesn't have a functional requirement for
that signal isn't a good practice either.

In many cases, that's true. However, a neat way to reset a
maximum-length LFSR is to reset (or set) only one flip-flop; then
there's no need for illegal state recovery. That's useful when
implementing the reset is expensive (either in terms of routing or
logic), or when the illegal state recovery is expensive (of course a
carefully designed LFSR can use shared logic between illegal state
recovery and end-of-sequence detection). Note that this assumes you're
either not concerned about an SEU
(http://en.wikipedia.org/wiki/Single-event_upset) or have another way
of recovering.
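
For what it's worth, the trick looks something like this for a 4-bit
maximal-length LFSR (just a sketch with made-up names; some tools will
warn about the partially reset register, and in simulation you may want
an initial value on the other bits):

-- sr : std_logic_vector(3 downto 0), implementing x^4 + x^3 + 1
-- (maximal length, so the only lock-up state is all zeros)
process (clk, reset_n)
begin
    if reset_n = '0' then
        sr(0) <= '1';  -- set just this one bit; the register can then never be all zeros
    elsif rising_edge(clk) then
        sr <= sr(2 downto 0) & (sr(3) xor sr(2));
    end if;
end process;
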
I guess I'm assuming here that this is not really another case of #1 where
reset is not really needed and that there is some implied handshaking or
feedback path of some sort between the CPU and your design. If we're
talking about resetting some registers that the CPU can write to but the CPU
code doesn't actually take advantage of that behaviour then I think we're
back to #1...i.e. reset is not really needed for that writable CPU port and,
upon reset, that port should really do nothing so again that port is
independent of reset...sync or async reset is then a moot point.

I'd also point out though, that if that writable CPU port is not spec'ed to
say that it should do something upon reset then one should not reset it. If
it is spec'ed to do something, then if the async reset is not properly
handled that port will eventually fail to reset properly.

A simple example of where a reset (of any kind) is needed is a
CPU-controlled FPGA output, where you don't want it to turn on unless
told to by the CPU (especially during the time between reset and when
the CPU has the first opportunity to write to it). As long as the CPU
isn't trying to write to the register during or immediately after
reset, there won't be any problems, regardless of what type of reset is
used.

However, the best design doesn't make assumptions about what is
happening elsewhere, but sometimes compromises can be made to reduce
cost or design time (as long as those compromises are made wisely). I
have a theorem: 1 hour at the desk = 1 day in the lab = 1 week in the
field. However, it seems that many managers don't want to spend an
extra hour now, and would rather take the risk of a week + travel
(and/or reprogram/recall) later.
 
KJ

jens said:
In many cases, that's true. However, a neat way to reset a
maximum-length LFSR is to reset (or set) only one flip-flop; then
there's no need for illegal state recovery. That's useful when
implementing the reset is expensive (either in terms of routing or
logic), or when the illegal state recovery is expensive (of course a
carefully designed LFSR can use shared logic between illegal state
recovery and end-of-sequence detection). Note that this assumes you're
either not concerned about an SEU
(http://en.wikipedia.org/wiki/Single-event_upset) or have another way
of recovering.
OK, but the context of the examples that you presented was...
"I can think of a couple of exceptions where a completely asynchronous
reset won't cause any problems"

I interpreted that (possibly incorrectly) to mean that the reset signal
was completely asynchronous to the clock...which would then include
the all-important trailing edge of reset. If the trailing edge of
reset is not sync'ed to the clock, then even that one bit in the LFSR
that you are setting could end up getting reset at the trailing edge,
if that trailing edge of reset violates a "reset inactive to clock
recovery" time specification that would be applicable. The same would
apply to your next example below as well.

If you didn't mean that the trailing edge of reset was asynchronous to
the clock then I still don't see any inherent advantage of the async
over the sync in this instance until you start considering actual
devices where one might have an advantage over the other by virtue of
"that's the way they designed the chip"
A simple example of where a reset (of any kind) is needed is a
CPU-controlled FPGA output, where you don't want it to turn on unless
told to by the CPU (especially during the time between reset and when
the CPU has the first opportunity to write to it). As long as the CPU
isn't trying to write to the register during or immediately after
reset, there won't be any problems, regardless of what type of reset is
used.
Same comments apply here as before.
However, the best design doesn't make assumptions about what is
happening elsewhere, but sometimes compromises can be made to reduce
cost or design time (as long as those compromises are made wisely). I
have a theorem: 1 hour at the desk = 1 day in the lab = 1 week in the
field. However, it seems that many managers don't want to spend an
extra hour now, and would rather take the risk of a week + travel
(and/or reprogram/recall) later.
That's why it can be good to keep secrets from the boss at times ;)

KJ
 
Ben Jones

Does the flancter count? XCELL37, which doesn't seem to be on the
Xilinx site anymore... and Google isn't helping me find the original
place I saw it :-(

I have to confess, I've never heard of it.

Then again, I don't do much flancting in most of my designs. :)

-Ben-
 
Mike Treseler

KJ said:
That's why it can be good to keep secrets from the boss at times ;)

Then there's the Homer Simpson method:

"Great idea boss, let me write that in my notebook."

It is really up to the designer to get it right
whatever the obstacles may be.

-- Mike Treseler
 
Analog_Guy

Eli Bendersky wrote:

What does this technique lack to be the perfect solution for resets in
FPGA designs? It seems that it evades all the common disadvantages of
conventional sync and async resets.
The only thing I see that this technique lacks is the ability to filter
(for noise, glitches) the incoming reset signal. This approach can
filter on the LO-HI transition of reset, but not on the HI-LO assertion.

So, if there is any noise or glitching on the reset input resulting in
a HI-LO transition, all logic in the FPGA is instantly reset (i.e.
asynchronous reset). Most designs I work with use some form of analog
circuitry to provide the main reset to the FPGA.

I do like the fact that reset will be applied even in the absence of a
clock. However, I have not yet implemented this technique because I am
not sure how to provide filtering on the HI-LO transition of the input
reset signal without requiring a clock. Can anyone help with this?
What are your ideas?
 
